Sample records for random recursive partitioning

  1. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are the random sampling of observations and the restricted set of input variables considered at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
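    The article's R workflow is not reproduced here; the following is a minimal Python sketch of the same ideas using scikit-learn, whose CART-style trees differ from conditional inference trees but still illustrate recursive binary partitioning and the forest's bootstrap-plus-random-subspace averaging. The dataset and parameter choices are illustrative assumptions.

```python
# Illustrative sketch only (the article itself uses R); scikit-learn grows
# CART-style trees rather than conditional inference trees.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# a single tree: recursive binary splits on the most informative variable,
# stopped here by a depth limit (one possible stopping criterion)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X.columns)))
print("single tree accuracy:", tree.score(X_te, y_te))

# a random forest: many trees grown on bootstrap samples with a random subset
# of predictors considered at each split, averaging away single-tree instability
forest = RandomForestClassifier(n_estimators=300, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)
print("random forest accuracy:", forest.score(X_te, y_te))
```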

  2. An Introduction to Recursive Partitioning: Rationale, Application, and Characteristics of Classification and Regression Trees, Bagging, and Random Forests

    ERIC Educational Resources Information Center

    Strobl, Carolin; Malley, James; Tutz, Gerhard

    2009-01-01

    Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…

  3. Recursions for the exchangeable partition function of the seedbank coalescent.

    PubMed

    Kurt, Noemi; Rafler, Mathias

    2017-04-01

    For the seedbank coalescent with mutation under the infinite alleles assumption, which describes the gene genealogy of a population with a strong seedbank effect subject to mutations, we study the distribution of the final partition with mutation. This generalizes the coalescent with freeze by Dong et al. (2007) to coalescents where ancestral lineages are blocked from coalescing. We derive an implicit recursion which we show to have a unique solution and give an interpretation in terms of absorption problems of a random walk. Moreover, we derive recursions for the distribution of the number of blocks in the final partition. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Scoring and staging systems using cox linear regression modeling and recursive partitioning.

    PubMed

    Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H

    2006-01-01

    Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
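    A minimal sketch of the core idea of fitting local Cox models inside partitions, assuming synthetic data and the lifelines package; the paper's full algorithms (cross-validation and amalgamation to reach a target number of stages) are not reproduced.

```python
# Sketch: one "tree" split on a categorical predictor, then a local Cox model per
# partition. Data are synthetic; this is not the paper's staging algorithm.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"age": rng.uniform(40, 80, n),
                   "stage_high": rng.integers(0, 2, n)})
# hazard depends on age more strongly in the "high stage" partition
rate = np.where(df.stage_high == 1,
                0.03 * np.exp(0.04 * (df.age - 60)),
                0.01 * np.exp(0.01 * (df.age - 60)))
df["time"] = rng.exponential(1.0 / rate)
df["event"] = (df.time <= 60).astype(int)      # administrative censoring at t = 60
df["time"] = df.time.clip(upper=60)

for leaf, grp in df.groupby("stage_high"):      # the single partitioning split
    cph = CoxPHFitter().fit(grp[["age", "time", "event"]],
                            duration_col="time", event_col="event")
    print(f"partition stage_high={leaf}:")
    cph.print_summary()
```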

  5. A Random Walk Approach to Query Informative Constraints for Clustering.

    PubMed

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
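    A short numerical sketch of the commute-time quantity the method builds on, using the standard identity C(i, j) = vol(G) (L+_ii + L+_jj - 2 L+_ij) with L+ the pseudoinverse of the graph Laplacian; the constraint-querying loop itself is not reproduced.

```python
# Commute time from the Laplacian pseudoinverse (standard identity); illustration only.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # small undirected adjacency graph

deg = A.sum(axis=1)
L = np.diag(deg) - A                  # graph Laplacian
L_pinv = np.linalg.pinv(L)            # the Laplacian eigenspectrum enters here
vol = deg.sum()

def commute_time(i, j):
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

# nodes joined by many short paths (0 and 1) have a smaller commute time
# than weakly connected ones (0 and 4)
print(commute_time(0, 1), commute_time(0, 4))
```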

  6. Spectroscopic diagnosis of laryngeal carcinoma using near-infrared Raman spectroscopy and random recursive partitioning ensemble techniques.

    PubMed

    Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei

    2009-06-01

    In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy associated with an ensemble recursive partitioning algorithm based on random forests for identifying cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.

  7. A Recursive Method for Calculating Certain Partition Functions.

    ERIC Educational Resources Information Center

    Woodrum, Luther; And Others

    1978-01-01

    Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
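    The paper's exact relation and its APL program are not reproduced here; the sketch below implements a recursion of the same general type, the standard canonical-ensemble relation for N independent fermions on L levels, Z_N = (1/N) sum_k (-1)^(k+1) S_k Z_{N-k} with S_k = sum_l exp(-k*beta*eps_l), plus the average energy via a numerical derivative.

```python
# Standard canonical recursion for independent fermions; illustrative, not the
# paper's exact relation or its APL implementation.
import numpy as np

def canonical_Z(eps, N, beta):
    """Partition functions Z_0..Z_N for fermions on levels `eps` at inverse temperature beta."""
    S = [np.exp(-k * beta * eps).sum() for k in range(N + 1)]   # S_k (S_0 unused)
    Z = [1.0]                                                   # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum((-1) ** (k + 1) * S[k] * Z[n - k] for k in range(1, n + 1)) / n)
    return Z

eps = np.linspace(0.0, 2.0, 8)        # L = 8 equally spaced energy levels
beta = 1.5
Z = canonical_Z(eps, N=4, beta=beta)

# average energy U = -d ln Z_N / d beta, here by a central finite difference
h = 1e-5
U = -(np.log(canonical_Z(eps, 4, beta + h)[-1]) -
      np.log(canonical_Z(eps, 4, beta - h)[-1])) / (2 * h)
print("Z_4 =", Z[-1], " U =", U)
```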

  8. Assessing the relative importance of correlates of loneliness in later life. Gaining insight using recursive partitioning.

    PubMed

    Ejlskov, Linda; Wulff, Jesper; Bøggild, Henrik; Kuh, Diana; Stafford, Mai

    2017-09-08

    Improving the design and targeting of interventions is important for alleviating loneliness among older adults. This requires identifying which correlates are the most important predictors of loneliness. This study demonstrates the use of recursive partitioning in exploring the characteristics and assessing the relative importance of correlates of loneliness in older adults. Using exploratory regression trees and random forests, we examined combinations and the relative importance of 42 correlates in relation to loneliness at age 68 among 2453 participants from the birth cohort study the MRC National Survey of Health and Development. Positive mental well-being, personal mastery, identifying the spouse as the closest confidant, being extrovert and informal social contact were the most important correlates of lower loneliness levels. Participation in organised groups and demographic correlates were poor identifiers of loneliness. The regression tree suggested that loneliness was not raised among those with poor mental wellbeing if they identified their partner as closest confidante and had frequent social contact. Recursive partitioning can identify which combinations of experiences and circumstances characterise high-risk groups. Poor mental wellbeing and sparse social contact emerged as especially important and classical demographic factors as insufficient in identifying high loneliness levels among older adults.

  9. Establishing Long-Term Efficacy in Chronic Disease: Use of Recursive Partitioning and Propensity Score Adjustment to Estimate Outcome in MS

    PubMed Central

    Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker

    2011-01-01

    Context: Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome measures in randomized trials, its impact on long-term function remains uncertain. Objective: Data from the 16-year Long-Term Follow-up study of interferon-beta-1b are used to assess the relationship between drug exposure and long-term disability in MS patients. Design/Setting: To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (using any weighting scheme) affected long-term outcome. The optimal cut-point used to define “high” or “low” exposure groups was chosen by the algorithm. Subsequent to verification of an exposure impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure: Long-term disability. Results: In these analyses, the same weighting scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment onset increased. Applying this scheme and using propensity stratification to further mitigate bias, high exposure had a consistently better clinical outcome compared with low exposure (Cox proportional hazard ratio = 0.30–0.42; p<0.0001). Conclusions: Early initiation and sustained use of interferon-beta-1b has a beneficial impact on long-term outcome in MS. Our analysis strategy provides a methodological framework for bias mitigation in the analysis of non-randomized clinical data. Trial Registration: Clinicaltrials.gov NCT00206635 PMID:22140424

  10. Multi-jagged: A scalable parallel spatial partitioning algorithm

    DOE PAGES

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
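    As a point of reference, the recursive coordinate bisection (RCB) baseline described above can be sketched in a few lines; the multi-jagged multi-section algorithm and its parallel implementation are not reproduced here.

```python
# Serial sketch of recursive coordinate bisection: recursively bisect each subdomain
# at the median of its longest dimension until the requested number of parts exists.
import numpy as np

def rcb(points, n_parts):
    """Return a list of index arrays, one per part (n_parts assumed a power of two)."""
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        new_parts = []
        for p in parts:
            sub = points[p]
            dim = np.ptp(sub, axis=0).argmax()      # longest dimension of this subdomain
            order = p[np.argsort(sub[:, dim])]
            mid = len(order) // 2                   # median split -> balanced halves
            new_parts += [order[:mid], order[mid:]]
        parts = new_parts
    return parts

pts = np.random.default_rng(0).random((10_000, 3))
parts = rcb(pts, 8)
print([len(p) for p in parts])        # near-equal part sizes (load balance)
```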

  11. Model-based recursive partitioning to identify risk clusters for metabolic syndrome and its components: findings from the International Mobility in Aging Study

    PubMed Central

    Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando

    2018-01-01

    Objective: Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design: Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting: IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants: Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures: We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variabilities in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results: 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components. Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion: MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203

  12. What contributes to perceived stress in later life? A recursive partitioning approach.

    PubMed

    Scott, Stacey B; Jackson, Brenda R; Bergeman, C S

    2011-12-01

    One possible explanation for the individual differences in outcomes of stress is the diversity of inputs that produce perceptions of being stressed. The current study examines how combinations of contextual features (e.g., social isolation, neighborhood quality, health problems, age discrimination, financial concerns, and recent life events) of later life contribute to overall feelings of stress. Recursive partitioning techniques (regression trees and random forests) were used to examine unique interrelations between predictors of perceived stress in a sample of 282 community-dwelling adults. Trees provided possible examples of equifinality (i.e., subsets of people with similar levels of perceived stress but different predictors) as well as the identification of contextual combinations that separated participants with very high and very low perceived stress. Random forest analyses aggregated across many trees based on permuted versions of the data and predictors; loneliness, financial strain, neighborhood strain, ageism, and to some extent life events emerged as important predictors. Interviews with a subsample of participants provided both thick description of the complex relationships identified in the trees and additional risks not appearing in the survey results. Together, the analyses highlight what may be missed when stress is used as a simple unidimensional construct and can guide differential intervention efforts.

  13. What contributes to perceived stress in later life? A recursive partitioning approach

    PubMed Central

    Scott, Stacey B.; Jackson, Brenda R.; Bergeman, C. S.

    2011-01-01

    One possible explanation for the individual differences in outcomes of stress is the diversity of inputs that produce perceptions of being stressed. The current study examines how combinations of contextual features (e.g., social isolation, neighborhood quality, health problems, age discrimination, financial concerns, and recent life events) of later life contribute to overall feelings of stress. Recursive partitioning techniques (regression trees and random forests) were used to examine unique interrelations between predictors of perceived stress in a sample of 282 community-dwelling adults. Trees provided possible examples of equifinality (i.e., subsets of people with similar levels of perceived stress but different predictors) as well as the identification of contextual combinations that separated participants with very high and very low perceived stress. Random forest analyses aggregated across many trees based on permuted versions of the data and predictors; loneliness, financial strain, neighborhood strain, ageism, and to some extent life events emerged as important predictors. Interviews with a subsample of participants provided both thick description of the complex relationships identified in the trees and additional risks not appearing in the survey results. Together, the analyses highlight what may be missed when stress is used as a simple unidimensional construct and can guide differential intervention efforts. PMID:21604885

  14. Statistically extracted fundamental watershed variables for estimating the loads of total nitrogen in small streams

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia

    2016-01-01

    Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small (<585 km2) watersheds to determine which variables are fundamentally important to the estimation of annual loads of total nitrogen. Initial analysis led to the splitting of watersheds into three groups based on predominant land use (agricultural, developed, and undeveloped). Nitrogen application, agricultural and developed land area, and impervious or developed land in the 100-m stream buffer were commonly extracted variables by both recursive partitioning and random forest regression. A series of multiple linear regression equations utilizing the extracted variables were created and applied to the watersheds. As few as three variables explained as much as 76 % of the variability in total nitrogen loads for watersheds with predominantly agricultural land use. Catchment-scale national maps were generated to visualize the total nitrogen loads and yields across the USA. The estimates provided by these models can inform water managers and help identify areas where more in-depth monitoring may be beneficial.
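    A minimal sketch of the two-stage workflow described above, with synthetic data standing in for the geospatial variables: a random forest ranks candidate variables by importance, and a small multiple linear regression is then fit on the top-ranked ones.

```python
# Variable extraction by random forest importance, followed by a small linear model.
# Data, variable names, and sizes are illustrative stand-ins for the study's variables.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 636, 85                                   # watersheds x candidate variables
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"var{i}" for i in range(p)])
y = 2.0 * X["var0"] + 1.5 * X["var1"] + 1.0 * X["var2"] + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
top3 = X.columns[np.argsort(rf.feature_importances_)[::-1][:3]]
print("extracted variables:", list(top3))

lm = LinearRegression().fit(X[top3], y)           # multiple linear regression on the top 3
print("R^2 of the 3-variable linear model:", round(lm.score(X[top3], y), 2))
```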

  15. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.

  16. ReHypar: A Recursive Hybrid Chunk Partitioning Method Using NAND-Flash Memory SSD

    PubMed Central

    Park, Sung-Soon; Lim, Cheol-Su

    2014-01-01

    Due to the rapid development of flash memory, SSD is considered to be the replacement of HDD in the storage market. Although SSD retains several promising characteristics, such as high random I/O performance and nonvolatility, its high cost per capacity is the main obstacle to replacing HDD in all storage solutions. An alternative is to provide a hybrid structure where a small portion of SSD address space is combined with the much larger HDD address space. In such a structure, maximizing the space utilization of SSD in a cost-effective way is extremely important to generate high I/O performance. We developed ReHypar (recursive hybrid chunk partitioning), which improves the space utilization of SSD in the hybrid structure. The first objective of ReHypar is to mitigate the fragmentation overhead of SSD address space by reusing the remaining free space of I/O units as much as possible. Furthermore, ReHypar allows defining several logical data sections in SSD address space, with each of those sections being configured with a different I/O unit. We integrated ReHypar with ext2 and ext4 and evaluated it using two public benchmarks, IOzone and Postmark. PMID:24987741

  17. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S^(-1) = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
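    A numerical sketch of the lowest-order iterative refinement cited above, Z <- Z (I + (I - Z^T S Z)/2), written for a real symmetric S; the recursive decomposition of S and the sparse matrix-matrix kernels are omitted.

```python
# Lowest-order iterative refinement toward Z with Z^T S Z = I, so that Z Z^T = S^{-1};
# dense and real symmetric here, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
S = B @ B.T + 50 * np.eye(50)                    # symmetric positive definite test matrix

Z = np.eye(50) / np.sqrt(np.linalg.norm(S, 2))   # starting guess with ||I - Z^T S Z|| < 1
for _ in range(30):
    delta = np.eye(50) - Z.T @ S @ Z
    Z = Z @ (np.eye(50) + 0.5 * delta)           # refinement step

print(np.linalg.norm(Z @ Z.T - np.linalg.inv(S)))   # Z Z^T approximates S^{-1}
```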

  18. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  19. Subarachnoid hemorrhage admissions retrospectively identified using a prediction model

    PubMed Central

    McIntyre, Lauralyn; Fergusson, Dean; Turgeon, Alexis; dos Santos, Marlise P.; Lum, Cheemun; Chassé, Michaël; Sinclair, John; Forster, Alan; van Walraven, Carl

    2016-01-01

    Objective: To create an accurate prediction model using variables collected in widely available health administrative data records to identify hospitalizations for primary subarachnoid hemorrhage (SAH). Methods: A previously established complete cohort of consecutive primary SAH patients was combined with a random sample of control hospitalizations. Chi-square recursive partitioning was used to derive and internally validate a model to predict the probability that a patient had primary SAH (due to aneurysm or arteriovenous malformation) using health administrative data. Results: A total of 10,322 hospitalizations with 631 having primary SAH (6.1%) were included in the study (5,122 derivation, 5,200 validation). In the validation patients, our recursive partitioning algorithm had a sensitivity of 96.5% (95% confidence interval [CI] 93.9–98.0), a specificity of 99.8% (95% CI 99.6–99.9), and a positive likelihood ratio of 483 (95% CI 254–879). In this population, patients meeting criteria for the algorithm had a probability of 45% of truly having primary SAH. Conclusions: Routinely collected health administrative data can be used to accurately identify hospitalized patients with a high probability of having a primary SAH. This algorithm may allow, upon validation, an easy and accurate method to create validated cohorts of primary SAH from either ruptured aneurysm or arteriovenous malformation. PMID:27629096
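    As a quick arithmetic check (not part of the study), the quoted positive likelihood ratio follows directly from the quoted sensitivity and specificity:

```python
# LR+ = sensitivity / (1 - specificity), using the point estimates reported above.
sensitivity, specificity = 0.965, 0.998
print(sensitivity / (1.0 - specificity))   # ~482.5, consistent with the quoted 483 given rounding
```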

  20. CD process control through machine learning

    NASA Astrophysics Data System (ADS)

    Utzny, Clemens

    2016-10-01

    For the specific requirements of the 14 nm and 20 nm site applications, a new CD map approach was developed at the AMTC. This approach relies on a well-established machine learning technique called recursive partitioning. Recursive partitioning is a powerful technique which creates a decision tree by successively testing whether the quantity of interest can be explained by one of the supplied covariates. The test performed is generally a statistical test with a pre-supplied significance level. Once the test indicates a significant association between the variable of interest and a covariate, a split is performed at a threshold value which minimizes the variation within the newly attained groups. This partitioning recurs until either no significant association can be detected or the resulting subgroup size falls below a pre-supplied level.
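    A generic sketch of the recursion described above (significance test, then a variance-minimizing threshold split, repeated until no significant association remains or the subgroup becomes too small); the covariates, response, and parameter values are illustrative, not AMTC's CD-map code.

```python
# Significance-test-driven recursive partitioning, in the spirit described above.
import numpy as np
from scipy.stats import pearsonr

def partition(y, X, alpha=0.05, min_size=30, depth=0):
    if len(y) <= 2 * min_size:
        return                                           # subgroup too small to split
    # covariate with the most significant association with the quantity of interest
    pvals = [pearsonr(X[:, j], y)[1] for j in range(X.shape[1])]
    j = int(np.argmin(pvals))
    if pvals[j] >= alpha:
        return                                           # no significant association left
    # split threshold minimizing the summed within-group variation
    order = np.argsort(X[:, j])
    best_i = min(range(min_size, len(y) - min_size),
                 key=lambda i: y[order[:i]].var() * i + y[order[i:]].var() * (len(y) - i))
    thr = X[order[best_i], j]
    print("  " * depth + f"split on covariate {j} at {thr:.2f} (p={pvals[j]:.1e})")
    left, right = order[:best_i], order[best_i:]
    partition(y[left], X[left], alpha, min_size, depth + 1)
    partition(y[right], X[right], alpha, min_size, depth + 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                        # toy covariates
y = np.where(X[:, 2] > 0.3, 1.0, 0.0) + 0.1 * rng.normal(size=500)   # toy CD-like response
partition(y, X)
```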

  1. Scalable detection of statistically significant communities and hierarchies, using message passing for modularity

    PubMed Central

    Zhang, Pan; Moore, Cristopher

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory ‘‘communities’’ in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096

  2. Venous tree separation in the liver: graph partitioning using a non-Ising model.

    PubMed

    O'Donnell, Thomas; Kaftan, Jens N; Schuh, Andreas; Tietjen, Christian; Soza, Grzegorz; Aach, Til

    2011-01-01

    Entangled tree-like vascular systems are commonly found in the body (e.g., in the peripheries and lungs). Separation of these systems in medical images may be formulated as a graph partitioning problem given an imperfect segmentation and specification of the tree roots. In this work, we show that the ubiquitous Ising-model approaches (e.g., Graph Cuts, Random Walker) are not appropriate for tackling this problem and propose a novel method based on recursive minimal paths for doing so. To motivate our method, we focus on the intertwined portal and hepatic venous systems in the liver. Separation of these systems is critical for liver intervention planning, in particular when resection is involved. We apply our method to 34 clinical datasets, each containing well over a hundred vessel branches, demonstrating its effectiveness.

  3. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.

  4. Prognostic Indexes for Brain Metastases: Which Is the Most Powerful?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arruda Viani, Gustavo, E-mail: gusviani@gmail.com; Bernardes da Silva, Lucas Godoi; Stefano, Eduardo Jose

    Purpose: The purpose of the present study was to compare the prognostic indexes (PIs) of patients with brain metastases (BMs) treated with whole brain radiotherapy (WBRT) using an artificial neural network. This analysis is important, because it evaluates the prognostic power of each PI to guide clinical decision-making and outcomes research. Methods and Materials: A retrospective prognostic study was conducted of 412 patients with BMs who underwent WBRT between April 1998 and March 2010. The eligibility criteria for patients included having undergone WBRT or WBRT plus neurosurgery. The data were analyzed using the artificial neural network. The input neural data consisted of all prognostic factors included in the 5 PIs (recursive partitioning analysis, graded prognostic assessment [GPA], basic score for BMs, Rotterdam score, and Germany score). The data set was randomly divided into 300 training and 112 testing examples for survival prediction. All 5 PIs were compared using our database of 412 patients with BMs. The sensitivity of the 5 indexes to predict survival according to their input variables was determined statistically using receiver operating characteristic curves. The importance of each variable from each PI was subsequently evaluated. Results: The overall 1-, 2-, and 3-year survival rates were 22%, 10.2%, and 5.1%, respectively. All classes of PIs were significantly associated with survival (recursive partitioning analysis, P < .0001; GPA, P < .0001; basic score for BMs, P = .002; Rotterdam score, P = .001; and Germany score, P < .0001). Comparing the areas under the curves, the GPA was statistically the most sensitive in predicting survival (GPA, 86%; recursive partitioning analysis, 81%; basic score for BMs, 79%; Rotterdam score, 73%; and Germany score, 77%; P < .001). Among the variables included in each PI, the performance status and presence of extracranial metastases were the most important factors. Conclusion: A variety of prognostic models describe the survival of patients with BMs to a more or less satisfactory degree. Among the 5 PIs evaluated in the present study, the GPA was the most powerful in predicting survival. Additional studies should include emerging biologic prognostic factors to improve the sensitivity of these PIs.

  5. Tear fluid proteomics multimarkers for diabetic retinopathy screening

    PubMed Central

    2013-01-01

    Background: The aim of the project was to develop a novel method for diabetic retinopathy screening based on the examination of tear fluid biomarker changes. In order to evaluate the usability of protein biomarkers for pre-screening purposes several different approaches were used, including machine learning algorithms. Methods: All persons involved in the study had diabetes. Diabetic retinopathy (DR) was diagnosed by capturing 7-field fundus images, evaluated by two independent ophthalmologists. 165 eyes were examined (from 119 patients), 55 were diagnosed healthy and 110 images showed signs of DR. Tear samples were taken from all eyes and state-of-the-art nano-HPLC coupled ESI-MS/MS mass spectrometry protein identification was performed on all samples. Applicability of protein biomarkers was evaluated by six different optimally parameterized machine learning algorithms: Support Vector Machine, Recursive Partitioning, Random Forest, Naive Bayes, Logistic Regression, K-Nearest Neighbor. Results: Out of the six investigated machine learning algorithms the result of Recursive Partitioning proved to be the most accurate. The performance of the system realizing the above algorithm reached 74% sensitivity and 48% specificity. Conclusions: Protein biomarkers selected and classified with machine learning algorithms alone are at present not recommended for screening purposes because of low specificity and sensitivity values. This tool can be potentially used to improve the results of image processing methods as a complementary tool in automatic or semiautomatic systems. PMID:23919537

  6. Prediction of mutagenic toxicity by combination of Recursive Partitioning and Support Vector Machines.

    PubMed

    Liao, Quan; Yao, Jianhua; Yuan, Shengang

    2007-05-01

    The study of the prediction of toxicity is very important and necessary because measurement of toxicity is typically time-consuming and expensive. In this paper, the Recursive Partitioning (RP) method was used to select descriptors. RP and Support Vector Machines (SVM) were then used to construct structure-toxicity relationship models, an RP model and an SVM model, respectively. The performances of the two models are different. The prediction accuracies of the RP model are 80.2% for mutagenic compounds in MDL's toxicity database, 83.4% for compounds in the CMC, and 84.9% for agrochemicals in an in-house database, respectively. Those of the SVM model are 81.4%, 87.0%, and 87.3%, respectively.

  7. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.

  8. P-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Biswas, R.; Simon, H.D.

    1997-05-01

    Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors have introduced earlier the sequential version of an inertial spectral partitioner called HARP which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck that hinders the advancement of computational science and engineering for dynamically-changing real-world applications.
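    One spectral bisection step of the kind RSB applies recursively can be sketched as follows; HARP's inertial formulation and the MPI parallelization are not reproduced here.

```python
# Split a graph by the Fiedler vector (eigenvector of the Laplacian's second-smallest
# eigenvalue); a toy two-block graph, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same_block = (i < 20) == (j < 20)
        if rng.random() < (0.4 if same_block else 0.02):
            A[i, j] = A[j, i] = 1.0               # two loosely connected 20-node blocks

L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                           # second-smallest eigenvalue's eigenvector
part = fiedler > np.median(fiedler)               # median split -> two equal-size parts
print("block-0 nodes in part 1:", int(part[:20].sum()),
      "| block-1 nodes in part 1:", int(part[20:].sum()))
```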

  9. Application of Recursive Partitioning to Derive and Validate a Claims-Based Algorithm for Identifying Keratinocyte Carcinoma (Nonmelanoma Skin Cancer).

    PubMed

    Chan, An-Wen; Fung, Kinwah; Tran, Jennifer M; Kitchen, Jessica; Austin, Peter C; Weinstock, Martin A; Rochon, Paula A

    2016-10-01

    Keratinocyte carcinoma (nonmelanoma skin cancer) accounts for substantial burden in terms of high incidence and health care costs but is excluded by most cancer registries in North America. Administrative health insurance claims databases offer an opportunity to identify these cancers using diagnosis and procedural codes submitted for reimbursement purposes. To apply recursive partitioning to derive and validate a claims-based algorithm for identifying keratinocyte carcinoma with high sensitivity and specificity. Retrospective study using population-based administrative databases linked to 602 371 pathology episodes from a community laboratory for adults residing in Ontario, Canada, from January 1, 1992, to December 31, 2009. The final analysis was completed in January 2016. We used recursive partitioning (classification trees) to derive an algorithm based on health insurance claims. The performance of the derived algorithm was compared with 5 prespecified algorithms and validated using an independent academic hospital clinic data set of 2082 patients seen in May and June 2011. Sensitivity, specificity, positive predictive value, and negative predictive value using the histopathological diagnosis as the criterion standard. We aimed to achieve maximal specificity, while maintaining greater than 80% sensitivity. Among 602 371 pathology episodes, 131 562 (21.8%) had a diagnosis of keratinocyte carcinoma. Our final derived algorithm outperformed the 5 simple prespecified algorithms and performed well in both community and hospital data sets in terms of sensitivity (82.6% and 84.9%, respectively), specificity (93.0% and 99.0%, respectively), positive predictive value (76.7% and 69.2%, respectively), and negative predictive value (95.0% and 99.6%, respectively). Algorithm performance did not vary substantially during the 18-year period. This algorithm offers a reliable mechanism for ascertaining keratinocyte carcinoma for epidemiological research in the absence of cancer registry data. Our findings also demonstrate the value of recursive partitioning in deriving valid claims-based algorithms.

  10. Identifying risk profiles for childhood obesity using recursive partitioning based on individual, familial, and neighborhood environment factors.

    PubMed

    Van Hulst, Andraea; Roy-Gagnon, Marie-Hélène; Gauvin, Lise; Kestens, Yan; Henderson, Mélanie; Barnett, Tracie A

    2015-02-15

    Few studies consider how risk factors within multiple levels of influence operate synergistically to determine childhood obesity. We used recursive partitioning analysis to identify unique combinations of individual, familial, and neighborhood factors that best predict obesity in children, and tested whether these predict 2-year changes in body mass index (BMI). Data were collected in 2005-2008 and in 2008-2011 for 512 Quebec youth (8-10 years at baseline) with a history of parental obesity (QUALITY study). CDC age- and sex-specific BMI percentiles were computed and children were considered obese if their BMI was ≥95th percentile. Individual (physical activity and sugar-sweetened beverage intake), familial (household socioeconomic status and measures of parental obesity including both BMI and waist circumference), and neighborhood (disadvantage, prestige, and presence of parks, convenience stores, and fast food restaurants) factors were examined. Recursive partitioning, a method that generates a classification tree predicting obesity based on combined exposure to a series of variables, was used. Associations between resulting varying risk group membership and BMI percentile at baseline and 2-year follow up were examined using linear regression. Recursive partitioning yielded 7 subgroups with a prevalence of obesity equal to 8%, 11%, 26%, 28%, 41%, 60%, and 63%, respectively. The 2 highest risk subgroups comprised i) children not meeting physical activity guidelines, with at least one BMI-defined obese parent and 2 abdominally obese parents, living in disadvantaged neighborhoods without parks and, ii) children with these characteristics, except with access to ≥1 park and with access to ≥1 convenience store. Group membership was strongly associated with BMI at baseline, but did not systematically predict change in BMI. Findings support the notion that obesity is predicted by multiple factors in different settings and provide some indications of potentially obesogenic environments. Alternate group definitions as well as longer duration of follow up should be investigated to predict change in obesity.

  11. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.

  12. Efficient method for computing the electronic transport properties of a multiterminal system

    NASA Astrophysics Data System (ADS)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.

  13. Discovery of novel SERCA inhibitors by virtual screening of a large compound library.

    PubMed

    Elam, Christopher; Lape, Michael; Deye, Joel; Zultowsky, Jodie; Stanton, David T; Paula, Stefan

    2011-05-01

    Two screening protocols based on recursive partitioning and computational ligand docking methodologies, respectively, were employed for virtual screens of a compound library with 345,000 entries for novel inhibitors of the enzyme sarco/endoplasmic reticulum calcium ATPase (SERCA), a potential target for cancer chemotherapy. A total of 72 compounds that were predicted to be potential inhibitors of SERCA were tested in bioassays and 17 displayed inhibitory potencies at concentrations below 100 μM. The majority of these inhibitors were composed of two phenyl rings tethered to each other by a short link of one to three atoms. Putative interactions between SERCA and the inhibitors were identified by inspection of docking-predicted poses and some of the structural features required for effective SERCA inhibition were determined by analysis of the classification pattern employed by the recursive partitioning models. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  14. Differential diagnosis of jaw pain using informatics technology.

    PubMed

    Nam, Y; Kim, H-G; Kho, H-S

    2018-05-21

    This study aimed to deduce evidence-based clinical clues that differentiate temporomandibular disorders (TMD)-mimicking conditions from genuine TMD by text mining using natural language processing (NLP) and recursive partitioning. We compared the medical records of 29 patients diagnosed with TMD-mimicking conditions and 290 patients diagnosed with genuine TMD. Chief complaints and medical histories were preprocessed via NLP to compare the frequency of word usage. In addition, recursive partitioning was used to deduce the optimal size of mouth opening, which could differentiate TMD-mimicking from genuine TMD groups. The prevalence of TMD-mimicking conditions was more evenly distributed across all age groups and showed a nearly equal gender ratio, which was significantly different from genuine TMD. TMD-mimicking conditions were caused by inflammation, infection, hereditary disease and neoplasm. Patients with TMD-mimicking conditions frequently used "mouth opening limitation" (P < .001), but less commonly used words such as "noise" (P < .001) and "temporomandibular joint" (P < .001) than patients with genuine TMD. A diagnostic classification tree on the basis of recursive partitioning suggested that 12.0 mm of comfortable mouth opening and 26.5 mm of maximum mouth opening were deduced as the most optimal mouth-opening cutoff sizes. When the combined analyses were performed based on both the text mining and clinical examination data, the predictive performance of the model was 96.6% with 69.0% sensitivity and 99.3% specificity in predicting TMD-mimicking conditions. In conclusion, this study showed that AI technology-based methods could be applied in the field of differential diagnosis of orofacial pain disorders. © 2018 John Wiley & Sons Ltd.

  15. Predicting cannabis abuse screening test (CAST) scores: a recursive partitioning analysis using survey data from Czech Republic, Italy, the Netherlands and Sweden.

    PubMed

    Blankers, Matthijs; Frijns, Tom; Belackova, Vendula; Rossi, Carla; Svensson, Bengt; Trautmann, Franz; van Laar, Margriet

    2014-01-01

    Cannabis is Europe's most commonly used illicit drug. Some users do not develop dependence or other problems, whereas others do. Many factors are associated with the occurrence of cannabis-related disorders. This makes it difficult to identify key risk factors and markers to profile at-risk cannabis users using traditional hypothesis-driven approaches. Therefore, the use of a data-mining technique called binary recursive partitioning is demonstrated in this study by creating a classification tree to profile at-risk users. 59 variables on cannabis use and drug market experiences were extracted from an internet-based survey dataset collected in four European countries (Czech Republic, Italy, Netherlands and Sweden), n = 2617. These 59 potential predictors of problematic cannabis use were used to partition individual respondents into subgroups with low and high risk of having a cannabis use disorder, based on their responses on the Cannabis Abuse Screening Test. Both a generic model for the four countries combined and four country-specific models were constructed. Of the 59 variables included in the first analysis step, only three variables were required to construct a generic partitioning model to classify high risk cannabis users with 65-73% accuracy. Based on the generic model for the four countries combined, the highest risk for cannabis use disorder is seen in participants reporting a cannabis use on more than 200 days in the last 12 months. In comparison to the generic model, the country-specific models led to modest, non-significant improvements in classification accuracy, with an exception for Italy (p = 0.01). Using recursive partitioning, it is feasible to construct classification trees based on only a few variables with acceptable performance to classify cannabis users into groups with low or high risk of meeting criteria for cannabis use disorder. The number of cannabis use days in the last 12 months is the most relevant variable. The identified variables may be considered for use in future screeners for cannabis use disorders.

  16. Factors affecting exits from homelessness among persons with serious mental illness and substance use disorders

    PubMed Central

    Gabrielian, Sonya; Bromley, Elizabeth; Hellemann, Gerhard S.; Kern, Robert S.; Goldenson, Nicholas I.; Danley, Megan E.; Young, Alexander S.

    2015-01-01

    Objective: We sought to understand the housing trajectories of homeless consumers with serious mental illness (SMI) and co-occurring substance use disorders (SUD) and to identify factors that best predicted achievement of independent housing. Methods: Using administrative data, we identified homeless persons with SMI and SUD admitted to a residential rehabilitation program from 12/2008-11/2011. On a random sample (n=36), we assessed a range of potential predictors of housing outcomes, including symptoms, cognition, and social/community supports. We used the Residential Time-Line Follow-Back (TLFB) Inventory to gather housing histories since exiting rehabilitation and identify housing outcomes. We used recursive partitioning to identify variables that best differentiated participants by these outcomes. Results: We identified three housing trajectories: stable housing (n=14); unstable housing (n=15); and continuously engaged in housing services (n=7). Using recursive partitioning, two variables (the symbol digit modalities test (SDMT), a neurocognitive speed-of-processing measure, and the Behavior and Symptom Identification Scale (BASIS)-relationships subscale, which quantifies symptoms affecting relationships) were sufficient to capture the information provided by 26 predictors to classify participants by housing outcome. Participants predicted to continuously engage in services had impaired processing speeds (SDMT score<32.5). Among consumers with SDMT score≥32.5, those predicted to achieve stable housing had fewer interpersonal symptoms (BASIS-relationships score<0.81) than those predicted to have unstable housing. This model explains 57% of this sample's variability and 14% of this population's variability in housing outcomes. Conclusion: As cognition and symptoms influencing relationships predicted housing outcomes for homeless adults with SMI and SUD, cognitive and social skills trainings may be useful for this population. PMID:25919839

  17. TREAT (TREe-based Association Test)

    Cancer.gov

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structured model by recursively partitioning the data. An ultra-fast algorithm is designed to evaluate the significance of association between a candidate gene and the disease outcome.

  18. Towards rigorous analysis of the Levitov-Mirlin-Evers recursion

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Kupiainen, A.; Webb, C.

    2016-12-01

    This paper aims to develop a rigorous asymptotic analysis of an approximate renormalization group recursion for the inverse participation ratios P_q of critical power-law random band matrices. The recursion goes back to the work by Mirlin and Evers (2000 Phys. Rev. B 62 7920) and earlier works by Levitov (1990 Phys. Rev. Lett. 64 547, 1999 Ann. Phys. 8 697-706) and aims to describe the ensuing multifractality of the eigenvectors of such matrices. We point out both similarities and dissimilarities between the LME recursion and those appearing in the theory of multiplicative cascades and branching random walks and show that the methods developed in those fields can be adapted to the present case. In particular, the LME recursion is shown to exhibit a phase transition, which we expect is a freezing transition, where the role of temperature is played by the exponent q. However, the LME recursion has features that make its rigorous analysis considerably harder, and we point out several open problems for further study.

  19. Recurrence relations in one-dimensional Ising models.

    PubMed

    da Conceição, C M Silva; Maia, R N P

    2017-09-01

    The exact finite-size partition function for the nonhomogeneous one-dimensional (1D) Ising model is found through an approach using algebra operators. Specifically, in this paper we show that the partition function can be computed through a trace from a linear second-order recurrence relation with nonconstant coefficients in matrix form. A relation between the finite-size partition function and the generalized Lucas polynomials is found for the simple homogeneous model, thus establishing a recursive formula for the partition function. This is an important property and it might indicate the possible existence of recurrence relations in higher-dimensional Ising models. Moreover, assuming quenched disorder for the interactions within the model, the quenched averaged magnetic susceptibility displays a nontrivial behavior due to changes in the ferromagnetic concentration probability.
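    For the homogeneous model mentioned above, the textbook transfer-matrix relations make such a second-order recurrence explicit (the nonhomogeneous, nonconstant coefficients treated in the paper are not reproduced here):

```latex
% Homogeneous 1D Ising chain with coupling J and field h, periodic boundary conditions.
\[
  Z_N \;=\; \operatorname{Tr} T^{N}, \qquad
  T \;=\;
  \begin{pmatrix}
    e^{\beta (J+h)} & e^{-\beta J} \\
    e^{-\beta J}    & e^{\beta (J-h)}
  \end{pmatrix},
\]
\[
  \text{so, by the Cayley--Hamilton theorem,}\qquad
  Z_{N} \;=\; (\operatorname{Tr} T)\, Z_{N-1} \;-\; (\det T)\, Z_{N-2},
\]
% a linear second-order recurrence whose solutions are the generalized Lucas
% polynomials evaluated at Tr T and det T.
```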

  20. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.

  1. Item-focussed Trees for the Identification of Items in Differential Item Functioning.

    PubMed

    Tutz, Gerhard; Berger, Moritz

    2016-09-01

    A novel method for the identification of differential item functioning (DIF) by means of recursive partitioning techniques is proposed. We assume an extension of the Rasch model that allows for DIF being induced by an arbitrary number of covariates for each item. Recursive partitioning on the item level results in one tree for each item and leads to simultaneous selection of items and variables that induce DIF. For each item, it is possible to detect groups of subjects with different item difficulties, defined by combinations of characteristics that are not pre-specified. The way a DIF item is determined by covariates is visualized in a small tree and therefore easily accessible. An algorithm is proposed that is based on permutation tests. Various simulation studies, including the comparison with traditional approaches to identify items with DIF, show the applicability and the competitive performance of the method. Two applications illustrate the usefulness and the advantages of the new method.

  2. Detection of Problem Gambler Subgroups Using Recursive Partitioning

    ERIC Educational Resources Information Center

    Markham, Francis; Young, Martin; Doran, Bruce

    2013-01-01

    The multivariate socio-demographic risk factors for problem gambling have been well documented. While this body of research is valuable in determining risk factors aggregated across various populations, the majority of studies tend not to specifically identify particular subgroups of problem gamblers based on the interaction between variables. The…

  3. Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach

    NASA Astrophysics Data System (ADS)

    Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.

    2011-03-01

    We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations as well as thermodynamic potentials and heat capacity analytically and numerically in the whole temperature range.
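
    The abstract does not reproduce the recursion itself; for orientation, the well-known ideal-gas version of such a canonical recursion is Z_N(beta) = (1/N) * sum_{k=1..N} Z_1(k*beta) * Z_{N-k}(beta) with Z_0 = 1, where Z_1 is the single-particle partition function. The sketch below implements only this ideal-gas relation for a truncated cubic-box spectrum, as a simpler stand-in for the interacting-gas relation derived in the paper.

        ## Canonical partition function of N ideal bosons via the standard recursion.
        bose_Z <- function(N, beta, levels) {
          Z1 <- function(b) sum(exp(-b * levels))   # single-particle partition function
          Z  <- numeric(N + 1); Z[1] <- 1           # Z[1] stores Z_0 = 1
          for (n in 1:N) {
            Z[n + 1] <- sum(sapply(1:n, function(k) Z1(k * beta) * Z[n - k + 1])) / n
          }
          Z[N + 1]
        }

        ## Truncated cubic-box spectrum, energies in units of hbar^2 pi^2 / (2 m L^2).
        grid   <- expand.grid(nx = 1:5, ny = 1:5, nz = 1:5)
        levels <- with(grid, nx^2 + ny^2 + nz^2)
        bose_Z(N = 20, beta = 0.5, levels = levels)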

  4. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.

  5. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.

  6. High Performance Computing Based Parallel Hierarchical Modal Association Clustering (HPAR HMAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patlolla, Dilip R; Surendran Nair, Sujithkumar; Graves, Daniel A.

    For many applications, clustering is a crucial step in order to gain insight into the makeup of a dataset. The best approach to a given problem often depends on a variety of factors, such as the size of the dataset, time restrictions, and soft clustering requirements. The HMAC algorithm seeks to combine the strengths of 2 particular clustering approaches: model-based and linkage-based clustering. One particular weakness of HMAC is its computational complexity. HMAC is not practical for mega-scale data clustering. For high-definition imagery, a user would have to wait months or years for a result; for a 16-megapixel image, the estimated runtime skyrockets to over a decade! To improve the execution time of HMAC, it is reasonable to consider a multi-core implementation that utilizes available system resources. An existing implementation (Ray and Cheng 2014) divides the dataset into N partitions - one for each thread - prior to executing the HMAC algorithm. This implementation benefits from 2 types of optimization: parallelization and divide-and-conquer. By running each partition in parallel, the program is able to accelerate computation by utilizing more system resources. Although the parallel implementation provides considerable improvement over the serial HMAC, it still suffers from poor computational complexity, O(N²). Once the maximum number of cores on a system is exhausted, the program exhibits slower behavior. We now consider a modification to HMAC that involves a recursive partitioning scheme. Our modification aims to exploit the divide-and-conquer benefits seen by the parallel HMAC implementation. At each level in the recursion tree, partitions are divided into 2 sub-partitions until a threshold size is reached. When the partition can no longer be divided without falling below the threshold size, the base HMAC algorithm is applied. This results in a significant speedup over the parallel HMAC.
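
    A generic sketch of the recursive partitioning scheme described above, with kmeans standing in both for the cheap 2-way split and for the (far more expensive) base clustering step; this illustrates the divide-and-conquer idea only and is not the HPAR HMAC implementation.

        ## Split the data into two sub-partitions until a threshold size is reached,
        ## then apply a base clustering routine to each leaf partition.
        recursive_cluster <- function(X, threshold = 500,
                                      base = function(X) kmeans(X, centers = 3)) {
          if (nrow(X) <= threshold) return(list(base(X)))     # leaf: run base algorithm
          halves <- kmeans(X, centers = 2)$cluster            # cheap 2-way split
          c(recursive_cluster(X[halves == 1, , drop = FALSE], threshold, base),
            recursive_cluster(X[halves == 2, , drop = FALSE], threshold, base))
        }

        set.seed(1)
        X <- matrix(rnorm(4000), ncol = 2)                    # 2000 toy points
        leaves <- recursive_cluster(X, threshold = 500)
        length(leaves)                                        # number of leaf clusterings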

  7. Beating the Odds: Trees to Success in Different Countries

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Marchant, Gregory J.

    2017-01-01

    A recursive partitioning model approach in the form of classification and regression trees (CART) was used with 2012 PISA data for five countries (Canada, Finland, Germany, Singapore-China, and the United States). The objective of the study was to determine demographic and educational variables that differentiated between low SES students who were…

  8. Predictive Value of Morphological Features in Patients with Autism versus Normal Controls

    ERIC Educational Resources Information Center

    Ozgen, H.; Hellemann, G. S.; de Jonge, M. V.; Beemer, F. A.; van Engeland, H.

    2013-01-01

    We investigated the predictive power of morphological features in 224 autistic patients and 224 matched-pairs controls. To assess the relationship between the morphological features and autism, we used receiver operating characteristic (ROC) curves. In addition, we used recursive partitioning (RP) to determine a specific pattern of abnormalities that is…

  9. Efficiently Exploring Multilevel Data with Recursive Partitioning

    ERIC Educational Resources Information Center

    Martin, Daniel P.; von Oertzen, Timo; Rimm-Kaufman, Sara E.

    2015-01-01

    There is an increasing number of datasets with many participants, variables, or both, in education and other fields that often deal with large, multilevel data structures. Once initial confirmatory hypotheses are exhausted, it can be difficult to determine how best to explore the dataset to discover hidden relationships that could help to inform…

  10. Recursive Partitioning to Identify Potential Causes of Differential Item Functioning in Cross-National Data

    ERIC Educational Resources Information Center

    Finch, W. Holmes; Hernández Finch, Maria E.; French, Brian F.

    2016-01-01

    Differential item functioning (DIF) assessment is key in score validation. When DIF is present, scores may not accurately reflect the construct of interest for some groups of examinees, leading to incorrect conclusions from the scores. Given rising immigration, and the increased reliance of educational policymakers on cross-national assessments…

  11. Perceived Organizational Support for Enhancing Welfare at Work: A Regression Tree Model

    PubMed Central

    Giorgi, Gabriele; Dubin, David; Perez, Javier Fiz

    2016-01-01

    When trying to examine outcomes such as welfare and well-being, research tends to focus on main effects and take into account limited numbers of variables at a time. There are a number of techniques that may help address this problem. For example, many statistical packages available in R provide easy-to-use methods of modeling complicated analyses such as classification and regression trees (i.e., recursive partitioning). The present research illustrates the value of recursive partitioning in the prediction of perceived organizational support in a sample of more than 6000 Italian bankers. Utilizing the tree functions in the party package in R, we estimated a regression tree model predicting perceived organizational support from a multitude of job characteristics including job demand, lack of job control, lack of supervisor support, training, etc. The resulting model appears particularly helpful in pointing out several interactions in the prediction of perceived organizational support. In particular, training is the dominant factor. Another dimension that seems to influence organizational support is reporting (perceived communication about safety and stress concerns). Results are discussed from a theoretical and methodological point of view. PMID:28082924
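
    A minimal sketch of a conditional-inference regression tree of the kind described, using the ctree function from the party package; the simulated data and variable names below are illustrative and are not the study's data.

        library(party)

        set.seed(1)
        bank <- data.frame(
          training   = rnorm(600),
          job_demand = rnorm(600),
          control    = rnorm(600),
          reporting  = rnorm(600)
        )
        bank$support <- 0.8 * bank$training + 0.4 * bank$reporting -
                        0.3 * bank$job_demand + rnorm(600)

        fit <- ctree(support ~ ., data = bank)   # conditional-inference regression tree
        plot(fit)                                # splits plus node-wise response boxplots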

  12. Fragment-based prediction of skin sensitization using recursive partitioning

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2011-09-01

    Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up an indicator descriptor fragment. The fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct a recursive partitioning tree (RP tree) for classification. The balanced accuracies of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree to identify the most important properties associated with skin sensitization. They can provide some guidance for the design of drugs with a lower sensitization level.

  13. Randomized Phase II Trial of High-Dose Melatonin and Radiation Therapy for RPA Class 2 Patients With Brain Metastases (RTOG 0119)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berk, Lawrence; Berkey, Brian; Rich, Tyvin

    Purpose: To determine if high-dose melatonin for Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA) Class 2 patients with brain metastases improved survival over historical controls, and to determine if the time of day melatonin was given affected its toxicity or efficacy. RTOG 0119 was a phase II randomized trial for this group of patients. Methods and Materials: RTOG RPA Class 2 patients with brain metastases were randomized to 20 mg of melatonin, given either in the morning (8-9 AM) or in the evening (8-9 PM). All patients received radiation therapy (30 Gy in 10 fractions) in the afternoon. Melatonin was continued until neurologic deterioration or death. The primary endpoint was overall survival time. Neurologic deterioration, as reflected by the Mini-Mental Status Examination, was also measured. Results: Neither of the randomized groups had survival distributions that differed significantly from the historic controls of patients treated with whole-brain radiotherapy. The median survivals of the morning and evening melatonin treatments were 3.4 and 2.8 months, while the RTOG historical control survival was 4.1 months. Conclusions: High-dose melatonin did not show any beneficial effect in this group of patients.

  14. A Matched-Pair Analysis Comparing Whole-Brain Radiotherapy Plus Stereotactic Radiosurgery Versus Surgery Plus Whole-Brain Radiotherapy and a Boost to the Metastatic Site for One or Two Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rades, Dirk; Department of Radiation Oncology, University Medical Center, Hamburg; Kueter, Jan-Dirk

    2009-03-15

    Purpose: To compare the results of whole-brain radiotherapy plus stereotactic radiosurgery (WBRT+SRS) with those of surgery plus whole-brain radiotherapy and a boost to the metastatic site (OP+WBRT+boost) for patients with one or two brain metastases. Methods and Materials: Survival, intracerebral control, and local control of the treated metastases were retrospectively evaluated. To reduce the risk of selection bias, a matched-pair analysis was performed. The outcomes of 47 patients who received WBRT+SRS were compared with those of a second cohort of 47 patients who received OP+WBRT+boost. The two treatment groups were matched for the following potential prognostic factors: WBRT schedule, age, gender, performance status, tumor type, number of brain metastases, extracerebral metastases, recursive partitioning analysis class, and interval from tumor diagnosis to WBRT. Results: The 1-year survival rates were 65% after WBRT+SRS and 63% after OP+WBRT+boost (p = 0.19). The 1-year intracerebral control rates were 70% and 78% (p = 0.39), respectively. The 1-year local control rates were 84% and 83% (p = 0.87), respectively. On multivariate analyses, improved survival was significantly associated with better performance status (p = 0.009), no extracerebral metastases (p = 0.004), recursive partitioning analysis Class 1 (p = 0.004), and interval from tumor diagnosis to WBRT (p = 0.001). Intracerebral control was not significantly associated with any of the potential prognostic factors. Improved local control was significantly associated with no extracerebral metastases (p = 0.037). Conclusions: Treatment outcomes were not significantly different after WBRT+SRS compared with OP+WBRT+boost. However, WBRT+SRS is less invasive than OP+WBRT+boost and may be preferable for patients with one or two brain metastases. The results should be confirmed by randomized trials.

  15. Efficiency of International Classification of Diseases, Ninth Revision, Billing Code Searches to Identify Emergency Department Visits for Blood or Body Fluid Exposures through a Statewide Multicenter Database

    PubMed Central

    Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.

    2016-01-01

    BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713
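
    A hedged sketch of how a classification tree can be used to pick a subset of billing codes: each code becomes a 0/1 indicator, the outcome is whether the visit was truly an exposure visit, and the codes used by the fitted tree form the selected subset. The simulated data and code names below are hypothetical, not the study's codes.

        library(rpart)

        set.seed(1)
        n <- 1000
        codes <- matrix(rbinom(n * 10, 1, 0.2), ncol = 10,
                        dimnames = list(NULL, paste0("code_", 1:10)))
        exposure <- rbinom(n, 1, plogis(-2 + 3 * codes[, 1] + 2 * codes[, 2]))
        visits <- data.frame(exposure = factor(exposure), codes)

        fit <- rpart(exposure ~ ., data = visits, method = "class")
        selected <- setdiff(as.character(unique(fit$frame$var)), "<leaf>")
        selected                            # indicator columns the tree actually uses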

  16. Evaluation of funnel traps for estimating tree mortality and associated population phase of spruce beetle in Utah

    Treesearch

    E. Matthew Hansen; Barbara J. Bentz; A. Steven Munson; James C. Vandygriff; David L. Turner

    2006-01-01

    Although funnel traps are routinely used to manage bark beetles, little is known regarding the relationship between trap captures of spruce beetle (Dendroctonus rufipennis Kirby) and mortality of Engelmann spruce (Picea engelmannii Parry ex Engelm.) within a 10 ha block of the trap. Using recursive partitioning tree analyses, rules...

  17. Predicting forest attributes from climate data using a recursive partitioning and regression tree algorithm

    Treesearch

    Greg C. Liknes; Christopher W. Woodall; Charles H. Perry

    2009-01-01

    Climate information frequently is included in geospatial modeling efforts to improve the predictive capability of other data sources. The selection of an appropriate climate data source requires consideration given the number of choices available. With regard to climate data, there are a variety of parameters (e.g., temperature, humidity, precipitation), time intervals...

  18. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
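
    The tie between diva algorithms and associativity can be illustrated with a small sketch: a recursive divide-and-conquer reduction whose division point is chosen arbitrarily (here at random) returns the same value for every choice, provided the combining operation is associative. This is only an illustration of the idea, not the paper's formal notion of a diva call.

        ## Reduce a vector with an associative operation `op`, splitting at a
        ## nondeterministically chosen point before any computation is performed.
        divide_reduce <- function(x, op) {
          n <- length(x)
          if (n == 1) return(x[1])
          split <- sample(1:(n - 1), 1)           # arbitrary division point
          op(divide_reduce(x[1:split], op),
             divide_reduce(x[(split + 1):n], op))
        }

        set.seed(1)
        v <- 1:100
        replicate(5, divide_reduce(v, `+`))       # always 5050, whatever the splits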

  19. Adventures in Topological Field Theory

    NASA Astrophysics Data System (ADS)

    Horne, James H.

    1990-01-01

    This thesis consists of 5 parts. In part I, the topological Yang-Mills theory and the topological sigma model are presented in a superspace formulation. This greatly simplifies the field content of the theories, and makes the Q-invariance more obvious. The Feynman rules for the topological Yang-Mills theory are derived. We calculate the one-loop beta-functions of the topological sigma model in superspace. The lattice version of these theories is presented. The self-duality constraints of both models lead to spectrum doubling. In part II, we show that conformally invariant gravity in three dimensions is equivalent to the Yang-Mills gauge theory of the conformal group in three dimensions, with a Chern-Simons action. This means that conformal gravity is finite and exactly soluble. In part III, we derive the skein relations for the fundamental representations of SO(N), Sp(2n), SU(m|n), and OSp(m|2n). These relations can be used recursively to calculate the expectation values of Wilson lines in three-dimensional Chern-Simons gauge theory with these gauge groups. A combination of braiding and tying of Wilson lines completely describes the skein relations. In part IV, we show that the k = 1 two-dimensional gravity amplitudes at genus 3 agree precisely with the results from intersection theory on moduli space. Predictions for the genus 4 intersection numbers follow from the two-dimensional gravity theory. In part V, we discuss the partition function in two-dimensional gravity. For the one matrix model at genus 2, we use the partition function to derive a recursion relation. We show that the k = 1 amplitudes completely determine the partition function at arbitrary genus. We present a conjecture for the partition function for the arbitrary topological field theory coupled to topological gravity.

  20. Virasoro constraints and polynomial recursion for the linear Hodge integrals

    NASA Astrophysics Data System (ADS)

    Guo, Shuai; Wang, Gehao

    2017-04-01

    The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.

  1. Accounting for Individual Differences in Bradley-Terry Models by Means of Recursive Partitioning

    ERIC Educational Resources Information Center

    Strobl, Carolin; Wickelmaier, Florian; Zeileis, Achim

    2011-01-01

    The preference scaling of a group of subjects may not be homogeneous, but different groups of subjects with certain characteristics may show different preference scalings, each of which can be derived from paired comparisons by means of the Bradley-Terry model. Usually, either different models are fit in predefined subsets of the sample or the…

  2. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800

  3. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions, and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.

  4. S-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Simon, H.

    1998-01-01

    Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the fast feature of inertial partitioning and the quality feature of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP: fine grain loop-level parallelism and coarse grain recursive parallelism. The parallel partitioner has been implemented in Message Passing Interface on the Cray T3E and IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64-processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving over a 15-fold speedup on 64 processors while ParaMeTiS 1.0 gives a few-fold speedup. Experimental results demonstrate that S-HARP is three to 10 times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.

  5. Random forests as cumulative effects models: A case study of lakes and rivers in Muskoka, Canada.

    PubMed

    Jones, F Chris; Plewes, Rachel; Murison, Lorna; MacDougall, Mark J; Sinclair, Sarah; Davies, Christie; Bailey, John L; Richardson, Murray; Gunn, John

    2017-10-01

    Cumulative effects assessment (CEA) - a type of environmental appraisal - lacks effective methods for modeling cumulative effects, evaluating indicators of ecosystem condition, and exploring the likely outcomes of development scenarios. Random forests are an extension of classification and regression trees, which model response variables by recursive partitioning. Random forests were used to model a series of candidate ecological indicators that described lakes and rivers from a case study watershed (The Muskoka River Watershed, Canada). Suitability of the candidate indicators for use in cumulative effects assessment and watershed monitoring was assessed according to how well they could be predicted from natural habitat features and how sensitive they were to human land-use. The best models explained 75% of the variation in a multivariate descriptor of lake benthic-macroinvertebrate community structure, and 76% of the variation in the conductivity of river water. Similar results were obtained by cross-validation. Several candidate indicators detected a simulated doubling of urban land-use in their catchments, and a few were able to detect a simulated doubling of agricultural land-use. The paper demonstrates that random forests can be used to describe the combined and singular effects of multiple stressors and natural environmental factors, and furthermore, that random forests can be used to evaluate the performance of monitoring indicators. The numerical methods presented are applicable to any ecosystem and indicator type, and therefore represent a step forward for CEA. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
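
    A minimal sketch of a random forest used in this spirit: model an indicator from natural habitat features plus land-use variables, then inspect out-of-bag variance explained and variable importance. The simulated data and variable names are hypothetical, not the Muskoka dataset.

        library(randomForest)

        set.seed(1)
        lakes <- data.frame(
          lake_area = runif(200, 1, 50),
          elevation = runif(200, 200, 500),
          pct_urban = runif(200, 0, 30),
          pct_agric = runif(200, 0, 40)
        )
        lakes$conductivity <- 100 + 2 * lakes$pct_urban + 0.5 * lakes$pct_agric +
                              rnorm(200, 0, 10)

        rf <- randomForest(conductivity ~ ., data = lakes, ntree = 500, importance = TRUE)
        rf                   # out-of-bag % variance explained
        importance(rf)       # which predictors drive the indicator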

  6. A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement

    PubMed Central

    Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan

    2012-01-01

    In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
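
    A generic sketch of the recursive space-partitioning idea (not the authors' algorithm): repeatedly split a set of population points at the median of the widest coordinate, so each split produces two sub-regions of nearly equal population, until a target number of regions is reached.

        partition_population <- function(pts, n_regions) {
          if (n_regions <= 1 || nrow(pts) <= 1) return(list(pts))
          widths <- apply(pts, 2, function(v) diff(range(v)))
          d   <- which.max(widths)                # split along the widest dimension
          cut <- median(pts[, d])
          left  <- pts[pts[, d] <= cut, , drop = FALSE]
          right <- pts[pts[, d] >  cut, , drop = FALSE]
          c(partition_population(left,  ceiling(n_regions / 2)),
            partition_population(right, floor(n_regions / 2)))
        }

        set.seed(1)
        pop <- cbind(lon = runif(1000), lat = runif(1000))
        regions <- partition_population(pop, n_regions = 8)
        sapply(regions, nrow)                     # sub-region populations are near-equal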

  7. PROSPECT: Profiling of Resistance Patterns & Oncogenic Signaling Pathways in Evaluation of Cancers of the Thorax and Therapeutic Target Identification

    DTIC Science & Technology

    2012-06-01

    neoadjuvant therapies on disease-free, progression-free, and overall survival will vary across prognostically distinct groups. 3. Specific molecular... prognostically distinct subpopulations of patients with resectable NSCLC, and to assess the extent to which these molecular profiles correlate with tumor...overall survival, and will use Cox proportional hazards models and recursive partitioning methods to identify important biomarkers and prognostically

  8. New Breast Cancer Recursive Partitioning Analysis Prognostic Index in Patients With Newly Diagnosed Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niwinska, Anna, E-mail: alphaonetau@poczta.onet.pl; Murawska, Magdalena

    2012-04-01

    Purpose: The aim of the study was to present a new breast cancer recursive partitioning analysis (RPA) prognostic index for patients with newly diagnosed brain metastases as a guide in clinical decision making. Methods and Materials: A prospectively collected group of 441 consecutive patients with breast cancer and brain metastases treated between the years 2003 and 2009 was assessed. Prognostic factors significant for univariate analysis were included into RPA. Results: Three prognostic classes of a new breast cancer RPA prognostic index were selected. The median survival of patients within prognostic Classes I, II, and III was 29, 9, and 2.4 months, respectively (p < 0.0001). Class I included patients with one or two brain metastases, without extracranial disease or with controlled extracranial disease, and with Karnofsky performance status (KPS) of 100. Class III included patients with multiple brain metastases with KPS of ≤60. Class II included all other cases. Conclusions: The breast cancer RPA prognostic index is an easy and valuable tool for use in clinical practice. It can select patients who require aggressive treatment and those in whom whole-brain radiotherapy or symptomatic therapy is the most reasonable option. An individual approach is required for patients from prognostic Class II.

  9. Pathways to Early Coital Debut for Adolescent Girls: A Recursive Partitioning Analysis

    PubMed Central

    Pearson, Matthew R.; Kholodkov, Tatyana; Henson, James M.; Impett, Emily A.

    2011-01-01

    The current study examined pathways to early coital debut among early to middle adolescent girls in the United States. In a two-year longitudinal study of 104 adolescent girls, we conducted Recursive Partitioning (RP) analyses to examine the specific factors that were related to engaging in first intercourse by the 10th grade among adolescent girls who had not yet engaged in sexual intercourse by the 8th grade. RP analyses identified subsamples of girls who had low, medium, and high likelihoods of engaging in early coital debut based on six variables (i.e., school aspirations, early physical intimacy experiences, depression, body objectification, body image, and relationship inauthenticity). For example, girls in the lowest likelihood group (3% had engaged in sex by the 10th grade) reported no prior experiences with being touched under their clothes, low body objectification, high aspirations to complete graduate education, and low depressive symptoms; girls in the highest likelihood group (75% had engaged in sex by the 10th grade) also reported no prior experiences with being touched under their clothes but had high levels of body objectification. The implications of these analyses for the development of female adolescent sexuality as well as for advances in quantitative methods are discussed. PMID:21512947

  10. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated by their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can be easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors by 45-fold and improved the correlation coefficients with true values by 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study for diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
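
    For reference, the conventional plug-in estimate that the proposed method improves on can be written in a few lines: estimate the joint distribution from the contingency table of the two discrete variables and sum p(x,y) * log(p(x,y) / (p(x) p(y))). The toy data below are illustrative only.

        mutual_information <- function(x, y) {
          p_xy <- table(x, y) / length(x)          # empirical joint probabilities
          p_x  <- rowSums(p_xy)
          p_y  <- colSums(p_xy)
          terms <- p_xy * log(p_xy / outer(p_x, p_y))
          sum(terms[p_xy > 0])                     # treat 0 * log(0) as 0
        }

        set.seed(1)
        x <- sample(letters[1:5], 1000, replace = TRUE)
        y <- ifelse(runif(1000) < 0.3, x, sample(letters[1:5], 1000, replace = TRUE))
        mutual_information(x, y)                   # positive, since y partly copies x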

  11. Predicting adaptive phenotypes from multilocus genotypes in Sitka spruce (Picea sitchensis) using random forest.

    PubMed

    Holliday, Jason A; Wang, Tongli; Aitken, Sally

    2012-09-01

    Climate is the primary driver of the distribution of tree species worldwide, and the potential for adaptive evolution will be an important factor determining the response of forests to anthropogenic climate change. Although association mapping has the potential to improve our understanding of the genomic underpinnings of climatically relevant traits, the utility of adaptive polymorphisms uncovered by such studies would be greatly enhanced by the development of integrated models that account for the phenotypic effects of multiple single-nucleotide polymorphisms (SNPs) and their interactions simultaneously. We previously reported the results of association mapping in the widespread conifer Sitka spruce (Picea sitchensis). In the current study we used the recursive partitioning algorithm 'Random Forest' to identify optimized combinations of SNPs to predict adaptive phenotypes. After adjusting for population structure, we were able to explain 37% and 30% of the phenotypic variation, respectively, in two locally adaptive traits--autumn budset timing and cold hardiness. For each trait, the leading five SNPs captured much of the phenotypic variation. To determine the role of epistasis in shaping these phenotypes, we also used a novel approach to quantify the strength and direction of pairwise interactions between SNPs and found such interactions to be common. Our results demonstrate the power of Random Forest to identify subsets of markers that are most important to climatic adaptation, and suggest that interactions among these loci may be widespread.

  12. Prognostic factors in children and adolescents with acute myeloid leukemia (excluding children with Down syndrome and acute promyelocytic leukemia): univariate and recursive partitioning analysis of patients treated on Pediatric Oncology Group (POG) Study 8821.

    PubMed

    Chang, M; Raimondi, S C; Ravindranath, Y; Carroll, A J; Camitta, B; Gresik, M V; Steuber, C P; Weinstein, H

    2000-07-01

    The purpose of the paper was to define clinical or biological features associated with the risk for treatment failure for children with acute myeloid leukemia. Data from 560 children and adolescents with newly diagnosed acute myeloid leukemia who entered the Pediatric Oncology Group Study 8821 from June 1988 to March 1993 were analyzed by univariate and recursive partitioning methods. Children with Down syndrome or acute promyelocytic leukemia were excluded from the study. Factors examined included age, number of leukocytes, sex, FAB morphologic subtype, cytogenetic findings, and extramedullary disease at the time of diagnosis. The overall event-free survival (EFS) rate at 4 years was 32.7% (s.e. = 2.2%). Age ≥2 years, fewer than 50 x 10(9)/L leukocytes, t(8;21) or inv(16), and normal chromosomes were associated with higher rates of EFS (P value = 0.003, 0.049, 0.0003, 0.031, respectively), whereas the M5 subtype of AML (P value = 0.0003) and chromosome abnormalities other than t(8;21) and inv(16) were associated with lower rates of EFS (P value = 0.0001). Recursive partitioning analysis defined three groups of patients with widely varied prognoses: female patients with t(8;21), inv(16), or a normal karyotype (n = 89) had the best prognosis (4-year EFS = 55.1%, s.e. = 5.7%); male patients with t(8;21), inv(16) or normal chromosomes (n = 106) had an intermediate prognosis (4-year EFS = 38.1%, s.e. = 5.3%); patients with chromosome abnormalities other than t(8;21) and inv(16) (n = 233) had the worst prognosis (4-year EFS = 27.0%, s.e. = 3.2%). One hundred and thirty-two patients (24%) could not be grouped because of missing cytogenetic data, mainly due to inadequate marrow samples. The results suggest that pediatric patients with acute myeloid leukemia can be categorized into three potential risk groups for prognosis and that differences in sex and chromosomal abnormalities are associated with differences in estimates of EFS. These results are tentative and must be confirmed by a large prospective clinical trial.
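
    A hedged sketch of the kind of survival tree used to carve out such risk groups, using rpart's exponential ("exp") splitting method on a simulated dataset; the variables and event times below are invented for illustration and are not the POG 8821 data.

        library(rpart)
        library(survival)

        set.seed(1)
        n <- 560
        cyto <- factor(sample(c("t(8;21)/inv(16)", "normal", "other"), n, replace = TRUE))
        sex  <- factor(sample(c("F", "M"), n, replace = TRUE))
        time <- rexp(n, rate = ifelse(cyto == "other", 0.6, 0.25))   # worse EFS for "other"
        event <- rbinom(n, 1, 0.7)
        aml <- data.frame(sex, cyto, time, event)

        fit <- rpart(Surv(time, event) ~ sex + cyto, data = aml, method = "exp")
        print(fit)            # terminal nodes play the role of candidate risk groups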

  13. Three quantitative approaches to the diagnosis of abdominal pain in children: practical applications of decision theory.

    PubMed

    Klein, M D; Rabbani, A B; Rood, K D; Durham, T; Rosenberg, N M; Bahr, M J; Thomas, R L; Langenburg, S E; Kuhns, L R

    2001-09-01

    The authors compared 3 quantitative methods for assisting clinicians in the differential diagnosis of abdominal pain in children, where the most common important endpoint is whether the patient has appendicitis. Pretest probabilities in different age and sex groups were determined to perform Bayesian analysis, binary logistic regression was used to determine which variables were statistically significantly likely to contribute to a diagnosis, and recursive partitioning was used to build decision trees with quantitative endpoints. The records of all children (1,208) seen at a large urban emergency department (ED) with a chief complaint of abdominal pain were immediately reviewed retrospectively (24 to 72 hours after the encounter). Attempts were made to contact all the patients' families to determine an accurate final diagnosis. A total of 1,008 (83%) families were contacted. Data were analyzed by calculation of the posttest probability, recursive partitioning, and binary logistic regression. In all groups the most common diagnosis was abdominal pain (ICD-9 Code 789). After this, however, the order of the most common final diagnoses for abdominal pain varied significantly. The entire group had a pretest probability of appendicitis of 0.06. This varied with age and sex from 0.02 in boys 2 to 5 years old to 0.16 in boys older than 12 years. In boys age 5 to 12, recursive partitioning and binary logistic regression agreed on guarding and anorexia as important variables. Guarding and tenderness were important in girls age 5 to 12. In boys older than 12, both agreed on guarding and anorexia. Using sensitivities and specificities from the literature, computed tomography improved the posttest probability for the group from .06 to .33; ultrasound improved it from .06 to .48; and barium enema improved it from .06 to .58. Knowing the pretest probabilities in a specific population allows the physician to evaluate the likely diagnoses first. Other quantitative methods can help judge how much importance a certain criterion should have in the decision making and how much a particular test is likely to influence the probability of a correct diagnosis. It now should be possible to make these sophisticated quantitative methods readily available to clinicians via the computer. Copyright 2001 by W.B. Saunders Company.
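
    The Bayesian update the authors describe (pretest probability plus a test's sensitivity and specificity giving a posttest probability) can be written directly via the positive likelihood ratio. The pretest probability 0.06 comes from the abstract; the sensitivity and specificity values below are placeholders, not the figures used in the study.

        posttest_prob <- function(pretest, sensitivity, specificity) {
          lr_pos    <- sensitivity / (1 - specificity)   # positive likelihood ratio
          pre_odds  <- pretest / (1 - pretest)
          post_odds <- pre_odds * lr_pos
          post_odds / (1 + post_odds)
        }

        posttest_prob(pretest = 0.06, sensitivity = 0.94, specificity = 0.89)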

  14. A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations

    NASA Technical Reports Server (NTRS)

    Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos

    2009-01-01

    A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.

  15. Computation of transform domain covariance matrices

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.

  16. Impact of triple-negative phenotype on prognosis of patients with breast cancer brain metastases.

    PubMed

    Xu, Zhiyuan; Schlesinger, David; Toulmin, Sushila; Rich, Tyvin; Sheehan, Jason

    2012-11-01

    To elucidate survival times and identify potential prognostic factors in patients with triple-negative (TN) phenotype who harbored brain metastases arising from breast cancer and who underwent stereotactic radiosurgery (SRS). A total of 103 breast cancer patients with brain metastases were treated with SRS and then studied retrospectively. Twenty-four patients (23.3%) were TN. Survival times were estimated using the Kaplan-Meier method, with a log-rank test computing the survival time difference between groups. Univariate and multivariate analyses to predict potential prognostic factors were performed using a Cox proportional hazard regression model. The presence of TN phenotype was associated with worse survival times, including overall survival after the diagnosis of primary breast cancer (43 months vs. 82 months), neurologic survival after the diagnosis of intracranial metastases, and radiosurgical survival after SRS, with median survival times being 13 months vs. 25 months and 6 months vs. 16 months, respectively (p < 0.002 in all three comparisons). On multivariate analysis, radiosurgical survival benefit was associated with non-TN status and lower recursive partitioning analysis class at the initial SRS. The TN phenotype represents a significant adverse prognostic factor with respect to overall survival, neurologic survival, and radiosurgical survival in breast cancer patients with intracranial metastasis. Recursive partitioning analysis class also served as an important and independent prognostic factor. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. The use of recursive partitioning analysis grouping in patients with brain metastases from non-small-cell lung cancer.

    PubMed

    Gülbaş, Hülya; Erkal, Haldun Sükrü; Serin, Meltem

    2006-04-01

    This study evaluates the use of recursive partitioning analysis (RPA) grouping in an attempt to predict the survival probabilities in patients with brain metastases from non-small-cell lung cancer (NSCLC). Seventy-two patients with brain metastases from NSCLC treated with radiation therapy were included in the study. Sixty-three patients were male and nine patients were female. Their median age was 57 years and their median Karnofsky performance status was 70. At the time of brain metastases, there was no evidence of the intrathoracic disease in 27 patients and the extrathoracic disease was limited to the intracranial disease in 42 patients. In accordance with RPA grouping, 12 patients were in Group 1, 24 patients were in Group 2, and 36 patients were in Group 3. Radiation therapy was delivered to the whole brain at a dose of 30 Gy in 10 fractions in most of the patients. The median survival time was 7 months for Group 1, 5 months for Group 2 and 3 months for Group 3. The survival probability at 1 year was 50% for Group 1, 26% for Group 2 and 14% for Group 3. This study presents evidence supporting the use of RPA grouping in an attempt to predict the survival probabilities in patients with brain metastases from NSCLC.

  18. Recursive Partitioning Analysis for New Classification of Patients With Esophageal Cancer Treated by Chemoradiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Motoo, E-mail: excell@hkg.odn.ne.jp; Department of Clinical Oncology, Aichi Cancer Center Hospital, Nagoya; Department of Radiation Oncology, Aichi Cancer Center Hospital, Nagoya

    2012-11-01

    Background: The 7th edition of the American Joint Committee on Cancer staging system does not include lymph node size in the guidelines for staging patients with esophageal cancer. The objectives of this study were to determine the prognostic impact of the maximum metastatic lymph node diameter (ND) on survival and to develop and validate a new staging system for patients with esophageal squamous cell cancer who were treated with definitive chemoradiotherapy (CRT). Methods: Information on 402 patients with esophageal cancer undergoing CRT at two institutions was reviewed. Univariate and multivariate analyses of data from one institution were used to assess the impact of clinical factors on survival, and recursive partitioning analysis was performed to develop the new staging classification. To assess its clinical utility, the new classification was validated using data from the second institution. Results: By multivariate analysis, gender, T, N, and ND stages were independently and significantly associated with survival (p < 0.05). The resulting new staging classification was based on the T and ND. The four new stages led to good separation of survival curves in both the developmental and validation datasets (p < 0.05). Conclusions: Our results showed that lymph node size is a strong independent prognostic factor and that the new staging system, which incorporated lymph node size, provided good prognostic power, and discriminated effectively for patients with esophageal cancer undergoing CRT.

  19. Predicting human liver microsomal stability with machine learning techniques.

    PubMed

    Sakiyama, Yojiro; Yuki, Hitomi; Moriya, Takashi; Hattori, Kazunari; Suzuki, Misaki; Shimada, Kaoru; Honma, Teruki

    2008-02-01

    To ensure a continuing pipeline in pharmaceutical research, lead candidates must possess appropriate metabolic stability in the drug discovery process. In vitro ADMET (absorption, distribution, metabolism, elimination, and toxicity) screening provides us with useful information regarding the metabolic stability of compounds. However, before the synthesis stage, an efficient process is required in order to deal with the vast quantity of data from large compound libraries and high-throughput screening. Here we have derived a relationship between chemical structure and metabolic stability for a data set of in-house compounds by means of various in silico machine learning methods such as random forest, support vector machine (SVM), logistic regression, and recursive partitioning. For model building, 1952 proprietary compounds comprising two classes (stable/unstable) were used with 193 descriptors calculated by Molecular Operating Environment. The results using test compounds demonstrated that all classifiers yielded satisfactory results (accuracy > 0.8, sensitivity > 0.9, specificity > 0.6, and precision > 0.8). Above all, classification by random forest as well as SVM yielded kappa values of approximately 0.7 in an independent validation set, slightly higher than the other classification tools. These results suggest that nonlinear/ensemble-based classification methods might prove useful in the area of in silico ADME modeling.

  20. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulation random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to .05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  1. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
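
    For orientation, the standard two-level autoregressive model underlying recursive co-kriging (the Kennedy-O'Hagan form, which such multi-fidelity GPR schemes generalize) can be written as the following math sketch; this is the generic building block, not necessarily the exact model used in the paper.

        f_{\mathrm{high}}(x) \;=\; \rho\, f_{\mathrm{low}}(x) \;+\; \delta(x),
        \qquad
        f_{\mathrm{low}} \sim \mathcal{GP}\bigl(0,\, k_{\mathrm{low}}(x,x')\bigr),
        \quad
        \delta \sim \mathcal{GP}\bigl(0,\, k_{\delta}(x,x')\bigr),

    with the discrepancy term delta(x) independent of f_low(x); in the recursive formulation, the high-fidelity posterior is obtained by conditioning level by level on the already-computed lower-fidelity posterior.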

  2. HARP: A Dynamic Inertial Spectral Partitioner

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Sohn, Andrew; Biswas, Rupak

    1997-01-01

    Partitioning unstructured graphs is central to the parallel solution of computational science and engineering problems. Spectral partitioners, such as recursive spectral bisection (RSB), have proven effective in generating high-quality partitions of realistically sized meshes. The major problem which hindered their widespread use was their long execution times. This paper presents a new inertial spectral partitioner, called HARP. The main objective of the proposed approach is to quickly partition the meshes at runtime in a manner that works efficiently for real applications in the context of distributed-memory machines. The underlying principle of HARP is to find the eigenvectors of the unpartitioned vertices and then project them onto the eigenvectors of the original mesh. Results for various meshes ranging in size from 1000 to 100,000 vertices indicate that HARP can indeed partition meshes rapidly at runtime. Experimental results show that our largest mesh can be partitioned sequentially in only a few seconds on an SP2, which is several times faster than other spectral partitioners while maintaining the solution quality of the proven RSB method. A parallel MPI version of HARP has also been implemented on the IBM SP2 and Cray T3E. Parallel HARP, running on 64 processors of the SP2 and T3E, can partition a mesh containing more than 100,000 vertices into 64 subgrids in about half a second. These results indicate that graph partitioning can now be truly embedded in dynamically-changing real-world applications.
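
    A minimal version of the recursive spectral bisection step that HARP is compared against (split a graph by the ordering of its Fiedler vector, then recurse on the halves) is sketched below with a dense eigensolver on a toy grid graph; this is plain RSB for illustration, not HARP's inertial projection scheme.

      # Hedged sketch: recursive spectral bisection of a graph via the Fiedler vector.
      # Dense eigensolver, toy grid graph; plain RSB, not HARP itself.
      import numpy as np

      def grid_graph(nx, ny):
          """Adjacency matrix of an nx-by-ny grid mesh."""
          n = nx * ny
          A = np.zeros((n, n))
          for i in range(nx):
              for j in range(ny):
                  v = i * ny + j
                  if i + 1 < nx: A[v, v + ny] = A[v + ny, v] = 1
                  if j + 1 < ny: A[v, v + 1] = A[v + 1, v] = 1
          return A

      def spectral_bisect(A, vertices):
          """Split the induced subgraph on `vertices` at the median of its Fiedler vector."""
          sub = A[np.ix_(vertices, vertices)]
          L = np.diag(sub.sum(axis=1)) - sub          # graph Laplacian
          _, vecs = np.linalg.eigh(L)
          fiedler = vecs[:, 1]                        # eigenvector of 2nd smallest eigenvalue
          order = np.argsort(fiedler)
          half = len(vertices) // 2
          v = np.asarray(vertices)
          return v[order[:half]], v[order[half:]]

      def rsb(A, vertices, parts):
          if parts == 1:
              return [list(vertices)]
          left, right = spectral_bisect(A, list(vertices))
          return rsb(A, left, parts // 2) + rsb(A, right, parts - parts // 2)

      A = grid_graph(8, 8)
      partition = rsb(A, range(A.shape[0]), 4)
      print([len(p) for p in partition])              # e.g. four blocks of 16 vertices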

  3. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing--recursively interleaving--the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.

  4. Introduction to IND and recursive partitioning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Caruana, Rich

    1991-01-01

    This manual describes the IND package for learning tree classifiers from data. The package is an integrated C and C shell re-implementation of tree learning routines such as CART, C4, and various MDL and Bayesian variations. The package includes routines for experiment control, interactive operation, and analysis of tree building. The manual introduces the system and its many options, gives a basic review of tree learning, contains a guide to the literature and a glossary, and lists the manual pages for the routines and instructions on installation.

  5. A mean field neural network for hierarchical module placement

    NASA Technical Reports Server (NTRS)

    Unaltuna, M. Kemal; Pitchumani, Vijay

    1992-01-01

    This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.

  6. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    PubMed

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
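
    The Landsberg recursion referred to above has a compact numerical form: with z(β) the single-particle partition function, the N-boson canonical partition function satisfies Z_N = (1/N) Σ_{k=1}^{N} z(kβ) Z_{N-k} with Z_0 = 1, and the probability that a given particle belongs to a permutation cycle of length k is π_k = z(kβ) Z_{N-k} / (N Z_N). The sketch below evaluates this for an assumed isotropic 3D harmonic trap with ħω = 1; the trap, temperature, and particle number are illustrative choices, not the paper's setting.

      # Hedged sketch: Landsberg-type recursion for the canonical partition function of
      # N ideal bosons and the resulting cycle-length probabilities pi_k.
      # Single-particle spectrum: isotropic 3D harmonic trap, hbar*omega = 1 (assumed).
      import numpy as np

      def z_single(beta):
          """Single-particle partition function of a 3D harmonic trap (zero-point dropped)."""
          return (1.0 / (1.0 - np.exp(-beta)))**3

      def canonical_Z(N, beta):
          """Z_k for k = 0..N via Z_k = (1/k) * sum_{j=1}^{k} z(j*beta) * Z_{k-j}."""
          Z = np.zeros(N + 1)
          Z[0] = 1.0
          for k in range(1, N + 1):
              Z[k] = sum(z_single(j * beta) * Z[k - j] for j in range(1, k + 1)) / k
          return Z

      N, beta = 40, 0.5
      Z = canonical_Z(N, beta)
      pi = np.array([z_single(k * beta) * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)])
      print("sum of cycle probabilities:", pi.sum())        # equals 1 by construction
      print("probability of long cycles (k > N/2):", pi[N // 2:].sum())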

  7. Random number generators tested on quantum Monte Carlo simulations.

    PubMed

    Hongo, Kenta; Maezono, Ryo; Miura, Kenichi

    2010-08-01

    We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application, ground-state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne Twister generator (MT19937) are tested and compared with the RANLUX generator with five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, at a lower computational cost for generating the sequence. We also tested the notorious linear congruential generator (LCG) implementation RANDU for comparison. (c) 2010 Wiley Periodicals, Inc.
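
    A much smaller-scale version of this kind of generator-sensitivity check (the same estimator run with different random number streams) can be done with NumPy's pluggable bit generators. The toy integrand below stands in for the variational and diffusion Monte Carlo calculations of the paper, and since RANLUX and MRG8 are not shipped with NumPy, PCG64 and Philox serve as the alternative generators alongside MT19937.

      # Hedged sketch: compare RNG streams on the same Monte Carlo estimate.
      # Toy integral of exp(-x^2) over [0, 1]; not the paper's QMC energy calculations.
      import numpy as np
      from numpy.random import Generator, MT19937, PCG64, Philox

      def mc_estimate(bit_generator, n=10**6):
          rng = Generator(bit_generator)
          x = rng.random(n)
          f = np.exp(-x**2)
          return f.mean(), f.std(ddof=1) / np.sqrt(n)   # estimate and statistical error

      for name, bg in [("MT19937", MT19937(12345)),
                       ("PCG64", PCG64(12345)),
                       ("Philox", Philox(12345))]:
          mean, err = mc_estimate(bg)
          print(f"{name:8s}: {mean:.5f} +/- {err:.5f}")
      # Agreement within the error bars is the (weak) sanity check illustrated here.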

  8. Race and acute abdominal pain in a pediatric emergency department.

    PubMed

    Caperell, Kerry; Pitetti, Raymond; Cross, Keith P

    2013-06-01

    To investigate the demographic and clinical factors of children who present to the pediatric emergency department (ED) with abdominal pain and their outcomes. A review of the electronic medical record of patients 1 to 18 years old, who presented to the Children's Hospital of Pittsburgh ED with a complaint of abdominal pain over the course of 2 years, was conducted. Demographic and clinical characteristics, as well as visit outcomes, were reviewed. Subjects were grouped by age, race, and gender. Results of evaluation, treatment, and clinical outcomes were compared between groups by using multivariate analysis and recursive partitioning. There were 9424 patient visits during the study period that met inclusion and exclusion criteria. Female gender comprised 61% of African American children compared with 52% of white children. Insurance was characterized as private for 75% of white and 37% of African American children. A diagnosis of appendicitis was present in 1.9% of African American children and 5.1% of white children. Older children were more likely to be admitted and have an operation associated with their ED visit. Appendicitis was uncommon in younger children. Constipation was commonly diagnosed. Multivariate analysis by diagnosis as well as recursive partitioning analysis did not reflect any racial differences in evaluation, treatment, or outcome. Constipation is the most common diagnosis in children presenting with abdominal pain. Our data demonstrate that no racial differences exist in the evaluation, treatment, and disposition of children with abdominal pain.

  9. Recursive partition analysis of peritoneal and systemic recurrence in patients with gastric cancer who underwent D2 gastrectomy: Implications for neoadjuvant therapy consideration.

    PubMed

    Chang, Jee Suk; Kim, Kyung Hwan; Keum, Ki Chang; Noh, Sung Hoon; Lim, Joon Seok; Kim, Hyo Song; Rha, Sun Young; Lee, Yong Chan; Hyung, Woo Jin; Koom, Woong Sub

    2016-12-01

    To classify patients with nonmetastatic advanced gastric cancer who underwent D2-gastrectomy into prognostic groups based on peritoneal and systemic recurrence risks. Between 2004 and 2007, 1,090 patients with T3-4 or N+ gastric cancer were identified from our registry. Recurrence rates were estimated using a competing-risk analysis. Different prognostic groups were defined using recursive partitioning analysis (RPA). Median follow-up was 7 years. In the RPA model for peritoneal recurrence risk, the initial node was split by T stage, indicating that differences between patients with T1-3 and T4 cancer were the greatest. The 5-year peritoneal recurrence rates for patients with T4 (n = 627) and T1-3 (n = 463) disease were 34.3% and 9.1%, respectively. N stage and neural invasion had an additive impact on high-risk patients. The RPA model for systemic relapse incorporated N stage alone and gave two terminal nodes: N0-2 (n = 721) and N3 (n = 369). The 5-year cumulative incidences were 7.7% and 24.5%, respectively. We proposed risk stratification models of peritoneal and systemic recurrence in patients undergoing D2-gastrectomy. This classification could be used for stratification protocols in future studies evaluating adjuvant therapies such as preoperative chemoradiotherapy. J. Surg. Oncol. 2016;114:859-864. © 2016 Wiley Periodicals, Inc.

  10. Essential energy space random walks to accelerate molecular dynamics simulations: Convergence improvements via an adaptive-length self-healing strategy

    NASA Astrophysics Data System (ADS)

    Zheng, Lianqing; Yang, Wei

    2008-07-01

    Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize essential energy space random walks so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially meaningful when the essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, due to the nonequilibrium nature of the metadynamics recursion, it is challenging to rigorously use the data obtained at the recursion stage for equilibrium analysis, such as free energy surface mapping; a large amount of data would therefore be wasted. To resolve this problem and further improve simulation convergence, as promised in our original paper, we report an alternative approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is progressively increased based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all the following unit simulations naturally run into the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH allows a compromise between fast recursion and a small amount of wasted nonequilibrium data. As a result, by combining all the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method, efficient convergence can be robustly ensured, especially for the purpose of free energy surface mapping.

  11. Time-based partitioning model for predicting neurologically favorable outcome among adults with witnessed bystander out-of-hospital CPA.

    PubMed

    Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis

    2011-01-01

    Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the acceptable thresholds of the CPR time intervals for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of the 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. We assessed time factors associated with better outcomes; the better outcomes were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model built from the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, the acceptable time interval from collapse to CPR initiation was 5 min; from collapse to ambulance arrival, 11 min; from collapse to return of spontaneous circulation (ROSC), 18 min; and from collapse to hospital arrival, 19 min. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) in the subgroup with the acceptable time intervals and pre-hospital ROSC showed neurologically favorable outcome. Initiation of CPR should occur within 5 min to obtain a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with acceptable bystander CPR time intervals and pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.

  12. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
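
    The flavor of the recursive approach (assume a parametric form for the correlation function and update its parameters record by record with a decreasing gain) can be sketched as follows. The exponential model, the per-record point estimates, and the 1/k Robbins-Monro-style averaging are generic illustrative choices, not the two estimators analyzed in the paper.

      # Hedged sketch: recursive (stochastic-approximation style) estimation of the
      # parameters of an assumed exponential correlation model r(tau) = s2 * exp(-a*tau),
      # updated record by record with a decreasing 1/k gain. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      phi, n_rec, rec_len = 0.8, 500, 512
      a_true, s2_true = -np.log(phi), 1.0 / (1.0 - phi**2)   # AR(1) ground truth

      theta = np.array([1.0, 1.0])            # running estimates of (a, s2)
      for k in range(1, n_rec + 1):
          x = np.empty(rec_len)               # one new record of the zero-mean process
          x[0] = rng.normal(scale=np.sqrt(s2_true))
          for t in range(1, rec_len):
              x[t] = phi * x[t-1] + rng.normal()
          r0 = np.mean(x * x)                 # point estimates of r(0) and r(1)
          r1 = np.mean(x[:-1] * x[1:])        # assumed positive for this well-correlated process
          obs = np.array([np.log(r0 / r1), r0])   # per-record parameter estimates
          theta += (obs - theta) / k          # Robbins-Monro update with gain 1/k

      print(f"estimated a = {theta[0]:.3f} (true {a_true:.3f}), "
            f"s2 = {theta[1]:.3f} (true {s2_true:.3f})")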

  13. Strong monogamy of bipartite and genuine multipartite entanglement: the Gaussian case.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2007-10-12

    We demonstrate the existence of general constraints on distributed quantum correlations, which impose a trade-off on bipartite and multipartite entanglement at once. For all N-mode Gaussian states under permutation invariance, we establish exactly a monogamy inequality, stronger than the traditional one, that by recursion defines a proper measure of genuine N-partite entanglement. Strong monogamy holds as well for subsystems of arbitrary size, and the emerging multipartite entanglement measure is found to be scale invariant. We unveil its operational connection with the optimal fidelity of continuous variable teleportation networks.

  14. Introduction to IND and recursive partitioning, version 1.0

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Caruana, Rich

    1991-01-01

    This manual describes the IND package for learning tree classifiers from data. The package is an integrated C and C shell re-implementation of tree learning routines such as CART, C4, and various MDL and Bayesian variations. The package includes routines for experiment control, interactive operation, and analysis of tree building. The manual introduces the system and its many options, gives a basic review of tree learning, contains a guide to the literature and a glossary, lists the manual pages for the routines, and instructions on installation.

  15. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
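
    One standard identity closely related to the recursive approach described here expresses the raw moments through Stirling numbers of the second kind (the article itself works with Stirling numbers of the first kind): E[X^m] = Σ_k S(m,k) · n(n-1)...(n-k+1) · p^k. The sketch below checks this numerically for a small binomial distribution.

      # Hedged sketch: raw moments of X ~ Binomial(n, p) via Stirling numbers of the
      # second kind and falling factorials, checked against a direct sum over the pmf.
      from math import comb
      from functools import lru_cache

      @lru_cache(maxsize=None)
      def stirling2(m, k):
          """Stirling number of the second kind S(m, k) via the standard recursion."""
          if m == k:
              return 1
          if k == 0 or k > m:
              return 0
          return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

      def falling(n, k):
          out = 1
          for i in range(k):
              out *= n - i
          return out

      def raw_moment_stirling(m, n, p):
          return sum(stirling2(m, k) * falling(n, k) * p**k for k in range(m + 1))

      def raw_moment_direct(m, n, p):
          return sum(x**m * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

      n, p = 10, 0.3
      for m in range(1, 6):
          print(m, raw_moment_stirling(m, n, p), raw_moment_direct(m, n, p))
      # Central moments follow by expanding E[(X - n*p)^m] with the binomial theorem.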

  16. Early symptom burden predicts recovery after sport-related concussion

    PubMed Central

    Mannix, Rebekah; Monuteaux, Michael C.; Stein, Cynthia J.; Bachur, Richard G.

    2014-01-01

    Objective: To identify independent predictors of and use recursive partitioning to develop a multivariate regression tree predicting symptom duration greater than 28 days after a sport-related concussion. Methods: We conducted a prospective cohort study of patients in a sports concussion clinic. Participants completed questionnaires that included the Post-Concussion Symptom Scale (PCSS). Participants were asked to record the date on which they last experienced symptoms. Potential predictor variables included age, sex, score on symptom inventories, history of prior concussions, performance on computerized neurocognitive assessments, loss of consciousness and amnesia at the time of injury, history of prior medical treatment for headaches, history of migraines, and family history of concussion. We used recursive partitioning analysis to develop a multivariate prediction model for identifying athletes at risk for a prolonged recovery from concussion. Results: A total of 531 patients ranged in age from 7 to 26 years (mean 14.6 ± 2.9 years). The mean PCSS score at the initial visit was 26 ± 26; mean time to presentation was 12 ± 5 days. Only total score on symptom inventory was independently associated with symptoms lasting longer than 28 days (adjusted odds ratio 1.044; 95% confidence interval [CI] 1.034, 1.054 for PCSS). No other potential predictor variables were independently associated with symptom duration or useful in developing the optimal regression decision tree. Most participants (86%; 95% CI 80%, 90%) with an initial PCSS score of <13 had resolution of their symptoms within 28 days of injury. Conclusions: The only independent predictor of prolonged symptoms after sport-related concussion is overall symptom burden. PMID:25381296

  17. Major depressive disorder subtypes to predict long-term course

    PubMed Central

    van Loo, Hanna M.; Cai, Tianxi; Gruber, Michael J.; Li, Junlong; de Jonge, Peter; Petukhova, Maria; Rose, Sherri; Sampson, Nancy A.; Schoevers, Robert A.; Wardenaar, Klaas J.; Wilcox, Marsha A.; Al-Hamzawi, Ali Obaid; Andrade, Laura Helena; Bromet, Evelyn J.; Bunting, Brendan; Fayyad, John; Florescu, Silvia E.; Gureje, Oye; Hu, Chiyi; Huang, Yueqin; Levinson, Daphna; Medina-Mora, Maria Elena; Nakane, Yoshibumi; Posada-Villa, Jose; Scott, Kate M.; Xavier, Miguel; Zarkov, Zahari; Kessler, Ronald C.

    2016-01-01

    Background Variation in the course of major depressive disorder (MDD) is not strongly predicted by existing subtype distinctions. A new subtyping approach is considered here. Methods Two data mining techniques, ensemble recursive partitioning and Lasso generalized linear models (GLMs) followed by k-means cluster analysis, are used to search for subtypes based on index episode symptoms predicting subsequent MDD course in the World Mental Health (WMH) Surveys. The WMH surveys are community surveys in 16 countries. Lifetime DSM-IV MDD was reported by 8,261 respondents. Retrospectively reported outcomes included measures of persistence (number of years with an episode; number of years with an episode lasting most of the year) and severity (hospitalization for MDD; disability due to MDD). Results Recursive partitioning found significant clusters defined by the conjunctions of early onset, suicidality, and anxiety (irritability, panic, nervousness-worry-anxiety) during the index episode. GLMs found additional associations involving a number of individual symptoms. Predicted values of the four outcomes were strongly correlated. Cluster analysis of these predicted values found three clusters having consistently high, intermediate, or low predicted scores across all outcomes. The high-risk cluster (30.0% of respondents) accounted for 52.9-69.7% of high persistence and severity and was most strongly predicted by index episode severe dysphoria, suicidality, anxiety, and early onset. A total symptom count, in comparison, was not a significant predictor. Conclusions Despite being based on retrospective reports, results suggest that useful MDD subtyping distinctions can be made using data mining methods. Further studies are needed to test and expand these results with prospective data. PMID:24425049

  18. Whole brain radiotherapy and stereotactic radiosurgery for patients with recursive partitioning analysis I and lesions <5 cm(3): A matched pair analysis.

    PubMed

    Viani, Gustavo Arruda; Godoi da Silva, Lucas Bernardes; Viana, Bruno Silveira; Rossi, Bruno Tiago; Suguikawa, Elton; Zuliani, Gisele

    2016-01-01

    The intention of this study is to compare whole brain radiotherapy plus stereotactic radiosurgery (WBRT + SRS) with WBRT alone in patients with 1-4 brain metastases, in order to identify a subgroup of patients who gain a substantial benefit from the aggressive treatment. Between December 2002 and December 2013, 60 patients with 1-4 brain metastases were treated by WBRT + SRS. In this period, 60 patients treated with WBRT were matched with the patients treated with WBRT + SRS. The median survival for the entire cohort was 8.3 months. In the univariate analysis, WBRT + SRS (P = 0.031), the presence of extracranial disease (P = 0.02), Karnofsky performance score <70 (P = 0.0001), and age >65 years (P = 0.001) were significant factors for survival. In the entire cohort, the median survival for recursive partitioning analysis (RPA) classes I, II, and III was 11, 7, and 3 months, respectively (P = 0.0001). In a stratified analysis, only RPA class I achieved statistical significance for 1-year survival between the groups (WBRT + SRS = 51% and WBRT = 23%, P = 0.03). Cox regression analysis revealed WBRT + SRS, age >65 years, and extracranial disease as independent prognostic factors. In the univariate analysis, lesion volume ≤5 cm³ (P = 0.002) and WBRT + SRS (P = 0.003) were the significant factors associated with better brain control. WBRT plus SRS was an independent prognostic factor for survival. However, the combined treatment appears to be justified only in patients with RPA class I and lesion volume ≤5 cm³, independently of the number of lesions.

  19. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis.

    PubMed

    Harrison, Andrew M; Thongprayoon, Charat; Kashyap, Rahul; Chute, Christopher G; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly

    2015-02-01

    To develop and test an automated surveillance algorithm (sepsis "sniffer") for the detection of severe sepsis and monitoring failure to recognize and treat severe sepsis in a timely manner. We conducted an observational diagnostic performance study using independent derivation and validation cohorts from an electronic medical record database of the medical intensive care unit (ICU) of a tertiary referral center. All patients aged 18 years and older who were admitted to the medical ICU from January 1 through March 31, 2013 (N=587), were included. The criterion standard for severe sepsis/septic shock was manual review by 2 trained reviewers with a third superreviewer for cases of interobserver disagreement. Critical appraisal of false-positive and false-negative alerts, along with recursive data partitioning, was performed for algorithm optimization. An algorithm based on criteria for suspicion of infection, systemic inflammatory response syndrome, organ hypoperfusion and dysfunction, and shock had a sensitivity of 80% and a specificity of 96% when applied to the validation cohort. In order, low systolic blood pressure, systemic inflammatory response syndrome positivity, and suspicion of infection were determined through recursive data partitioning to be of greatest predictive value. Lastly, 117 alert-positive patients (68% of the 171 patients with severe sepsis) had a delay in recognition and treatment, defined as no lactate and central venous pressure measurement within 2 hours of the alert. The optimized sniffer accurately identified patients with severe sepsis that bedside clinicians failed to recognize and treat in a timely manner. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  20. Relationship between financial impact and coverage of drugs in Australia.

    PubMed

    Mauskopf, Josephine; Chirila, Costel; Masaquel, Catherine; Boye, Kristina S; Bowman, Lee; Birt, Julie; Grainger, David

    2013-01-01

    The aim of this study was to estimate the relationship between the financial impact of a new drug and the recommendation for reimbursement by the Australian Pharmaceutical Benefits Advisory Committee (PBAC). Data in the PBAC summary database were abstracted for decisions made between July 2005 and November 2009. Financial impact-the upper bound of the values presented in the PBAC summary database-was categorized as ≤A$0, >A$0 up to A$10 million, A$10 million up to A$30 million, and >A$30 million per year. Descriptive, logistic, survival, and recursive partitioning decision analyses were used to estimate the relationship between the financial impact of a new drug indication and the recommendation for reimbursement. Multivariable analyses controlled for other clinical and economic variables, including cost per quality-adjusted life-year gained. Financial impact was a significant predictor of the recommendation for reimbursement. In the logistic analysis, the odds ratios of reimbursement for drug submissions with financial impacts ≥A$10 million to ≥A$30 million or >A$0 to

  1. Genetic Variation in the Raptor Gene Is Associated With Overweight But Not Hypertension in American Men of Japanese Ancestry

    PubMed Central

    Carnes, Bruce A.; Chen, Randi; Donlon, Timothy A.; He, Qimei; Grove, John S.; Masaki, Kamal H.; Elliott, Ayako; Willcox, Donald C.; Allsopp, Richard; Willcox, Bradley J.

    2015-01-01

    BACKGROUND The mechanistic target of rapamycin (mTOR) pathway is pivotal for cell growth. Regulatory associated protein of mTOR complex I (Raptor) is a unique component of this pro-growth complex. The present study tested whether variation across the raptor gene (RPTOR) is associated with overweight and hypertension. METHODS We tested 61 common (allele frequency ≥ 0.1) tagging single nucleotide polymorphisms (SNPs) that captured most of the genetic variation across RPTOR in 374 subjects of normal lifespan and 439 subjects with a lifespan exceeding 95 years for association with overweight/obesity, essential hypertension, and isolated systolic hypertension. Subjects were drawn from the Honolulu Heart Program, a homogeneous population of American men of Japanese ancestry, well characterized for phenotypes relevant to conditions of aging. Hypertension status was ascertained when subjects were 45–68 years old. Statistical evaluation involved contingency table analysis, logistic regression, and the powerful method of recursive partitioning. RESULTS After analysis of RPTOR genotypes by each statistical approach, we found no significant association between genetic variation in RPTOR and either essential hypertension or isolated systolic hypertension. Models generated by recursive partitioning analysis showed that RPTOR SNPs significantly enhanced the ability of the model to accurately assign individuals to either the overweight/obese or the non-overweight/obese groups (P = 0.008 by 1-tailed Z test). CONCLUSION Common genetic variation in RPTOR is associated with overweight/obesity but does not discernibly contribute to either essential hypertension or isolated systolic hypertension in the population studied. PMID:25249372

  2. Early symptom burden predicts recovery after sport-related concussion.

    PubMed

    Meehan, William P; Mannix, Rebekah; Monuteaux, Michael C; Stein, Cynthia J; Bachur, Richard G

    2014-12-09

    To identify independent predictors of and use recursive partitioning to develop a multivariate regression tree predicting symptom duration greater than 28 days after a sport-related concussion. We conducted a prospective cohort study of patients in a sports concussion clinic. Participants completed questionnaires that included the Post-Concussion Symptom Scale (PCSS). Participants were asked to record the date on which they last experienced symptoms. Potential predictor variables included age, sex, score on symptom inventories, history of prior concussions, performance on computerized neurocognitive assessments, loss of consciousness and amnesia at the time of injury, history of prior medical treatment for headaches, history of migraines, and family history of concussion. We used recursive partitioning analysis to develop a multivariate prediction model for identifying athletes at risk for a prolonged recovery from concussion. A total of 531 patients ranged in age from 7 to 26 years (mean 14.6 ± 2.9 years). The mean PCSS score at the initial visit was 26 ± 26; mean time to presentation was 12 ± 5 days. Only total score on symptom inventory was independently associated with symptoms lasting longer than 28 days (adjusted odds ratio 1.044; 95% confidence interval [CI] 1.034, 1.054 for PCSS). No other potential predictor variables were independently associated with symptom duration or useful in developing the optimal regression decision tree. Most participants (86%; 95% CI 80%, 90%) with an initial PCSS score of <13 had resolution of their symptoms within 28 days of injury. The only independent predictor of prolonged symptoms after sport-related concussion is overall symptom burden. © 2014 American Academy of Neurology.

  3. FRPA: A Framework for Recursive Parallel Algorithms

    DTIC Science & Technology

    2015-05-01

    atoi(argv[1]); std::string interleaving = (argc > 2) ? argv[2] : ""; double *A = randomArray(length ... actually determines how deep the recursion is. For example, a configuration with schedule 'BBDB' and depth 3 represents the interleaving 'BBD'. This means ... depth 3 represents the same interleaving as the configuration with schedule 'BBDD' and depth 3, namely 'BBD'. In our experiments, this redundancy did

  4. Propensity score method: a non-parametric technique to reduce model dependence

    PubMed Central

    2017-01-01

    Propensity score analysis (PSA) is a powerful technique in that it balances pretreatment covariates, making the causal effect inference from observational data as reliable as possible. The use of PSA in the medical literature has increased exponentially in recent years, and the trend continues to rise. The article introduces the rationale behind PSA, followed by an illustration of how to perform PSA in R with the MatchIt package. There are a variety of methods available for PS matching, such as nearest neighbors, full matching, exact matching, and genetic matching. The task can be easily done by simply assigning a string value to the method argument in the matchit() function. The generic summary() and plot() functions can be applied to an object of class matchit to check covariate balance after matching. Furthermore, there is a useful package, PSAgraphics, that contains several graphical functions to check covariate balance between treatment groups across strata. If covariate balance is not achieved, one can modify model specifications or use other techniques such as random forest and recursive partitioning to better represent the underlying structure between pretreatment covariates and treatment assignment. The process can be repeated until the desirable covariate balance is achieved. PMID:28164092
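
    The core of the workflow described here (estimate propensity scores, match on them, then check covariate balance) can also be sketched from scratch outside R. The logistic-regression score, greedy one-nearest-neighbor matching without replacement, and standardized-mean-difference balance check below are generic choices on simulated data, not a re-implementation of matchit() or PSAgraphics.

      # Hedged sketch: propensity-score estimation, greedy 1-NN matching on the score,
      # and a standardized-mean-difference balance check. Generic; not MatchIt.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n, k = 600, 4
      X = rng.normal(size=(n, k))                            # pretreatment covariates
      treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

      ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]   # propensity scores

      def smd(x_t, x_c):
          """Standardized mean difference for one covariate."""
          pooled = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
          return (x_t.mean() - x_c.mean()) / pooled

      treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]
      available = set(controls)
      pairs = []
      for i in treated:                                      # greedy 1-NN match without replacement
          if not available:
              break
          j = min(available, key=lambda c: abs(ps[c] - ps[i]))
          pairs.append((i, j))
          available.remove(j)

      t_idx = np.array([i for i, _ in pairs])
      c_idx = np.array([j for _, j in pairs])
      for col in range(k):
          print(f"covariate {col}: SMD before = {smd(X[treat == 1, col], X[treat == 0, col]):+.2f}, "
                f"after = {smd(X[t_idx, col], X[c_idx, col]):+.2f}")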

  5. Faces of matrix models

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2012-08-01

    Partition functions of eigenvalue matrix models possess a number of very different descriptions: as matrix integrals, as solutions to linear and nonlinear equations, as τ-functions of integrable hierarchies and as special-geometry prepotentials, as the result of the action of W-operators and of various recursions on elementary input data, and as the gluing of certain elementary building blocks. All this explains the central role of such matrix models in modern mathematical physics: they provide the basic "special functions" to express the answers and the relations between them, and they serve as a dream model of what one should try to achieve in any other field.

  6. Accelerating calculations of RNA secondary structure partition functions using GPUs

    PubMed Central

    2013-01-01

    Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu. PMID:24180434
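
    The kind of O(n³) dynamic-programming recursion that the GPU implementation parallelizes can be illustrated with a deliberately crude partition function: a single constant energy per allowed pair, a fixed minimum hairpin gap, and no loop-type distinctions. The toy recursion below is for illustration only and is far simpler than the nearest-neighbor model and modified recursions used in RNAstructure.

      # Hedged sketch: a toy Boltzmann-weighted partition function over non-pseudoknotted
      # secondary structures, with one constant energy per Watson-Crick/GU pair and a
      # minimum hairpin gap of 3. Much cruder than the nearest-neighbor model.
      import math
      from functools import lru_cache

      RT = 0.616                                   # kcal/mol near 37 C (approx.)
      E_PAIR = -2.0                                # assumed constant energy per pair
      PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
      MIN_GAP = 3

      def partition_function(seq):
          w = math.exp(-E_PAIR / RT)

          @lru_cache(maxsize=None)
          def Q(i, j):
              if j - i < 0:
                  return 1.0                       # empty subsequence contributes 1
              total = Q(i, j - 1)                  # case 1: base j unpaired
              for k in range(i, j - MIN_GAP):      # case 2: base j paired with base k
                  if (seq[k], seq[j]) in PAIRS:
                      total += Q(i, k - 1) * w * Q(k + 1, j - 1)
              return total

          return Q(0, len(seq) - 1)

      seq = "GGGAAAUCCC"
      print(f"Q = {partition_function(seq):.3f}")   # > 1 whenever pairing is possible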

  7. Demographic predictors of active tuberculosis in people migrating to British Columbia, Canada: a retrospective cohort study.

    PubMed

    Ronald, Lisa A; Campbell, Jonathon R; Balshaw, Robert F; Romanowski, Kamila; Roth, David Z; Marra, Fawziah; Cook, Victoria J; Johnston, James C

    2018-02-26

    Canadian tuberculosis (TB) guidelines recommend targeting postlanding screening for and treatment of latent tuberculosis infection (LTBI) in people migrating to Canada who are at increased risk for TB reactivation. Our objectives were to calculate robust longitudinal estimates of TB incidence in a cohort of people migrating to British Columbia, Canada, over a 29-year period, and to identify groups at highest risk of developing TB based on demographic characteristics at time of landing. We included all individuals ( n = 1 080 908) who became permanent residents of Canada between Jan. 1, 1985, and Dec. 31, 2012, and were resident in BC at any time between 1985 and 2013. Multiple administrative databases were linked to the provincial TB registry. We used recursive partitioning models to identify populations with high TB yield. Active TB was diagnosed in 2814 individuals (incidence rate 24.2/100 000 person-years). Demographic factors (live-in caregiver, family, refugee immigration classes; higher TB incidence in country of birth; and older age) were strong predictors of TB incidence in BC, with elevated rates continuing many years after entry into the cohort. Recursive partitioning identified refugees 18-64 years of age from countries with a TB incidence greater than 224/100 000 population as a high-yield group, with 1% developing TB within the first 10 years. These findings support recommendations in Canadian guidelines to target postlanding screening for and treatment of LTBI in adult refugees from high-incidence countries. Because high-yield populations can be identified at entry via demographic data, screening at this point may be practical and high-impact, particularly if the LTBI care cascade can be optimized. © 2018 Joule Inc. or its licensors.

  8. Demographic predictors of active tuberculosis in people migrating to British Columbia, Canada: a retrospective cohort study

    PubMed Central

    Ronald, Lisa A.; Campbell, Jonathon R.; Balshaw, Robert F.; Romanowski, Kamila; Roth, David Z.; Marra, Fawziah; Cook, Victoria J.; Johnston, James C.

    2018-01-01

    BACKGROUND: Canadian tuberculosis (TB) guidelines recommend targeting postlanding screening for and treatment of latent tuberculosis infection (LTBI) in people migrating to Canada who are at increased risk for TB reactivation. Our objectives were to calculate robust longitudinal estimates of TB incidence in a cohort of people migrating to British Columbia, Canada, over a 29-year period, and to identify groups at highest risk of developing TB based on demographic characteristics at time of landing. METHODS: We included all individuals (n = 1 080 908) who became permanent residents of Canada between Jan. 1, 1985, and Dec. 31, 2012, and were resident in BC at any time between 1985 and 2013. Multiple administrative databases were linked to the provincial TB registry. We used recursive partitioning models to identify populations with high TB yield. RESULTS: Active TB was diagnosed in 2814 individuals (incidence rate 24.2/100 000 person-years). Demographic factors (live-in caregiver, family, refugee immigration classes; higher TB incidence in country of birth; and older age) were strong predictors of TB incidence in BC, with elevated rates continuing many years after entry into the cohort. Recursive partitioning identified refugees 18–64 years of age from countries with a TB incidence greater than 224/100 000 population as a high-yield group, with 1% developing TB within the first 10 years. INTERPRETATION: These findings support recommendations in Canadian guidelines to target postlanding screening for and treatment of LTBI in adult refugees from high-incidence countries. Because high-yield populations can be identified at entry via demographic data, screening at this point may be practical and high-impact, particularly if the LTBI care cascade can be optimized. PMID:29483329

  9. Exact partition functions for deformed N=2 theories with N_f=4 flavours

    NASA Astrophysics Data System (ADS)

    Beccaria, Matteo; Fachechi, Alberto; Macorini, Guido; Martina, Luigi

    2016-12-01

    We consider the Ω-deformed N = 2 SU(2) gauge theory in four dimensions with N_f = 4 massive fundamental hypermultiplets. The low energy effective action depends on the deformation parameters ε_1, ε_2, the scalar field expectation value a, and the hypermultiplet masses m = (m_1, m_2, m_3, m_4). Motivated by recent findings in the N = 2* theory, we explore the theories that are characterized by special fixed ratios ε_2/ε_1 and m/ε_1 and propose a simple condition on the structure of the multi-instanton contributions to the prepotential determining the effective action. This condition determines a finite set Π_N of special points such that the prepotential has N poles at fixed positions independent of the instanton number. In analogy with what happens in the N = 2* gauge theory, the full prepotential of the Π_N theories may be given in closed form as an explicit function of a and the modular parameter q, appearing in special combinations of Eisenstein series and Jacobi theta functions with well-defined modular properties. The resulting finite-pole partition functions are related by the AGT correspondence to special 4-point spherical conformal blocks of the Virasoro algebra. We examine in full detail special cases where the closed expression of the block is known and confirms our Ansatz. We systematically study the special features of Zamolodchikov's recursion for the Π_N conformal blocks. As a result, we provide a novel effective recursion relation that can be exactly solved and allows us to prove the conjectured closed expressions analytically in the case of the Π_1 and Π_2 conformal blocks.

  10. Validation and Development of a Modified Breast Graded Prognostic Assessment As a Tool for Survival in Patients With Breast Cancer and Brain Metastases.

    PubMed

    Subbiah, Ishwaria M; Lei, Xiudong; Weinberg, Jeffrey S; Sulman, Erik P; Chavez-MacGregor, Mariana; Tripathy, Debu; Gupta, Rohan; Varma, Ankur; Chouhan, Jay; Guevarra, Richard P; Valero, Vicente; Gilbert, Mark R; Gonzalez-Angulo, Ana M

    2015-07-10

    Several indices have been developed to predict overall survival (OS) in patients with breast cancer with brain metastases, including the breast graded prognostic assessment (breast-GPA), comprising age, tumor subtype, and Karnofsky performance score. However, number of brain metastases-a highly relevant clinical variable-is less often incorporated into the final model. We sought to validate the existing breast-GPA in an independent larger cohort and refine it integrating number of brain metastases. Data were retrospectively gathered from a prospectively maintained institutional database. Patients with newly diagnosed brain metastases from 1996 to 2013 were identified. After validating the breast-GPA, multivariable Cox regression and recursive partitioning analysis led to the development of the modified breast-GPA. The performances of the breast-GPA and modified breast-GPA were compared using the concordance index. In our cohort of 1,552 patients, the breast-GPA was validated as a prognostic tool for OS (P < .001). In multivariable analysis of the breast-GPA and number of brain metastases (> three v ≤ three), both were independent predictors of OS. We therefore developed the modified breast-GPA integrating a fourth clinical parameter. Recursive partitioning analysis reinforced the prognostic significance of these four factors. Concordance indices were 0.78 (95% CI, 0.77 to 0.80) and 0.84 (95% CI, 0.83 to 0.85) for the breast-GPA and modified breast-GPA, respectively (P < .001). The modified breast-GPA incorporates four simple clinical parameters of high prognostic significance. This index has an immediate role in the clinic as a formative part of the clinician's discussion of prognosis and direction of care and as a potential patient selection tool for clinical trials. © 2015 by American Society of Clinical Oncology.

  11. Adaptive semi-supervised recursive tree partitioning: The ART towards large scale patient indexing in personalized healthcare.

    PubMed

    Wang, Fei

    2015-06-01

    With the rapid development of information technologies, a tremendous amount of data has become readily available in various application domains. This big data era presents challenges to many conventional data analytics research directions including data capture, storage, search, sharing, analysis, and visualization. It is no surprise to see that the success of next-generation healthcare systems heavily relies on the effective utilization of gigantic amounts of medical data. The ability to analyze big data in modern healthcare systems plays a vital role in the improvement of the quality of care delivery. Specifically, patient similarity evaluation aims at estimating the clinical affinity and diagnostic proximity of patients. As one of the successful data-driven techniques adopted in healthcare systems, patient similarity evaluation plays a fundamental role in many healthcare research areas such as prognosis, risk assessment, and comparative effectiveness analysis. However, existing algorithms for patient similarity evaluation are inefficient in handling massive patient data. In this paper, we propose an Adaptive Semi-Supervised Recursive Tree Partitioning (ART) framework for large scale patient indexing such that patients with similar clinical or diagnostic patterns can be correctly and efficiently retrieved. The framework is designed for semi-supervised settings, since it is crucial to leverage experts' supervision knowledge in the medical scenario, where such knowledge is fairly limited compared to the available data. Starting from the proposed ART framework, we discuss several specific instantiations and validate them on both benchmark and real-world healthcare data. Our results show that with the ART framework, patients can be efficiently and effectively indexed, in the sense that (1) similar patients can be retrieved in a very short time, and (2) the retrieval performance can beat state-of-the-art indexing methods. Copyright © 2015. Published by Elsevier Inc.

  12. Tumor Volume and Patient Weight as Predictors of Outcome in Children with Intermediate Risk Rhabdomyosarcoma (RMS): A Report from the Children’s Oncology Group

    PubMed Central

    Rodeberg, David A.; Stoner, Julie A.; Garcia-Henriquez, Norbert; Randall, R. Lor; Spunt, Sheri L.; Arndt, Carola A.; Kao, Simon; Paidas, Charles N.; Million, Lynn; Hawkins, Douglas S.

    2010-01-01

    Background To compare tumor volume and patient weight vs. the traditional factors of tumor diameter and patient age, to determine which parameters best discriminate outcome among intermediate risk RMS patients. Methods Complete patient information for non-metastatic RMS patients enrolled in the Children's Oncology Group (COG) intermediate risk study D9803 (1999–2005) was available for 370 patients. The Kaplan-Meier method was used to estimate survival distributions. A recursive partitioning model was used to identify prognostic factors associated with event-free survival (EFS). Cox proportional hazards regression models were used to estimate the association between patient characteristics and the risk of failure or death. Results For all intermediate risk patients with RMS, a recursive partitioning algorithm for EFS suggests that prognostic groups should optimally be defined by tumor volume (transition point 20 cm³), weight (transition point 50 kg), and embryonal histology. Tumor volume and patient weight added significant outcome information to the standard prognostic factors including tumor diameter and age (p=0.02). The ability to resect the tumor completely was not significantly associated with the size of the patient, and patient weight did not significantly modify the association between tumor volume and EFS after adjustment for standard risk factors (p=0.2). Conclusion The factors most strongly associated with EFS were tumor volume, patient weight, and histology. Based on regression modeling, volume and weight are superior predictors of outcome compared to tumor diameter and patient age in children with intermediate risk RMS. The prognostic performance of tumor volume and patient weight should be assessed in an independent prospective study. PMID:24048802

  13. Experiments to Determine Whether Recursive Partitioning (CART) or an Artificial Neural Network Overcomes Theoretical Limitations of Cox Proportional Hazards Regression

    NASA Technical Reports Server (NTRS)

    Kattan, Michael W.; Hess, Kenneth R.; Kattan, Michael W.

    1998-01-01

    New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
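
    The concordance index C used here as the accuracy measure can be computed directly from a common simplified form of its definition: among pairs in which the subject with the shorter observed time had an event, count the fraction whose predicted risks are ordered consistently with their survival times, with risk ties scored as half. A small from-scratch sketch on toy data follows.

      # Hedged sketch: Harrell-style concordance index (C) for censored survival data,
      # computed from a common simplified definition. Higher predicted risk should go
      # with shorter survival.
      import numpy as np

      def concordance_index(time, event, risk):
          """Fraction of usable pairs ordered consistently; risk ties count as 0.5."""
          time, event, risk = map(np.asarray, (time, event, risk))
          concordant, usable = 0.0, 0
          n = len(time)
          for i in range(n):
              if not event[i]:
                  continue                   # a pair is usable only if the earlier time is an event
              for j in range(n):
                  if time[i] < time[j]:      # subject i failed before subject j was observed
                      usable += 1
                      if risk[i] > risk[j]:
                          concordant += 1.0
                      elif risk[i] == risk[j]:
                          concordant += 0.5
          return concordant / usable

      # Toy data: times, event indicators (1 = event, 0 = censored), and model risk scores
      time = [5, 8, 12, 20, 25, 30]
      event = [1, 1, 0, 1, 0, 1]
      risk = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]  # perfectly anti-ordered with time -> C = 1
      print(concordance_index(time, event, risk))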

  14. Impact of Age and Antibody Type on Progression From Single to Multiple Autoantibodies in Type 1 Diabetes Relatives.

    PubMed

    Bosi, Emanuele; Boulware, David C; Becker, Dorothy J; Buckner, Jane H; Geyer, Susan; Gottlieb, Peter A; Henderson, Courtney; Kinderman, Amanda; Sosenko, Jay M; Steck, Andrea K; Bingley, Polly J

    2017-08-01

    Islet autoantibodies are markers of type 1 diabetes, and an increase in number of autoantibodies detected during the preclinical phase predicts progression to overt disease. To refine the effect of age in relation to islet antibody type on progression from single to multiple autoantibodies in relatives of people with type 1 diabetes. We examined 994 relatives with normal glucose tolerance who were positive for a single autoantibody, followed prospectively in the TrialNet Pathway to Prevention. Antibodies to glutamic acid decarboxylase (GADA), insulin (IAA), insulinoma-associated antigen 2, and zinc transporter 8 and islet cell antibodies were tested every 6 to 12 months. The primary outcome was confirmed development of multiple autoantibodies. Age was categorized as <8 years, 8 to 11 years, 12 to 17 years, and ≥18 years, and optimal age breakpoints were identified by recursive partitioning analysis. After a median follow-up of 2 years, 141 relatives had developed at least one additional autoantibody. Five-year risk was inversely related to age, but the pattern differed by antibody type: Relatives with GADA showed a gradual decrease in risk over the four age groups, whereas relatives with IAA showed a sharp decrease above age 8 years. Recursive partitioning analysis identified age breakpoints at 14 years in relatives with GADA and at 4 years in relatives with IAA. In relatives with IAA, spread of islet autoimmunity is largely limited to early childhood, whereas immune responses initially directed at glutamic acid decarboxylase can mature over a longer period. These differences have important implications for monitoring these patients and for designing prevention trials. Copyright © 2017 Endocrine Society

  15. Prognostic Classification Factors Associated With Development of Multiple Autoantibodies, Dysglycemia, and Type 1 Diabetes—A Recursive Partitioning Analysis

    PubMed Central

    Krischer, Jeffrey P.

    2016-01-01

    OBJECTIVE To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. RESEARCH DESIGN AND METHODS Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. RESULTS Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. CONCLUSIONS The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. PMID:27208341

  16. Identifying Emergency Department Patients at Low Risk for a Variceal Source of Upper Gastrointestinal Hemorrhage.

    PubMed

    Klein, Lauren R; Money, Joel; Maharaj, Kaveesh; Robinson, Aaron; Lai, Tarissa; Driver, Brian E

    2017-11-01

    Assessing the likelihood of a variceal versus nonvariceal source of upper gastrointestinal bleeding (UGIB) guides therapy, but can be difficult to determine on clinical grounds. The objective of this study was to determine if there are easily ascertainable clinical and laboratory findings that can identify a patient as low risk for a variceal source of hemorrhage. This was a retrospective cohort study of adult ED patients with UGIB between January 2008 and December 2014 who had upper endoscopy performed during hospitalization. Clinical and laboratory data were abstracted from the medical record. The source of the UGIB was defined as variceal or nonvariceal based on endoscopic reports. Binary recursive partitioning was utilized to create a clinical decision rule. The rule was internally validated and test characteristics were calculated with 1,000 bootstrap replications. A total of 719 patients were identified; mean age was 55 years and 61% were male. There were 71 (10%) patients with a variceal UGIB identified on endoscopy. Binary recursive partitioning yielded a two-step decision rule (platelet count > 200 × 10⁹/L and an international normalized ratio [INR] < 1.3), which identified patients who were low risk for a variceal source of hemorrhage. For the bootstrapped samples, the rule performed with 97% sensitivity (95% confidence interval [CI] = 91%-100%) and 49% specificity (95% CI = 44%-53%). Although this derivation study must be externally validated before widespread use, patients presenting to the ED with an acute UGIB with platelet count of >200 × 10⁹/L and an INR of <1.3 may be at very low risk for a variceal source of their upper gastrointestinal hemorrhage. © 2017 by the Society for Academic Emergency Medicine.
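
    A two-step rule of this shape (sequential thresholds on two laboratory values) is exactly the kind of output a depth-limited binary recursive partitioning fit produces. The sketch below shows the general pattern with a depth-2 scikit-learn tree and a simple bootstrap of sensitivity, on synthetic stand-in data rather than the study cohort, so the thresholds it learns are not the published ones.

      # Hedged sketch: deriving a two-step clinical decision rule with a depth-2 tree
      # and bootstrapping its sensitivity. Synthetic stand-in data, not the study cohort.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(7)
      n = 719
      variceal = rng.random(n) < 0.10                          # ~10% variceal bleeds
      platelets = np.where(variceal, rng.normal(120, 40, n), rng.normal(250, 70, n))
      inr = np.where(variceal, rng.normal(1.6, 0.3, n), rng.normal(1.1, 0.15, n))
      X, y = np.column_stack([platelets, inr]), variceal.astype(int)

      tree = DecisionTreeClassifier(max_depth=2, class_weight="balanced", random_state=0)
      tree.fit(X, y)
      print(export_text(tree, feature_names=["platelets", "INR"]))

      def sensitivity(model, X, y):
          pred = model.predict(X)
          return (pred[y == 1] == 1).mean()

      boot = [sensitivity(tree, X[idx], y[idx])
              for idx in (rng.integers(0, n, n) for _ in range(1000))]
      print("bootstrap sensitivity interval: "
            f"{np.percentile(boot, 2.5):.2f}-{np.percentile(boot, 97.5):.2f}")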

  17. Adjuvant treatment may benefit patients with high-risk upper rectal cancer: A nomogram and recursive partitioning analysis of 547 patients.

    PubMed

    Wang, Xin; Jin, Jing; Yang, Yong; Liu, Wen-Yang; Ren, Hua; Feng, Yan-Ru; Xiao, Qin; Li, Ning; Deng, Lei; Fang, Hui; Jing, Hao; Lu, Ning-Ning; Tang, Yu; Wang, Jian-Yang; Wang, Shu-Lian; Wang, Wei-Hu; Song, Yong-Wen; Liu, Yue-Ping; Li, Ye-Xiong

    2016-10-04

    The role of adjuvant chemoradiotherapy (ACRT) or adjuvant chemotherapy (ACT) in treating patients with locally advanced upper rectal cancer (URC) after total mesorectal excision (TME) surgery remains unclear. We developed a clinical nomogram and a recursive partitioning analysis (RPA)-based risk stratification system for predicting 5-year cancer-specific survival (CSS) to determine whether these individuals require ACRT or ACT. This retrospective analysis included 547 patients with primary URC. A nomogram was developed based on the Cox regression model. The performance of the model was assessed by concordance index (C-index) and calibration curve in internal validation with bootstrapping. RPA stratified patients into risk groups based on their tumor characteristics. Five independent prognostic factors (age, preoperative increased carcinoembryonic antigen and carcinoma antigen 19-9, positive lymph node [PLN] number, tumor deposit [TD], pathological T classification) were identified and entered into the predictive nomogram. The bootstrap-corrected C-index was 0.757. RPA stratification of the three prognostic groups showed clearly different prognoses. Only the high-risk group (patients with PLN ≤ 6 and TD, or PLN > 6) benefited from ACRT plus ACT when compared with surgery followed by ACRT or ACT, and surgery alone (5-year CSS: 70.8% vs. 57.8% vs. 15.6%, P < 0.001). Our nomogram predicts 5-year CSS after TME surgery for locally advanced rectal cancer, and RPA-based stratification indicates that ACRT plus ACT post-surgery may be an important treatment plan with potentially significant survival advantages in high-risk URC. This may help to select candidates for adjuvant treatment in prospective studies.
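
    The abstract reports a bootstrap-corrected concordance index. As a rough, self-contained illustration of what such a statistic measures for any fitted risk score, here is a naive O(n^2) Harrell's C-index with a simple bootstrap; the variable names and the pairwise loop are assumptions for clarity, not the authors' code, and ties in time are ignored.

      import numpy as np

      def harrell_c(time, event, risk):
          # Harrell's concordance index for right-censored data (ties in time ignored for brevity).
          conc = comp = 0.0
          n = len(time)
          for i in range(n):
              if not event[i]:
                  continue                      # only subjects with an observed event anchor a pair
              for j in range(n):
                  if time[j] > time[i]:         # j outlived i, so the pair is comparable
                      comp += 1
                      if risk[i] > risk[j]:
                          conc += 1
                      elif risk[i] == risk[j]:
                          conc += 0.5
          return conc / comp

      def bootstrap_c(time, event, risk, n_boot=200, seed=0):
          rng = np.random.default_rng(seed)
          n = len(time)
          out = []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)       # resample patients with replacement
              out.append(harrell_c(time[idx], event[idx], risk[idx]))
          return np.array(out)                  # summarize, e.g., with the mean and percentiles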

  18. Systemic treatment after whole-brain radiotherapy may improve survival in RPA class II/III breast cancer patients with brain metastasis.

    PubMed

    Zhang, Qian; Chen, Jian; Yu, Xiaoli; Ma, Jinli; Cai, Gang; Yang, Zhaozhi; Cao, Lu; Chen, Xingxing; Guo, Xiaomao; Chen, Jiayi

    2013-09-01

    Whole brain radiotherapy (WBRT) is the most widely used treatment for brain metastasis (BM), especially for patients with multiple intracranial lesions. The purpose of this study was to examine the efficacy of systemic treatments following WBRT in breast cancer patients with BM who had different clinical characteristics, based on the classification of the Radiation Therapy Oncology Group recursive partitioning analysis (RPA) and the breast cancer-specific Graded Prognostic Assessment (Breast-GPA). One hundred and one breast cancer patients with BM treated between 2006 and 2010 were analyzed. The median interval between breast cancer diagnosis and identification of BM in the triple-negative patients was shorter than in the luminal A subtype (26 vs. 36 months, respectively; P = 0.021). Univariate analysis indicated that age at BM diagnosis, Karnofsky performance status/recursive partitioning analysis (KPS/RPA) classes, number of BMs, primary tumor control, extracranial metastases and systemic treatment following WBRT were significant prognostic factors for overall survival (OS) (P < 0.05). Multivariate analysis revealed that KPS/RPA classes and systemic treatments following WBRT remained the significant prognostic factors for OS. For RPA class I, the median survival with and without systemic treatments following WBRT was 25 and 22 months, respectively (P = 0.819), while for RPA class II/III systemic treatments significantly improved OS from 7 and 2 months to 11 and 5 months, respectively (P < 0.05). Our results suggested that triple-negative patients had a shorter interval between initial diagnosis and the development of BM than luminal A patients. Systemic treatments following WBRT improved the survival of RPA class II/III patients.

  19. Genetic variation in the raptor gene is associated with overweight but not hypertension in American men of Japanese ancestry.

    PubMed

    Morris, Brian J; Carnes, Bruce A; Chen, Randi; Donlon, Timothy A; He, Qimei; Grove, John S; Masaki, Kamal H; Elliott, Ayako; Willcox, Donald C; Allsopp, Richard; Willcox, Bradley J

    2015-04-01

    The mechanistic target of rapamycin (mTOR) pathway is pivotal for cell growth. Regulatory associated protein of mTOR complex I (Raptor) is a unique component of this pro-growth complex. The present study tested whether variation across the raptor gene (RPTOR) is associated with overweight and hypertension. We tested 61 common (allele frequency ≥ 0.1) tagging single nucleotide polymorphisms (SNPs) that captured most of the genetic variation across RPTOR in 374 subjects of normal lifespan and 439 subjects with a lifespan exceeding 95 years for association with overweight/obesity, essential hypertension, and isolated systolic hypertension. Subjects were drawn from the Honolulu Heart Program, a homogeneous population of American men of Japanese ancestry, well characterized for phenotypes relevant to conditions of aging. Hypertension status was ascertained when subjects were 45-68 years old. Statistical evaluation involved contingency table analysis, logistic regression, and the powerful method of recursive partitioning. After analysis of RPTOR genotypes by each statistical approach, we found no significant association between genetic variation in RPTOR and either essential hypertension or isolated systolic hypertension. Models generated by recursive partitioning analysis showed that RPTOR SNPs significantly enhanced the ability of the model to accurately assign individuals to either the overweight/obese or the non-overweight/obese groups (P = 0.008 by 1-tailed Z test). Common genetic variation in RPTOR is associated with overweight/obesity but does not discernibly contribute to either essential hypertension or isolated systolic hypertension in the population studied. © American Journal of Hypertension, Ltd 2014. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. A predictive model of inflammatory markers and patient-reported symptoms for cachexia in newly diagnosed pancreatic cancer patients.

    PubMed

    Fogelman, David R; Morris, J; Xiao, L; Hassan, M; Vadhan, S; Overman, M; Javle, S; Shroff, R; Varadhachary, G; Wolff, R; Vence, L; Maitra, A; Cleeland, C; Wang, X S

    2017-06-01

    Cachexia is a frequent manifestation of pancreatic cancer, can limit a patient's ability to take chemotherapy, and is associated with shortened survival. We developed a model to predict the early onset of cachexia in advanced pancreatic cancer patients. Patients with newly diagnosed, untreated metastatic or locally advanced pancreatic cancer were included. Serum cytokines were drawn prior to therapy. Patient symptoms were recorded using the M.D. Anderson Symptom Inventory (MDASI). Our primary endpoint was either 10% weight loss or death within 60 days of the start of therapy. Twenty-seven of 89 patients met the primary endpoint (either having lost 10% of body weight or having died within 60 days of the start of treatment). In a univariate analysis, smoking history, symptoms of pain and difficulty swallowing, high levels of MK, CXCL-16, IL-6, and TNF-a, and low IL-1b all correlated with this endpoint. We used recursive partitioning to fit a regression tree model, selecting four of 26 variables (CXCL-16, IL-1b, pain, swallowing difficulty) as important in predicting cachexia. From these, a model of two cytokines (CXCL-16 > 5.135 ng/ml and IL-1b < 0.08 ng/ml) demonstrated better sensitivity and specificity for this outcome (0.70 and 0.86, respectively) than any individual cytokine or tumor marker. Cachexia is frequent in pancreatic cancer; one in three patients met our endpoint of 10% weight loss or death within 60 days. Inflammatory cytokines are better than conventional tumor markers at predicting this outcome. Recursive partitioning analysis suggests that a model of CXCL-16 and IL-1b may offer a better ability than individual cytokines to predict this outcome.
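
    As a toy evaluation of the two-cytokine rule quoted above (the thresholds are taken from the abstract; the function and variable names are hypothetical), sensitivity and specificity against the 60-day endpoint could be computed as:

      import numpy as np

      def cachexia_rule(cxcl16, il1b):
          # Rule positive if CXCL-16 > 5.135 ng/ml AND IL-1b < 0.08 ng/ml (thresholds from the abstract).
          return (np.asarray(cxcl16) > 5.135) & (np.asarray(il1b) < 0.08)

      def evaluate(cxcl16, il1b, met_endpoint):
          pred = cachexia_rule(cxcl16, il1b)
          truth = np.asarray(met_endpoint, dtype=bool)
          sensitivity = pred[truth].mean()      # rule-positive among patients meeting the endpoint
          specificity = (~pred[~truth]).mean()  # rule-negative among the remaining patients
          return sensitivity, specificity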

  1. Recursive partitioning analysis (RPA) classification predicts survival in patients with brain metastases from sarcoma.

    PubMed

    Grossman, Rachel; Ram, Zvi

    2014-12-01

    Sarcoma rarely metastasizes to the brain, and there are no specific treatment guidelines for these tumors. The recursive partitioning analysis (RPA) classification is a well-established prognostic scale used in many malignancies. In this study we assessed the clinical characteristics of metastatic sarcoma to the brain and the validity of the RPA classification system in a subset of 21 patients who underwent surgical resection of metastatic sarcoma to the brain. We retrospectively analyzed the medical, radiological, surgical, pathological, and follow-up clinical records of 21 patients who were operated on for metastatic sarcoma to the brain between 1996 and 2012. Gliosarcomas, sarcomas of the head and neck with local extension into the brain, and metastatic sarcomas to the spine were excluded from this reported series. The patients' mean age was 49.6 ± 14.2 years (range, 25-75 years) at the time of diagnosis. Sixteen patients had a known history of systemic sarcoma, mostly in the extremities, and had previously received systemic chemotherapy and radiation therapy for their primary tumor. The mean maximal tumor diameter in the brain was 4.9 ± 1.7 cm (range 1.7-7.2 cm). The group's median preoperative Karnofsky Performance Scale was 80, with 14 patients presenting with Karnofsky Performance Scale of 70 or greater. The median overall survival was 7 months (range 0.2-204 months). The median survival times stratified by the Radiation Therapy Oncology Group RPA classes were 31, 7, and 2 months for RPA classes I, II, and III, respectively (P = 0.0001). This analysis is the first to support the prognostic utility of the Radiation Therapy Oncology Group RPA classification for sarcoma brain metastases and may be used as a treatment guideline tool in this rare disease. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. A matched-pair analysis comparing whole-brain radiotherapy with and without a stereotactic boost for intracerebral control and overall survival in patients with one to three cerebral metastases.

    PubMed

    Rades, Dirk; Janssen, Stefan; Bajrovic, Amira; Khoa, Mai Trong; Veninga, Theo; Schild, Steven E

    2017-04-24

    Twelve years ago, a randomized trial demonstrated that a radiosurgery boost added to whole-brain radiotherapy (WBRT) improved intracerebral control (IC) in patients with one to three cerebral metastases. Overall survival (OS) was improved only in the subgroup of patients with a single metastasis but not in the entire cohort. The present study compared both regimens in a different scenario outside a randomized trial. A total of 252 patients with one to three cerebral metastases were included. Eighty-four patients receiving WBRT plus a planned stereotactic boost and 168 patients receiving WBRT alone were individually matched 1:2 for nine factors including fractionation of WBRT, age, gender, performance score, primary tumor, number of cerebral metastases, extracerebral metastases, recursive partitioning analysis class, and time between cancer diagnosis and WBRT. Each group of three patients was required to match for all nine factors. Both groups were compared for IC and OS. IC rates at 6, 12, 18 and 24 months were 88, 71, 45 and 22% after WBRT plus stereotactic boost vs. 75, 48, 38 and 22% after WBRT alone (p = 0.005). OS rates at 6, 12, 18 and 24 months were 76, 53, 32 and 25% after WBRT plus stereotactic boost and 67, 45, 29 and 20% after WBRT alone (p = 0.10). In patients with a single lesion, OS rates were also not significantly different (p = 0.12). Similar to the previous randomized trial from 2004, this matched-pair study showed that a stereotactic boost in addition to WBRT significantly improved IC but not OS.

  3. Topological string, supersymmetric gauge theory and BPS counting

    NASA Astrophysics Data System (ADS)

    Pan, Guang

    In this thesis we study the Donaldson-Thomas theory on the local curve geometry, which arises in the context of geometric engineering of supersymmetric gauge theory from type IIA string compactification. The topological A-model amplitude gives the F-term interaction of the compactified theory. In particular, it is related to the instanton partition function via Nekrasov conjecture. We will introduce ADHM sheaves on curve, as an alternative description of local Donaldson-Thomas theory. We derive the wallcrossing of ADHM invariants and their refinements. We show that it is equivalent to the semi-primitive wallcrossing from supergravity, and the Kontsevich-Soibelman wallcrossing formula. As an application, we discuss the connection between ADHM moduli space with Hitchin system. In particular we give a recursive formula for the Poincare polynomial of Hitchin system in terms of instanton partition function, from refined wallcrossing. We also introduce higher rank generalization of Donaldson-Thomas invariant in the context of ADHM sheaves. We study their wallcrossing and discuss their physical interpretation via string duality.

  4. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on R+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
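
    As a concrete, deliberately simplified illustration (assuming the diamond, or "brick", hierarchical graph; the paper treats a more general class, so this is only a schematic), the stationarity property amounts to a (min,+)-type recursive distributional equation of the form

      X \;\overset{d}{=}\; \frac{1}{\lambda}\, \min\bigl( W_1 X_1 + W_2 X_2,\; W_3 X_3 + W_4 X_4 \bigr),

    where X_1, ..., X_4 are independent copies of X, the W_i are i.i.d. edge weights with law m, and \lambda > 0 is the renormalizing constant: series edges add lengths (the "+"), while parallel branches take the shorter route (the "min").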

  5. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.

  6. Recursive grid partitioning on a cortical surface model: an optimized technique for the localization of implanted subdural electrodes.

    PubMed

    Pieters, Thomas A; Conner, Christopher R; Tandon, Nitin

    2013-05-01

    Precise localization of subdural electrodes (SDEs) is essential for the interpretation of data from intracranial electrocorticography recordings. Blood and fluid accumulation underneath the craniotomy flap leads to a nonlinear deformation of the brain surface and of the SDE array on postoperative CT scans and adversely impacts the accurate localization of electrodes located underneath the craniotomy. Older methods that localize electrodes based on their identification on a postimplantation CT scan with coregistration to a preimplantation MR image can result in significant problems with accuracy of the electrode localization. The authors report 3 novel methods that rely on the creation of a set of 3D mesh models to depict the pial surface and a smoothed pial envelope. Two of these new methods are designed to localize electrodes, and they are compared with 6 methods currently in use to determine their relative accuracy and reliability. The first method involves manually localizing each electrode using digital photographs obtained at surgery. This is highly accurate, but requires time intensive, operator-dependent input. The second uses 4 electrodes localized manually in conjunction with an automated, recursive partitioning technique to localize the entire electrode array. The authors evaluated the accuracy of previously published methods by applying the methods to their data and comparing them against the photograph-based localization. Finally, the authors further enhanced the usability of these methods by using automatic parcellation techniques to assign anatomical labels to individual electrodes as well as by generating an inflated cortical surface model while still preserving electrode locations relative to the cortical anatomy. The recursive grid partitioning had the least error compared with older methods (672 electrodes, 6.4-mm maximum electrode error, 2.0-mm mean error, p < 10(-18)). The maximum errors derived using prior methods of localization ranged from 8.2 to 11.7 mm for an individual electrode, with mean errors ranging between 2.9 and 4.1 mm depending on the method used. The authors also noted a larger error in all methods that used CT scans alone to localize electrodes compared with those that used both postoperative CT and postoperative MRI. The large mean errors reported with these methods are liable to affect intermodal data comparisons (for example, with functional mapping techniques) and may impact surgical decision making. The authors have presented several aspects of using new techniques to visualize electrodes implanted for localizing epilepsy. The ability to use automated labeling schemas to denote which gyrus a particular electrode overlies is potentially of great utility in planning resections and in corroborating the results of extraoperative stimulation mapping. Dilation of the pial mesh model provides, for the first time, a sense of the cortical surface not sampled by the electrode, and the potential roles this "electrophysiologically hidden" cortex may play in both eloquent function and seizure onset.

  7. A probabilistic, distributed, recursive mechanism for decision-making in the brain

    PubMed Central

    Gurney, Kevin N.

    2018-01-01

    Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077
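
    The published algorithm is tied to MT spike statistics and to the cortico-basal-ganglia-thalamo-cortical loops; the sketch below is only a generic recursive Bayesian accumulator on Gaussian evidence, with every parameter and the Gaussian likelihood being assumptions, meant to illustrate the recursive posterior update and thresholded commitment that such models share.

      import numpy as np

      def simulate_decision(drift=0.1, sigma=1.0, threshold=0.99, max_t=1000, seed=0):
          """Recursive Bayesian choice between H+ (positive drift) and H- (negative drift)."""
          rng = np.random.default_rng(seed)
          log_post = np.log([0.5, 0.5])                   # prior over the two hypotheses
          means = np.array([+drift, -drift])
          for t in range(1, max_t + 1):
              x = rng.normal(drift, sigma)                # evidence generated under H+
              loglik = -0.5 * ((x - means) / sigma) ** 2  # Gaussian log-likelihood (constants cancel)
              log_post = log_post + loglik                # recursive Bayes update
              log_post -= np.logaddexp.reduce(log_post)   # renormalize in log space
              post = np.exp(log_post)
              if post.max() >= threshold:                 # commit once one hypothesis is credible enough
                  return ("H+", "H-")[int(post.argmax())], t
          return "undecided", max_t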

  8. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  9. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model to a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set in a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or by simply taking the magnitude of the resulting negative coordinates we were able to create 14 data sets of the same anatomy with different orientation and position in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratio 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
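
    A minimal sketch of the ORB idea described here, recursively cutting along the longest coordinate axis at the weighted median so that computational load rather than element count is balanced, is given below; the function and its arguments are illustrative assumptions, not the simulation code used in this study.

      import numpy as np

      def orb(points, weights, n_parts):
          """Orthogonal recursive bisection of weighted points into n_parts load-balanced subsets."""
          if n_parts == 1:
              return [np.arange(len(points))]
          extent = points.max(axis=0) - points.min(axis=0)
          axis = int(np.argmax(extent))                 # cut perpendicular to the longest extent
          order = np.argsort(points[:, axis])
          cum = np.cumsum(weights[order])
          left_parts = n_parts // 2
          target = cum[-1] * left_parts / n_parts       # share of the total load for the left half
          cut = int(np.searchsorted(cum, target))
          cut = min(max(cut, 0), len(order) - 2)        # keep both halves non-empty (sketch-level guard)
          left, right = order[:cut + 1], order[cut + 1:]
          sub_l = orb(points[left], weights[left], left_parts)
          sub_r = orb(points[right], weights[right], n_parts - left_parts)
          return [left[s] for s in sub_l] + [right[s] for s in sub_r]

    With a 1:10 non-tissue to tissue load ratio one would pass, for example, weights = np.where(is_tissue, 10.0, 1.0); rotating or translating the point cloud before the call changes where the cuts fall, which is the sensitivity the study quantifies.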

  10. Effect of sanhuangwuji powder, anti-rheumatic drugs, and ginger-partitioned acupoint stimulation on the treatment of rheumatoid arthritis with peptic ulcer: a randomized controlled study.

    PubMed

    Liu, Defang; Guo, Mingyang; Hu, Yonghe; Liu, Taihua; Yan, Jiao; Luo, Yong; Yun, Mingdong; Yang, Min; Zhang, Jun; Guo, Linglin

    2015-06-01

    To observe the efficacy and safety of oral sanhuangwuji powder, anti-rheumatic drugs (ARDs), and ginger-partitioned acupoint stimulation at zusanli (ST 36) on the treatment of rheumatoid arthritis (RA) complicated by peptic ulcer. This prospective randomized controlled study included 180 eligible inpatients and outpatients randomly assigned to an ARD treatment (n = 60), ginger-partitioned stimulation (n = 60), or combination treatment (n = 60). Patients assigned to the ARD group were given oral celecoxib, methotrexate, and esomeprazole. Patients assigned to the ginger-partitioned stimulation group were given ginger-partitioned acupoint stimulation at zusanli (ST 36) in addition to the ARDs. Patients in the combination treatment group were given oral sanhuangwuji powder, ginger-partitioned acupoint stimulation at zusanli (ST 36), and ARDs. All patients were followed up for 2 months to evaluate clinical effects and safety. The study was registered in the World Health Organization database at the General Hospital of Chengdu Military Area Command Chinese People's Liberation Army (ChiCTR-TCC12002824). The combination treatment group had significantly greater improvements in RA symptoms, laboratory outcomes, and gastrointestinal symptom scores, compared with the other groups (P < 0.05). The peptic ulcer healing rate in the combination treatment group was significantly greater than that in the ARD treatment group (χ2 = 16.875, P < 0.05) and the ginger-partitioned stimulation group (χ2 = 6.171, P < 0.05). Combination treatment with ginger-partitioned acupoint stimulation at zusanli (ST 36), oral sanhuangwuji powder, and ARDs had a better clinical effect for RA complicated by peptic ulcer, compared with ARD treatment alone or in combination with ginger-partitioned acupoint stimulation.

  11. Gender, Race, and Survival: A Study in Non-Small-Cell Lung Cancer Brain Metastases Patients Utilizing the Radiation Therapy Oncology Group Recursive Partitioning Analysis Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Videtic, Gregory M.M., E-mail: videtig@ccf.or; Reddy, Chandana A.; Chao, Samuel T.

    Purpose: To explore whether gender and race influence survival in non-small-cell lung cancer (NSCLC) in patients with brain metastases, using our large single-institution brain tumor database and the Radiation Therapy Oncology Group recursive partitioning analysis (RPA) brain metastases classification. Methods and materials: A retrospective review of a single-institution brain metastasis database for the interval January 1982 to September 2004 yielded 835 NSCLC patients with brain metastases for analysis. Patient subsets based on combinations of gender, race, and RPA class were then analyzed for survival differences. Results: Median follow-up was 5.4 months (range, 0-122.9 months). There were 485 male patients (M) (58.4%) and 346 female patients (F) (41.6%). Of the 828 evaluable patients (99%), 143 (17%) were black/African American (B) and 685 (83%) were white/Caucasian (W). Median survival time (MST) from time of brain metastasis diagnosis for all patients was 5.8 months. Median survival time by gender (F vs. M) and race (W vs. B) was 6.3 months vs. 5.5 months (p = 0.013) and 6.0 months vs. 5.2 months (p = 0.08), respectively. For patients stratified by RPA class, gender, and race, MST significantly favored BFs over BMs in Class II: 11.2 months vs. 4.6 months (p = 0.021). On multivariable analysis, significant variables were gender (p = 0.041, relative risk [RR] 0.83) and RPA class (p < 0.0001, RR 0.28 for I vs. III; p < 0.0001, RR 0.51 for II vs. III) but not race. Conclusions: Gender significantly influences NSCLC brain metastasis survival. Race trended to significance in overall survival but was not significant on multivariable analysis. Multivariable analysis identified gender and RPA classification as significant variables with respect to survival.

  12. Beam Path Toxicities to Non-Target Structures During Intensity-Modulated Radiation Therapy for Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, David I.; Chambers, Mark S.; Fuller, Clifton D.

    2008-11-01

    Background: Intensity-modulated radiation therapy (IMRT) beams traverse nontarget normal structures not irradiated during three-dimensional conformal RT (3D-CRT) for head and neck cancer (HNC). This study estimates the doses and toxicities to nontarget structures during IMRT. Materials and Methods: Oropharyngeal cancer IMRT and 3D-CRT cases were reviewed. Dose-volume histograms (DVH) were used to evaluate radiation dose to the lip, cochlea, brainstem, occipital scalp, and segments of the mandible. Toxicity rates were compared for 3D-CRT, IMRT alone, or IMRT with concurrent cisplatin. Descriptive statistics and exploratory recursive partitioning analysis were used to estimate dose 'breakpoints' associated with observed toxicities. Results: A total of 160 patients were evaluated for toxicity; 60 had detailed DVH evaluation and 15 had 3D-CRT plan comparison. Comparing IMRT with 3D-CRT, there was significant (p ≤ 0.002) nonparametric differential dose to all clinically significant structures of interest. Thirty percent of IMRT patients had headaches and 40% had occipital scalp alopecia. A total of 76% and 38% of patients treated with IMRT alone had nausea and vomiting, compared with 99% and 68%, respectively, of those with concurrent cisplatin. IMRT had a markedly distinct toxicity profile from that of 3D-CRT. In recursive partitioning analysis, National Cancer Institute's Common Toxicity Criteria adverse effects 3.0 nausea and vomiting, scalp alopecia and anterior mucositis were associated with reconstructed mean brainstem dose >36 Gy, occipital scalp dose >30 Gy, and anterior mandible dose >34 Gy, respectively. Conclusions: Dose reduction to specified structures during IMRT implies an increased beam path dose to alternate nontarget structures that may result in clinical toxicities that were uncommon with previous, less conformal approaches. These findings have implications for IMRT treatment planning and research, toxicity assessment, and multidisciplinary patient management.

  13. A CONCISE PANEL OF BIOMARKERS IDENTIFIES NEUROCOGNITIVE FUNCTIONING CHANGES IN HIV-INFECTED INDIVIDUALS

    PubMed Central

    Marcotte, Thomas D.; Deutsch, Reena; Michael, Benedict Daniel; Franklin, Donald; Cookson, Debra Rosario; Bharti, Ajay R.; Grant, Igor; Letendre, Scott L.

    2013-01-01

    Background Neurocognitive (NC) impairment (NCI) occurs commonly in people living with HIV. Despite substantial effort, no biomarkers have been sufficiently validated for diagnosis and prognosis of NCI in the clinic. The goal of this project was to identify diagnostic or prognostic biomarkers for NCI in a comprehensively characterized HIV cohort. Methods Multidisciplinary case review selected 98 HIV-infected individuals and categorized them into four NC groups using normative data: stably normal (SN), stably impaired (SI), worsening (Wo), or improving (Im). All subjects underwent comprehensive NC testing, phlebotomy, and lumbar puncture at two timepoints separated by a median of 6.2 months. Eight biomarkers were measured in CSF and blood by immunoassay. Results were analyzed using mixed model linear regression and staged recursive partitioning. Results At the first visit, subjects were mostly middle-aged (median 45) white (58%) men (84%) who had AIDS (70%). Of the 73% who took antiretroviral therapy (ART), 54% had HIV RNA levels below 50 c/mL in plasma. Mixed model linear regression identified that only MCP-1 in CSF was associated with neurocognitive change group. Recursive partitioning models aimed at diagnosis (i.e., correctly classifying neurocognitive status at the first visit) were complex and required most biomarkers to achieve misclassification limits. In contrast, prognostic models were more efficient. A combination of three biomarkers (sCD14, MCP-1, SDF-1α) correctly classified 82% of Wo and SN subjects, including 88% of SN subjects. A combination of two biomarkers (MCP-1, TNF-α) correctly classified 81% of Im and SI subjects, including 100% of SI subjects. Conclusions This analysis of well-characterized individuals identified concise panels of biomarkers associated with NC change. Across all analyses, the two most frequently identified biomarkers were sCD14 and MCP-1, indicators of monocyte/macrophage activation. While the panels differed depending on the outcome and on the degree of misclassification, nearly all stable patients were correctly classified. PMID:24101401

  14. Stereotactic Body Radiotherapy for Early-stage Non-small-cell Lung Cancer in Patients 80 Years and Older: A Multi-center Analysis.

    PubMed

    Cassidy, Richard J; Patel, Pretesh R; Zhang, Xinyan; Press, Robert H; Switchenko, Jeffrey M; Pillai, Rathi N; Owonikoko, Taofeek K; Ramalingam, Suresh S; Fernandez, Felix G; Force, Seth D; Curran, Walter J; Higgins, Kristin A

    2017-09-01

    Stereotactic body radiotherapy (SBRT) is the standard of care for medically inoperable early-stage non-small-cell lung cancer. Despite the limited number of octogenarians and nonagenarians on trials of SBRT, its use is increasingly being offered in these patients, given the aging cancer population, medical fragility, or patient preference. Our purpose was to investigate the efficacy, safety, and survival of patients ≥ 80 years old treated with definitive lung SBRT. Patients who underwent SBRT were reviewed from 2009 to 2015 at 4 academic centers. Patients diagnosed at ≥ 80 years old were included. Kaplan-Meier and multivariate logistic regression and Cox proportional hazard regression analyses were performed. Recursive partitioning analysis was done to determine a subgroup of patients most likely to benefit from therapy. A total of 58 patients were included, with a median age of 84.9 years (range, 80.1-95.2 years), a median follow-up time of 19.9 months (range, 6.9-64.9 months), a median fraction size of 10.0 Gy (range, 7.0-20.0 Gy), and a median number of fractions of 5.0 (range, 3.0-8.0 fractions). On multivariate analysis, higher Karnofsky performance status (KPS) was associated with higher local recurrence-free survival (hazard ratio [HR], 0.92; P < .01), regional recurrence-free survival (HR, 0.94; P < .01), and overall survival (HR, 0.91; P < .01). On recursive partitioning analysis, patients with KPS ≥ 75 had improved 3-year cancer-specific and overall survival (99.4% and 91.9%, respectively) compared with patients with KPS < 75 (47.8% and 23.6%, respectively; P < .01). Definitive lung SBRT for early-stage non-small-cell lung cancer was efficacious and safe in patients ≥ 80 years old. Patients with a KPS of ≥ 75 derived the most benefit from therapy. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. A Clinical Decision Tree to Predict Whether a Bacteremic Patient Is Infected With an Extended-Spectrum β-Lactamase-Producing Organism.

    PubMed

    Goodman, Katherine E; Lessler, Justin; Cosgrove, Sara E; Harris, Anthony D; Lautenbach, Ebbing; Han, Jennifer H; Milstone, Aaron M; Massey, Colin J; Tamma, Pranita D

    2016-10-01

    Timely identification of extended-spectrum β-lactamase (ESBL) bacteremia can improve clinical outcomes while minimizing unnecessary use of broad-spectrum antibiotics, including carbapenems. However, most clinical microbiology laboratories currently require at least 24 additional hours from the time of microbial genus and species identification to confirm ESBL production. Our objective was to develop a user-friendly decision tree to predict which organisms are ESBL producing, to guide appropriate antibiotic therapy. We included patients ≥18 years of age with bacteremia due to Escherichia coli or Klebsiella species from October 2008 to March 2015 at Johns Hopkins Hospital. Isolates with ceftriaxone minimum inhibitory concentrations ≥2 µg/mL underwent ESBL confirmatory testing. Recursive partitioning was used to generate a decision tree to determine the likelihood that a bacteremic patient was infected with an ESBL producer. Discrimination of the original and cross-validated models was evaluated using receiver operating characteristic curves and by calculation of C-statistics. A total of 1288 patients with bacteremia met eligibility criteria. For 194 patients (15%), bacteremia was due to a confirmed ESBL producer. The final classification tree for predicting ESBL-positive bacteremia included 5 predictors: history of ESBL colonization/infection, chronic indwelling vascular hardware, age ≥43 years, recent hospitalization in an ESBL high-burden region, and ≥6 days of antibiotic exposure in the prior 6 months. The decision tree's positive and negative predictive values were 90.8% and 91.9%, respectively. Our findings suggest that a clinical decision tree can be used to estimate a bacteremic patient's likelihood of infection with ESBL-producing bacteria. Recursive partitioning offers a practical, user-friendly approach for addressing important diagnostic questions. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
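
    A rough scikit-learn analogue of deriving such a tree and estimating its discrimination is sketched below; the feature matrix, tree depth, leaf size, and 10-fold scheme are assumptions, and the study's own recursive-partitioning and validation details are not reproduced here.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      def esbl_tree(X, y, max_depth=3, seed=0):
          """Fit a shallow classification tree and report an out-of-fold C-statistic (ROC AUC)."""
          tree = DecisionTreeClassifier(max_depth=max_depth, min_samples_leaf=25, random_state=seed)
          # Out-of-fold predicted probabilities stand in for the internal cross-validation in the abstract.
          proba = cross_val_predict(tree, X, y, cv=10, method="predict_proba")[:, 1]
          auc = roc_auc_score(y, proba)
          return tree.fit(X, y), auc

    Here X would hold candidate predictors such as prior ESBL colonization or an age cutoff coded as columns, and y the confirmed-ESBL indicator; both are hypothetical inputs.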

  16. Prognostic Classification Factors Associated With Development of Multiple Autoantibodies, Dysglycemia, and Type 1 Diabetes-A Recursive Partitioning Analysis.

    PubMed

    Xu, Ping; Krischer, Jeffrey P

    2016-06-01

    To define prognostic classification factors associated with the progression from single to multiple autoantibodies, multiple autoantibodies to dysglycemia, and dysglycemia to type 1 diabetes onset in relatives of individuals with type 1 diabetes. Three distinct cohorts of subjects from the Type 1 Diabetes TrialNet Pathway to Prevention Study were investigated separately. A recursive partitioning analysis (RPA) was used to determine the risk classes. Clinical characteristics, including genotype, antibody titers, and metabolic markers were analyzed. Age and GAD65 autoantibody (GAD65Ab) titers defined three risk classes for progression from single to multiple autoantibodies. The 5-year risk was 11% for those subjects >16 years of age with low GAD65Ab titers, 29% for those ≤16 years of age with low GAD65Ab titers, and 45% for those subjects with high GAD65Ab titers regardless of age. Progression to dysglycemia was associated with islet antigen 2 Ab titers, and 2-h glucose and fasting C-peptide levels. The 5-year risk is 28%, 39%, and 51% for respective risk classes defined by the three predictors. Progression to type 1 diabetes was associated with the number of positive autoantibodies, peak C-peptide level, HbA1c level, and age. Four risk classes defined by RPA had a 5-year risk of 9%, 33%, 62%, and 80%, respectively. The use of RPA offered a new classification approach that could predict the timing of transitions from one preclinical stage to the next in the development of type 1 diabetes. Using these RPA classes, new prevention techniques can be tailored based on the individual prognostic risk characteristics at different preclinical stages. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  17. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology — previously developed by the authors — for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead to either an extension of the RFL (with a consequent economic gain without compromising the minimum safety requirements) or an increase of safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
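
    As a deliberately simplified illustration of recursive Bayesian updating in a fatigue setting, the sketch below performs a grid-based Bayes update of a single Paris-type growth coefficient from noisy crack-size measurements; the growth law, noise model, priors, and all names are assumptions for illustration, not the framework of the paper.

      import numpy as np

      def recursive_update(a_meas, dN, C_grid, prior, a0, m=3.0, dK_coef=1.0, sigma=0.2):
          """Recursively update the posterior over the growth coefficient C after each measurement."""
          C_grid = np.asarray(C_grid, dtype=float)
          post = np.asarray(prior, dtype=float).copy()
          a_pred = np.full_like(C_grid, a0)             # one candidate crack trajectory per grid value of C
          for a_obs in a_meas:
              # Propagate each candidate trajectory: da/dN = C * (dK)^m with dK ~ dK_coef * sqrt(a)
              a_pred = a_pred + C_grid * (dK_coef * np.sqrt(a_pred)) ** m * dN
              # Measurement update with Gaussian noise of standard deviation sigma
              lik = np.exp(-0.5 * ((a_obs - a_pred) / sigma) ** 2)
              post = post * lik
              post = post / post.sum()
          return post                                   # a sharper posterior translates into a sharper RFL estimate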

  18. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 results show good load balancing with almost perfect speedup factors that are close to linear with the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.

  19. A Phase III Study of Conventional Radiation Therapy Plus Thalidomide Versus Conventional Radiation Therapy for Multiple Brain Metastases (RTOG 0118)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knisely, Jonathan P.S.; Berkey, Brian; Chakravarti, Arnab

    2008-05-01

    Purpose: To compare whole-brain radiation therapy (WBRT) with WBRT combined with thalidomide for patients with brain metastases not amenable to resection or radiosurgery. Patients and Methods: Patients with Zubrod performance status 0-1, MRI-documented multiple (>3), large (>4 cm), or midbrain brain metastases arising from a histopathologically confirmed extracranial primary tumor, and an anticipated survival of >8 weeks were randomized to receive WBRT to a dose of 37.5 Gy in 15 fractions with or without thalidomide during and after WBRT. Prerandomization stratification used Radiation Therapy Oncology Group (RTOG) Recursive Partitioning Analysis (RPA) Class and whether post-WBRT chemotherapy was planned. Endpoints included overall survival, progression-free survival, time to neurocognitive progression, the cause of death, toxicities, and quality of life. A protocol-planned interim analysis documented that the trial had an extremely low probability of ever showing a significant difference favoring the thalidomide arm given the results at the time of the analysis, and it was therefore closed on the basis of predefined statistical guidelines. Results: Enrolled in the study were 332 patients. Of 183 accrued patients, 93 were randomized to receive WBRT alone and 90 to WBRT and thalidomide. Median survival was 3.9 months for both arms. No novel toxicities were seen, but thalidomide was not well tolerated in this population. Forty-eight percent of patients discontinued thalidomide because of side effects. Conclusion: Thalidomide provided no survival benefit for patients with multiple, large, or midbrain metastases when combined with WBRT; nearly half the patients discontinued thalidomide due to side effects.

  20. Predictive Factor Analysis of Response-Adapted Radiation Therapy for Chemotherapy-Sensitive Pediatric Hodgkin Lymphoma: Analysis of the Children's Oncology Group AHOD 0031 Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charpentier, Anne-Marie; Friedman, Debra L.; Wolden, Suzanne

    Purpose: To evaluate whether clinical risk factors could further distinguish children with intermediate-risk Hodgkin lymphoma (HL) with rapid early and complete anatomic response (RER/CR) who benefit significantly from involved-field RT (IFRT) from those who do not, and thereby aid refinement of treatment selection. Methods and Materials: Children with intermediate-risk HL treated on the Children's Oncology Group AHOD 0031 trial who achieved RER/CR with 4 cycles of chemotherapy, and who were randomized to 21-Gy IFRT or no additional therapy (n=716) were the subject of this study. Recursive partitioning analysis was used to identify factors associated with clinically and statistically significant improvement in event-free survival (EFS) after randomization to IFRT. Bootstrap sampling was used to evaluate the robustness of the findings. Results: Although most RER/CR patients did not benefit significantly from IFRT, those with a combination of anemia and bulky limited-stage disease (n=190) had significantly better 4-year EFS with the addition of IFRT (89.3% vs 77.9% without IFRT; P=.019); this benefit was consistently reproduced in bootstrap analyses and after adjusting for other prognostic factors. Conclusion: Although most patients achieving RER/CR had favorable outcomes with 4 cycles of chemotherapy alone, those children with initial bulky stage I/II disease and anemia had significantly better EFS with the addition of IFRT as part of combined-modality therapy. Further work evaluating the interaction of clinical and biologic factors and imaging response is needed to further optimize and refine treatment selection.

  1. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
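
    The transfinite interpolation step used to mesh each block can be sketched with the standard Coons-patch formula; the function below assumes the four boundary curves are supplied as point arrays with matching corner points, and it is not taken from the authors' grid generators (elliptic smoothing is omitted).

      import numpy as np

      def transfinite_interpolation(bottom, top, left, right):
          """Mesh a 2-D block from its boundary curves: bottom/top of shape (n, 2), left/right of shape (m, 2)."""
          n, m = len(bottom), len(left)
          u = np.linspace(0.0, 1.0, n)[:, None, None]   # parameter along bottom/top
          v = np.linspace(0.0, 1.0, m)[None, :, None]   # parameter along left/right
          B, T = bottom[:, None, :], top[:, None, :]
          L, R = left[None, :, :], right[None, :, :]
          grid = ((1 - v) * B + v * T + (1 - u) * L + u * R
                  - (1 - u) * (1 - v) * bottom[0] - u * (1 - v) * bottom[-1]
                  - (1 - u) * v * top[0] - u * v * top[-1])
          return grid                                   # interior and boundary nodes, shape (n, m, 2)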

  2. Informed and Uninformed Naïve Assessment Constructors' Strategies for Item Selection

    ERIC Educational Resources Information Center

    Fives, Helenrose; Barnes, Nicole

    2017-01-01

    We present a descriptive analysis of 53 naïve assessment constructors' explanations for selecting test items to include on a summative assessment. We randomly assigned participants to an informed and uninformed condition (i.e., informed participants read an article describing a Table of Specifications). Through recursive thematic analyses of…

  3. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to the diamond classification and to the prediction of a sun-spot process are demonstrated.

  4. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
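
    A minimal sketch of the level-controlled recursion described in this record is shown below: an image is split into quadrants, levels above switch_level fan out to worker threads (standing in for the parallel implementation), deeper levels recurse serially, and bottom_level is where the division stops. The quadtree split, thread pool, and all names are illustrative assumptions rather than the patented method.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def split4(img):
          """Split an image array into its four quadrants."""
          h, w = img.shape[:2]
          return [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
                  img[h // 2:, :w // 2], img[h // 2:, w // 2:]]

      def recursive_divide(img, level, bottom_level, switch_level):
          if level == bottom_level:
              return [img]                              # leaf section: run the serial segmentation here
          quadrants = split4(img)
          if level < switch_level:                      # parallel portion of the recursion
              with ThreadPoolExecutor(max_workers=4) as pool:
                  results = list(pool.map(
                      lambda q: recursive_divide(q, level + 1, bottom_level, switch_level),
                      quadrants))
              return [tile for sub in results for tile in sub]
          # serial portion of the recursion
          return [tile for q in quadrants
                  for tile in recursive_divide(q, level + 1, bottom_level, switch_level)]

    For example, recursive_divide(image, level=0, bottom_level=4, switch_level=2) yields 256 leaf tiles, with only the top two levels dispatched concurrently.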

  5. Is recursion language-specific? Evidence of recursive mechanisms in the structure of intentional action.

    PubMed

    Vicari, Giuseppe; Adenzato, Mauro

    2014-05-01

    In their 2002 seminal paper Hauser, Chomsky and Fitch hypothesize that recursion is the only human-specific and language-specific mechanism of the faculty of language. While debate focused primarily on the meaning of recursion in the hypothesis and on the human-specific and syntax-specific character of recursion, the present work focuses on the claim that recursion is language-specific. We argue that there are recursive structures in the domain of motor intentionality by way of extending John R. Searle's analysis of intentional action. We then discuss evidence from cognitive science and neuroscience supporting the claim that motor-intentional recursion is language-independent and suggest some explanatory hypotheses: (1) linguistic recursion is embodied in sensory-motor processing; (2) linguistic and motor-intentional recursions are distinct and mutually independent mechanisms. Finally, we propose some reflections about the epistemic status of HCF as presenting an empirically falsifiable hypothesis, and on the possibility of testing recursion in different cognitive domains. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Recursive partitioning identifies greater than 4 U of packed red blood cells per hour as an improved massive transfusion definition.

    PubMed

    Moren, Alexis Marika; Hamptom, David; Diggs, Brian; Kiraly, Laszlo; Fox, Erin E; Holcomb, John B; Rahbar, Mohammad Hossein; Brasel, Karen J; Cohen, Mitchell Jay; Bulger, Eileen M; Schreiber, Martin A

    2015-12-01

    Massive transfusion (MT) is classically defined as greater than 10 U of packed red blood cells (PRBCs) in 24 hours. This fails to capture the most severely injured patients. Extending the previous work of Savage and Rahbar, a rolling hourly rate-based definition of MT may more accurately define critically injured patients requiring early, aggressive resuscitation. The Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) trial collected data from 10 Level 1 trauma centers. Patients were placed into rate-based transfusion groups by maximal number of PRBCs transfused in any hour within the first 6 hours. A nonparametric analysis using classification trees partitioned data according to mortality at 24 hours using a predictor variable of maximum number PRBC units transfused in an hour. Dichotomous variables significant in previous scores and models as predictors of MT were used to identify critically ill patients: a positive finding on Focused Assessment with Sonography in Trauma (FAST) examination, Glasgow Coma Scale (GCS) score less than 8, heart rate greater than 120 beats/min, systolic blood pressure less than 90 mm Hg, penetrating mechanism of injury, international normalized ratio greater than 1.5, hemoglobin less than 11, and base deficit greater than 5. These critical indicators were then compared among the nodes of the classification tree. Patients omitted included those who did not receive PRBCs (n = 24) and those who did not have all eight critical indicators reported (n = 449). In a population of 1,245 patients, the classification tree included 772 patients. Analysis by recursive partitioning showed increased mortality among patients receiving greater than 13 U/h (73.9%, p < 0.01). In those patients receiving less than or equal to 13 U/h, mortality was greater in patients who received more than 4 U/h (16.7% vs. 6.0%, p < 0.01) (Fig. 1). Nodal analysis showed that the median number of critical indicators for each node was 3 (2-4) (≤4 U/h), 4 (3-5) (>4 U/h and ≤13 U/h), and 5 (4-5.5) (>13 U/h). A rate-based transfusion definition identifies a difference in mortality in patients who receive greater than 4 U/h of PRBCs. Redefining MT to greater than 4 U/h allows early identification of patients with a significant mortality risk who may be missed by the current definition. Prognostic/epidemiologic study, level III.
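
    A scikit-learn sketch of how a single-predictor classification tree can expose rate cut points of this kind is given below; the column names, leaf limits, and use of CART in place of the study's nonparametric partitioning are assumptions.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def rate_cutpoints(max_prbc_per_hr, died_24h, n_leaves=3, seed=0):
          """Partition patients by their maximum hourly PRBC rate and return the learned cut points."""
          X = np.asarray(max_prbc_per_hr, dtype=float).reshape(-1, 1)
          y = np.asarray(died_24h, dtype=int)
          tree = DecisionTreeClassifier(max_leaf_nodes=n_leaves, min_samples_leaf=30,
                                        random_state=seed).fit(X, y)
          cuts = sorted(t for t in tree.tree_.threshold if t != -2)   # -2 marks leaf nodes in sklearn
          return tree, cuts

    On data resembling that described above, the returned cut points would be expected to fall near the >4 U/h and >13 U/h splits reported in the abstract.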

  7. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…).

    1. BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative or gradient information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second derivative or Hessian matrix information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
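
    For reference, the generic exponentially weighted RLS recursion that such second-order training schemes build on is sketched below. This is the textbook linear RLS update only; the paper's MLP-specific linearization of pattern errors and the backpropagated derivatives it relies on are not shown, and the class and parameter names are assumptions.

      import numpy as np

      class RecursiveLeastSquares:
          """Exponentially weighted RLS for a linear-in-parameters model y ~ w . x."""
          def __init__(self, n_params, lam=0.99, delta=100.0):
              self.w = np.zeros(n_params)
              self.P = delta * np.eye(n_params)     # running estimate of the inverse (pseudo) Hessian
              self.lam = lam                        # forgetting factor

          def update(self, x, y):
              Px = self.P @ x
              k = Px / (self.lam + x @ Px)          # gain vector
              err = y - self.w @ x                  # a priori prediction error
              self.w = self.w + k * err             # O(N^2) parameter update
              self.P = (self.P - np.outer(k, Px)) / self.lam
              return err

    A block-diagonal approximation of P, as discussed in the abstract, would simply maintain one such matrix per parameter group instead of the full N x N inverse Hessian estimate.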

  8. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
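
    As a minimal illustration of spectral partitioning into two equal-sized communities (the setting treated analytically above), the sketch below splits a two-block random graph by the sign pattern of the second-largest adjacency eigenvector. The block sizes and edge probabilities are arbitrary choices, not the equitable-graph ensemble studied in the paper.

    ```python
    # Minimal sketch: spectral bisection of a two-block random graph using the
    # eigenvector of the second-largest adjacency eigenvalue. Parameters are
    # illustrative, not the paper's equitable-graph ensemble.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100                                    # nodes per block
    p_in, p_out = 0.20, 0.02                   # within/between-block edge probabilities

    blocks = np.repeat([0, 1], n)
    P = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
    A = (rng.random((2 * n, 2 * n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                # symmetric adjacency, no self-loops

    _, vecs = np.linalg.eigh(A)
    v2 = vecs[:, -2]                           # second-largest eigenvalue's eigenvector
    labels = (v2 > np.median(v2)).astype(int)  # equal-sized split via the median

    accuracy = max((labels == blocks).mean(), (labels != blocks).mean())
    print(f"recovered {accuracy:.2%} of the planted partition")
    ```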

  9. Spectral partitioning in equitable graphs

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.

  10. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    NASA Astrophysics Data System (ADS)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has developed many applications recently that would benefit from the use of the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm method in combination with hexagonal partitioning is simulated, analyzed, and advantages of this approach are presented and discussed.
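
    The toy sketch below illustrates the two ingredients on a small scale: cells of a hexagonal grid in axial coordinates, and agents that repeatedly move to a randomly chosen neighboring cell that is not on their tabu list until one of them reaches the target cell. The grid size, tabu length, and movement rule are arbitrary assumptions; the paper's Tabu Random algorithm and beehive mapping may differ in detail.

    ```python
    # Toy sketch: cooperative agents performing a tabu-constrained random walk
    # over a hexagonal grid (axial coordinates). Parameters are illustrative only.
    import random
    from collections import deque

    HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # 6 hex neighbors
    SIZE = 8                                                         # grid "radius"

    def in_grid(cell):
        q, r = cell
        return abs(q) <= SIZE and abs(r) <= SIZE and abs(q + r) <= SIZE

    def search(n_agents=3, tabu_len=10, max_steps=5000, seed=3):
        rng = random.Random(seed)
        target = (SIZE // 2, -SIZE // 2)
        agents = [(0, 0)] * n_agents
        tabu = [deque(maxlen=tabu_len) for _ in range(n_agents)]
        for step in range(max_steps):
            for i, pos in enumerate(agents):
                neighbors = [(pos[0] + dq, pos[1] + dr) for dq, dr in HEX_DIRS]
                neighbors = [c for c in neighbors if in_grid(c)]
                allowed = [c for c in neighbors if c not in tabu[i]] or neighbors
                agents[i] = rng.choice(allowed)
                tabu[i].append(agents[i])
                if agents[i] == target:
                    return step
        return None

    print("target reached at step:", search())
    ```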

  11. Kalman filter for statistical monitoring of forest cover across sub-continental regions

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a multivariate generalization of the composite estimator which recursively combines a current direct estimate with a past estimate that is updated for expected change over time with a prediction model. The Kalman filter can estimate proportions of different cover types for sub-continental regions each year. A random sample of high-resolution...
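
    A minimal sketch of the composite-estimator idea: project last year's estimate of cover-type proportions forward with a change model, then combine it with this year's noisy direct estimate via the Kalman gain. The transition matrix, covariances, and numbers below are invented placeholders, not values from the record.

    ```python
    # Minimal multivariate Kalman filter sketch: combine a model-projected prior
    # estimate of cover-type proportions with a noisy direct (sample) estimate.
    # The transition matrix and covariances are invented for illustration.
    import numpy as np

    T = np.array([[0.97, 0.02, 0.01],      # assumed annual cover-change model
                  [0.01, 0.98, 0.01],
                  [0.01, 0.03, 0.96]])
    Q = 1e-4 * np.eye(3)                   # process (change-model) noise
    R = 4e-4 * np.eye(3)                   # direct-estimate sampling variance

    x = np.array([0.60, 0.30, 0.10])       # last year's estimated proportions
    P = 1e-3 * np.eye(3)

    # one annual update
    x_pred = T @ x                         # project forward with the change model
    P_pred = T @ P @ T.T + Q

    z = np.array([0.58, 0.31, 0.11])       # this year's direct sample estimate
    K = P_pred @ np.linalg.inv(P_pred + R) # Kalman gain (we observe x directly)
    x = x_pred + K @ (z - x_pred)          # composite (filtered) estimate
    P = (np.eye(3) - K) @ P_pred

    print(np.round(x, 3), "sum =", round(float(x.sum()), 3))
    ```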

  12. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
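
    The construction is, in modern terms, the conditional-distribution method: x_1 inverts the marginal CDF at U_1, x_2 inverts the conditional CDF given x_1 at U_2, and so on. The sketch below applies it to a simple bivariate example where the conditional inverses are available in closed form; the particular target distribution is my choice for illustration, not one from the report.

    ```python
    # Minimal sketch of the recursive construction x1 = f1(U1), x2 = f2(U1, U2)
    # built from inverse marginal/conditional CDFs. Illustrative target: (X, Y)
    # uniform on the triangle 0 < y < x < 1, so F_X(x) = x^2 and Y | X = x is
    # uniform on (0, x).
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    u1, u2 = rng.random(n), rng.random(n)    # independent Uniform(0, 1) inputs

    x = np.sqrt(u1)                          # f1: invert F_X(x) = x^2
    y = x * u2                               # f2: invert F_{Y|X}(y) = y / x

    # Quick check: for this law, P(Y < 0.5) = 3/4 exactly.
    print(round(float((y < 0.5).mean()), 3), "vs exact 0.75")
    ```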

  13. CpG island methylation profile in non-invasive oral rinse samples is predictive of oral and pharyngeal carcinoma.

    PubMed

    Langevin, Scott M; Eliot, Melissa; Butler, Rondi A; Cheong, Agnes; Zhang, Xiang; McClean, Michael D; Koestler, Devin C; Kelsey, Karl T

    2015-01-01

    There are currently no screening tests in routine use for oral and pharyngeal cancer beyond visual inspection and palpation, which are provided on an opportunistic basis, indicating a need for development of novel methods for early detection, particularly in high-risk populations. We sought to address this need through comprehensive interrogation of CpG island methylation in oral rinse samples. We used the Infinium HumanMethylation450 BeadArray to interrogate DNA methylation in oral rinse samples collected from 154 patients with incident oral or pharyngeal carcinoma prior to treatment and 72 cancer-free control subjects. Subjects were randomly allocated to either a training or a testing set. For each subject, average methylation was calculated for each CpG island represented on the array. We applied a semi-supervised recursively partitioned mixture model to the CpG island methylation data to identify a classifier for prediction of case status in the training set. We then applied the resultant classifier to the testing set for validation and to assess the predictive accuracy. We identified a methylation classifier comprised of 22 CpG islands, which predicted oral and pharyngeal carcinoma with a high degree of accuracy (AUC = 0.92, 95 % CI 0.86, 0.98). This novel methylation panel is a strong predictor of oral and pharyngeal carcinoma case status in oral rinse samples and may have utility in early detection and post-treatment follow-up.

  14. Teaching and learning recursive programming: a review of the research literature

    NASA Astrophysics Data System (ADS)

    McCauley, Renée; Grissom, Scott; Fitzgerald, Sue; Murphy, Laurie

    2015-01-01

    Hundreds of articles have been published on the topics of teaching and learning recursion, yet fewer than 50 of them have published research results. This article surveys the computing education research literature and presents findings on challenges students encounter in learning recursion, mental models students develop as they learn recursion, and best practices in introducing recursion. Effective strategies for introducing the topic include using different contexts such as recurrence relations, programming examples, fractal images, and a description of how recursive methods are processed using a call stack. Several studies compared the efficacy of introducing iteration before recursion and vice versa. The paper concludes with suggestions for future research into how students learn and understand recursion, including a look at the possible impact of instructor attitude and newer pedagogies.

  15. Stereotactic radiosurgery alone versus resection plus whole-brain radiotherapy for 1 or 2 brain metastases in recursive partitioning analysis class 1 and 2 patients.

    PubMed

    Rades, Dirk; Bohlen, Guenther; Pluemer, Andre; Veninga, Theo; Hanssens, Patrick; Dunst, Juergen; Schild, Steven E

    2007-06-15

    The objective of this study was to compare stereotactic radiosurgery (SRS) alone with resection plus whole-brain radiotherapy (WBRT) for the treatment of patients in recursive partitioning analysis (RPA) class 1 and 2 who had 1 or 2 brain metastases. Two hundred six patients in RPA class 1 and 2 who had 1 or 2 brain metastases were analyzed retrospectively. Patients in Group A (n = 94) received from 18 grays (Gy) to 25 Gy SRS, and patients in Group B (n = 112) underwent resection of their metastases and received 10 x 3 Gy/20 x 2 Gy WBRT. Eight other potential prognostic factors were evaluated regarding overall survival (OS), brain control (BC), and local control (LC) of treated metastases: age, sex, performance status, tumor type, number of brain metastases, extracranial metastases, RPA class, and interval from tumor diagnosis to treatment of brain metastases. A comparison of the 2 treatment groups did not reveal significantly different OS (P = .19), BC (P = .52), or LC (P = .25). In RPA subgroup analyses, outcome also did not differ significantly for either RPA class of patients (P values from .21 to .83). On multivariate analysis, improved OS was associated with age ≤60 years (relative risk [RR], 1.75; P = .002), better performance status (RR, 1.67; P = .015), no extracranial metastases (RR, 2.84; P < .001), interval from tumor diagnosis to treatment >12 months (RR, 1.70; P = .003), and RPA class 1 (RR, 1.51; P = .016). Improved BC was associated with a single metastasis (RR, 1.54; P = .034) and an interval from tumor diagnosis to treatment >12 months (RR, 1.58; P = .019), and improved LC was associated with an interval from tumor diagnosis to treatment >12 months (RR, 1.59; P = .047). SRS alone appeared to be as effective as resection plus WBRT in the treatment of 1 or 2 brain metastases for patients in RPA class 1 and 2. Patient outcomes were associated with age, Karnofsky performance status, number of brain metastases, extracranial metastases, RPA class, and interval from tumor diagnosis to treatment. Copyright 2007 American Cancer Society.

  16. Dosimetric Predictors of Duodenal Toxicity After Intensity Modulated Radiation Therapy for Treatment of the Para-aortic Nodes in Gynecologic Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verma, Jonathan; Sulman, Erik P.; Jhingran, Anuja

    Purpose: To determine the incidence of duodenal toxicity in patients receiving intensity modulated radiation therapy (IMRT) for treatment of para-aortic nodes and to identify dosimetric parameters predictive of late duodenal toxicity. Methods and Materials: We identified 105 eligible patients with gynecologic malignancies who were treated with IMRT for gross metastatic disease in the para-aortic nodes from January 1, 2005, through December 31, 2009. Patients were treated to a nodal clinical target volume to 45 to 50.4 Gy with a boost to 60 to 66 Gy. The duodenum was contoured, and dosimetric data were exported for analysis. Duodenal toxicity was scored according to Radiation Therapy Oncology Group criteria. Univariate Cox proportional hazards analysis and recursive partitioning analysis were used to determine associations between dosimetric variables and time to toxicity and to identify the optimal threshold that separated patients according to risk of toxicity. Results: Nine of the 105 patients experienced grade 2 to grade 5 duodenal toxicity, confirmed by endoscopy in all cases. The 3-year actuarial rate of any duodenal toxicity was 11.7%. A larger volume of the duodenum receiving 55 Gy (V55) was associated with higher rates of duodenal toxicity. The 3-year actuarial rates of duodenal toxicity with V55 above and below 15 cm³ were 48.6% and 7.4%, respectively (P<.01). In Cox univariate analysis of dosimetric variables, V55 was associated with duodenal toxicity (P=.029). In recursive partitioning analysis, V55 less than 13.94% segregated all patients with duodenal toxicity. Conclusions: Dose-escalated IMRT can safely and effectively treat para-aortic nodal disease in gynecologic malignancies, provided that care is taken to limit the dose to the duodenum to reduce the risk of late duodenal toxicity. Limiting V55 to below 15 cm³ may reduce the risk of duodenal complications. In cases where the treatment cannot be delivered within these constraints, consideration should be given to other treatment approaches such as resection or initial chemotherapy.

  17. Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution

    PubMed Central

    2013-01-01

    Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459
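
    As a schematic of the greedy wrapper strategy (with a much simpler score than the generalized-Dirichlet likelihood used in the paper), the sketch below adds, at each step, the feature that most improves cross-validated classification of the known groups and stops when no meaningful gain remains. The scoring model and synthetic data are stand-ins; only the greedy search structure is the point.

    ```python
    # Minimal sketch of greedy forward feature selection. The score here is
    # cross-validated logistic-regression accuracy, a stand-in for the paper's
    # generalized-Dirichlet-based criterion; the data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n, p, informative = 120, 15, 3
    X = rng.normal(size=(n, p))
    y = (X[:, :informative].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

    def score(cols):
        model = LogisticRegression(max_iter=1000)
        return cross_val_score(model, X[:, cols], y, cv=5).mean()

    selected, remaining, best_score = [], list(range(p)), 0.0
    while remaining:
        new_score, j_best = max((score(selected + [j]), j) for j in remaining)
        if new_score <= best_score + 1e-3:        # stop when no meaningful gain
            break
        selected.append(j_best)
        remaining.remove(j_best)
        best_score = new_score

    print("selected features:", selected, "cv accuracy:", round(best_score, 3))
    ```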

  18. Recursive partitioning analysis of 1999 Radiation Therapy Oncology Group (RTOG) patients with locally-advanced non-small-cell lung cancer (LA-NSCLC): identification of five groups with different survival.

    PubMed

    Werner-Wasik, M; Scott, C; Cox, J D; Sause, W T; Byhardt, R W; Asbell, S; Russell, A; Komaki, R; Lee, J S

    2000-12-01

    Survival of patients with locally-advanced non-small-cell lung cancer (LA-NSCLC) is predicted by the stage of the disease and other characteristics. This analysis was undertaken to identify these characteristics in a large cooperative group patient population, as well as to define subgroups of the population with differing outcomes. Analysis included 1,999 patients treated in 9 RTOG trials between 1983 and 1994 with thoracic irradiation (RT) with (n = 355) or without chemotherapy (CT). In univariate analysis, the following characteristics were significantly associated with an improved survival: use of CT, CT delivered without major deviation, abnormal pulmonary function tests, normal hemoglobin, protein, LDH and BUN, presence of dyspnea, hemoptysis, cough or hoarseness, uninvolved lymph nodes, T1 or T2 stage, no malignant pleural effusion (PE), weight loss of < 8%, Karnofsky performance status (KPS) of at least 90, adenocarcinoma histology, female gender, and age less than 70 years. Recursive partitioning analysis (RPA) was subsequently applied to identify 5 patient subgroups with significantly different median survival times (MST): Group I, KPS ≥ 90, who received chemotherapy (MST 16.2 months); Group II, KPS ≥ 90, who received no CT, but had no PE (MST 11.9 months); Group III, KPS < 90, younger than 70 years, with non-large cell histology (MST 9.6 months); Group IV, KPS ≥ 90, but with PE, or KPS < 90, younger than 70 years, and with large cell histology, or older than 70 years, but without PE (MST 5.6-6.4 months); Group V, older than 70, with PE (MST 2.9 months). Cisplatinum-based CT improves survival over RT alone for excellent-prognosis LA-NSCLC patients. The presence of a malignant pleural effusion is a major negative prognostic factor for survival. The identification of RPA prognostic groups among patients with LA-NSCLC provides prognostic information and may serve as a basis of stratification in future trials.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castle, Katherine O., E-mail: kocastle@mdanderson.org; Hoffman, Karen E.; Levy, Lawrence B.

    Purpose: The benefit of adding androgen deprivation therapy (ADT) to dose-escalated radiation therapy (RT) for men with intermediate-risk prostate cancer is unclear; therefore, we assessed the impact of adding ADT to dose-escalated RT on freedom from failure (FFF). Methods: Three groups of men treated with intensity modulated RT or 3-dimensional conformal RT (75.6-78 Gy) from 1993-2008 for prostate cancer were categorized as (1) 326 intermediate-risk patients treated with RT alone, (2) 218 intermediate-risk patients treated with RT and ≤6 months of ADT, and (3) 274 low-risk patients treated with definitive RT. Median follow-up was 58 months. Recursive partitioning analysis based on FFF using Gleason score (GS), T stage, and pretreatment PSA concentration was applied to the intermediate-risk patients treated with RT alone. The Kaplan-Meier method was used to estimate 5-year FFF. Results: Based on recursive partitioning analysis, intermediate-risk patients treated with RT alone were divided into 3 prognostic groups: (1) 188 favorable patients: GS 6, ≤T2b or GS 3+4, ≤T1c; (2) 71 marginal patients: GS 3+4, T2a-b; and (3) 68 unfavorable patients: GS 4+3 or T2c disease. Hazard ratios (HR) for recurrence in each group were 1.0, 2.1, and 4.6, respectively. When intermediate-risk patients treated with RT alone were compared to intermediate-risk patients treated with RT and ADT, the greatest benefit from ADT was seen for the unfavorable intermediate-risk patients (FFF, 74% vs 94%, respectively; P=.005). Favorable intermediate-risk patients had no significant benefit from the addition of ADT to RT (FFF, 94% vs 95%, respectively; P=.85), and FFF for favorable intermediate-risk patients treated with RT alone approached that of low-risk patients treated with RT alone (98%). Conclusions: Patients with favorable intermediate-risk prostate cancer did not benefit from the addition of ADT to dose-escalated RT, and their FFF was nearly as good as patients with low-risk disease. In patients with GS 4+3 or T2c disease, the addition of ADT to dose-escalated RT did improve FFF.

  20. Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing

    ERIC Educational Resources Information Center

    Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel

    2016-01-01

    Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
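
    To make the translation rules concrete, here is the standard kind of example such a method operates on: a linear-recursive function and a mechanically derived iterative equivalent (this particular function is my illustration, not one taken from the paper).

    ```python
    # Illustration of recursion removal for a linear-recursive definition:
    # a recursive digit-sum function and its iterative translation.
    def digit_sum_rec(n: int) -> int:
        if n < 10:                                # base case
            return n
        return n % 10 + digit_sum_rec(n // 10)    # single recursive call (linear recursion)

    def digit_sum_iter(n: int) -> int:
        acc = 0                     # accumulator replaces the pending additions
        while n >= 10:              # loop condition = negation of the base case
            acc += n % 10
            n //= 10
        return acc + n              # apply the base case last

    assert digit_sum_rec(98765) == digit_sum_iter(98765) == 35
    ```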

  1. Random crystal field effects on the integer and half-integer mixed-spin system

    NASA Astrophysics Data System (ADS)

    Yigit, Ali; Albayrak, Erhan

    2018-05-01

    In this work, we have focused on the random crystal field effects on the phase diagrams of the mixed spin-1 and spin-5/2 Ising system obtained by utilizing the exact recursion relations (ERR) on the Bethe lattice (BL). The distribution function P(D_i) = p δ[D_i - D(1 + α)] + (1 - p) δ[D_i - D(1 - α)] is used to randomize the crystal field. The phase diagrams are found to exhibit second- and first-order phase transitions depending on the values of α, D and p. It is also observed that the model displays a tricritical point, an isolated point, a critical end point and three compensation temperatures for suitable values of the system parameters.

  2. Moderate Deviations for Recursive Stochastic Algorithms

    DTIC Science & Technology

    2014-08-02

    ...to (2.14). Because of this, the (random) Radon-Nikodym derivatives f_i^n(y) are well defined and can be selected in a measurable way. We will control the magnitude of the noise when the Radon-Nikodym derivative is large by bounding...

  3. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
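
    A minimal sketch of the decomposition step: controlled variables are clustered with affinity propagation on the (absolute) correlation structure of their historical data, and each resulting cluster is treated as one subsystem. The synthetic data and the correlation-based similarity are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Minimal sketch: partition controlled variables into subsystems by clustering
    # their absolute correlation matrix with affinity propagation. Synthetic data;
    # the similarity choice is an illustrative assumption.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(6)
    t = 500
    latent = rng.normal(size=(t, 2))                 # two underlying process modes
    noise = 0.3 * rng.normal(size=(t, 6))
    # variables 0-2 driven by mode 0, variables 3-5 by mode 1
    Y = np.column_stack([latent[:, 0]] * 3 + [latent[:, 1]] * 3) + noise

    similarity = np.abs(np.corrcoef(Y.T))            # 6 x 6 similarity matrix
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(similarity)

    for k in np.unique(labels):
        print(f"subsystem {k}: controlled variables {np.where(labels == k)[0].tolist()}")
    ```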

  4. Identification of individuals with ADHD using the Dean-Woodcock sensory motor battery and a boosted tree algorithm.

    PubMed

    Finch, Holmes W; Davis, Andrew; Dean, Raymond S

    2015-03-01

    The accurate and early identification of individuals with pervasive conditions such as attention deficit hyperactivity disorder (ADHD) is crucial to ensuring that they receive appropriate and timely assistance and treatment. Heretofore, identification of such individuals has proven somewhat difficult, typically involving clinical decision making based on descriptions and observations of behavior, in conjunction with the administration of cognitive assessments. The present study reports on the use of a sensory motor battery in conjunction with a recursive partitioning computer algorithm, boosted trees, to develop a prediction heuristic for identifying individuals with ADHD. Results of the study demonstrate that this method is able to do so with accuracy rates of over 95 %, much higher than the popular logistic regression model against which it was compared. Implications of these results for practice are provided.
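
    A small sketch of the kind of comparison reported above: a boosted-tree classifier versus logistic regression on the same features, scored on a held-out split. The synthetic features stand in for the sensory-motor battery subtests, so the printed accuracies carry no clinical meaning.

    ```python
    # Minimal sketch: boosted classification trees vs. logistic regression for a
    # binary diagnostic label. Synthetic stand-in features; results illustrative.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(7)
    n, p = 600, 12
    X = rng.normal(size=(n, p))
    # nonlinear signal (interaction) that trees capture more easily than a linear model
    y = ((X[:, 0] * X[:, 1] > 0.2) ^ (X[:, 2] > 0.8)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [("boosted trees", GradientBoostingClassifier(random_state=0)),
                        ("logistic regression", LogisticRegression(max_iter=1000))]:
        model.fit(X_tr, y_tr)
        print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
    ```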

  5. A study of the x-ray image quality improvement in the examination of the respiratory system based on the new image processing technique

    NASA Astrophysics Data System (ADS)

    Nagai, Yuichi; Kitagawa, Mayumi; Torii, Jun; Iwase, Takumi; Aso, Tomohiko; Ihara, Kanyu; Fujikawa, Mari; Takeuchi, Yumiko; Suzuki, Katsumi; Ishiguro, Takashi; Hara, Akio

    2014-03-01

    Recently, the double contrast technique in gastrointestinal examinations and the transbronchial lung biopsy in examinations of the respiratory system [1-3] have made remarkable progress. In the transbronchial lung biopsy especially, better quality of x-ray fluoroscopic images is required because this examination is performed under the guidance of x-ray fluoroscopic images. At the same time, various image processing methods [4] for x-ray fluoroscopic images have been developed as x-ray systems with flat panel detectors [5-7] have come into wide use. Recursive filtering is an effective method to reduce random noise in x-ray fluoroscopic images. However, its noise reduction is limited when a moving object is present, because recursive filtering reduces noise by averaging the last few images. After recursive filtering, a residual signal is produced wherever a moving object appears in the x-ray images, and this residual signal disturbs the smooth conduct of the examination. To improve this situation, a new noise reduction method has been developed. Adaptive Noise Reduction (ANR) is a noise reduction technique that reduces only the noise, regardless of moving objects in the x-ray fluoroscopic images. The ANR is therefore well suited to the transbronchial lung biopsy performed under x-ray fluoroscopic guidance, because the residual signal caused by moving objects is never produced after the ANR. In this paper, we explain the advantage of the ANR by comparing the performance of ANR images with that of conventional recursive filtering images.
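
    In its simplest form, the recursive filtering referred to above is a first-order temporal IIR average of successive frames; the sketch below shows that recursion and why a moving object leaves a residual trail behind it. The simulated frames and the mixing weight are illustrative assumptions, and the ANR algorithm itself is not reproduced here.

    ```python
    # Minimal sketch of recursive (first-order IIR) temporal filtering of a frame
    # sequence: out[t] = a * frame[t] + (1 - a) * out[t-1]. A moving bright spot
    # is simulated to show the residual "lag" artifact behind it.
    import numpy as np

    rng = np.random.default_rng(8)
    h, w, n_frames, a = 64, 64, 30, 0.25      # 'a' = mixing weight (assumed value)

    filtered = np.zeros((h, w))
    for t in range(n_frames):
        frame = 10.0 + rng.normal(0.0, 2.0, size=(h, w))   # noisy background
        frame[h // 2, 5 + t] += 50.0                       # object moving to the right
        filtered = a * frame + (1.0 - a) * filtered        # recursive update

    print("background noise std, last raw frame vs filtered:",
          round(float(np.std(frame[0, :])), 2), round(float(np.std(filtered[0, :])), 2))
    print("residual trail behind the moving object:",
          np.round(filtered[h // 2, 30:35], 1))
    ```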

  6. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subjected to various uncertainties such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulations (MCS), which require a rerun of MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion runs were conducted for various flight conditions, material properties, inspection scheduling, POD, and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were conducted in parallel on DoD High Performance Computers (HPC) using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulations.
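
    As a scaled-down illustration of the Monte Carlo baseline that RPI is verified against, the sketch below grows random crack histories, applies scheduled inspections with a probability-of-detection (POD) curve, and estimates the probability of reaching a critical crack size. All growth-law parameters, the POD curve, and the schedule are invented; RPI itself (reusing baseline histories across maintenance plans) is not implemented here.

    ```python
    # Toy Monte Carlo sketch of fatigue-life risk with scheduled inspections and a
    # probability-of-detection (POD) curve. All parameters are invented; this is a
    # baseline illustration, not the RPI method.
    import numpy as np

    rng = np.random.default_rng(9)
    n_sims, n_cycles = 5_000, 300
    a_crit = 10.0                                   # critical crack size (assumed units)
    inspections = {100, 200}                        # inspection schedule (cycles)

    def pod(a):                                     # assumed POD curve
        return 1.0 / (1.0 + np.exp(-(a - 2.0)))

    failures = 0
    for _ in range(n_sims):
        a = rng.lognormal(mean=-2.0, sigma=0.5)     # random initial flaw size
        growth = rng.lognormal(mean=-4.0, sigma=0.3)
        for cycle in range(1, n_cycles + 1):
            a *= np.exp(growth)                     # multiplicative crack growth
            if a >= a_crit:
                failures += 1
                break
            if cycle in inspections and rng.random() < pod(a):
                a = rng.lognormal(mean=-2.0, sigma=0.5)   # detected: repair/replace
    print("estimated probability of failure:", failures / n_sims)
    ```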

  7. Categorial Compositionality III: F-(co)algebras and the Systematicity of Recursive Capacities in Human Cognition

    PubMed Central

    Phillips, Steven; Wilson, William H.

    2012-01-01

    Human cognitive capacity includes recursively definable concepts, which are prevalent in domains involving lists, numbers, and languages. Cognitive science currently lacks a satisfactory explanation for the systematic nature of such capacities (i.e., why the capacity for some recursive cognitive abilities–e.g., finding the smallest number in a list–implies the capacity for certain others–finding the largest number, given knowledge of number order). The category-theoretic constructs of initial F-algebra, catamorphism, and their duals, final coalgebra and anamorphism provide a formal, systematic treatment of recursion in computer science. Here, we use this formalism to explain the systematicity of recursive cognitive capacities without ad hoc assumptions (i.e., to the same explanatory standard used in our account of systematicity for non-recursive capacities). The presence of an initial algebra/final coalgebra explains systematicity because all recursive cognitive capacities, in the domain of interest, factor through (are composed of) the same component process. Moreover, this factorization is unique, hence no further (ad hoc) assumptions are required to establish the intrinsic connection between members of a group of systematically-related capacities. This formulation also provides a new perspective on the relationship between recursive cognitive capacities. In particular, the link between number and language does not depend on recursion, as such, but on the underlying functor on which the group of recursive capacities is based. Thus, many species (and infants) can employ recursive processes without having a full-blown capacity for number and language. PMID:22514704

  8. What's special about human language? The contents of the "narrow language faculty" revisited.

    PubMed

    Traxler, Matthew J; Boudewyn, Megan; Loudermilk, Jessica

    2012-10-01

    In this review we re-evaluate the recursion-only hypothesis, advocated by Fitch, Hauser and Chomsky (Hauser, Chomsky & Fitch, 2002; Fitch, Hauser & Chomsky, 2005). According to the recursion-only hypothesis, the property that distinguishes human language from animal communication systems is recursion, which refers to the potentially infinite embedding of one linguistic representation within another of the same type. This hypothesis predicts (1) that non-human primates and other animals lack the ability to learn recursive grammar, and (2) that recursive grammar is the sole cognitive mechanism that is unique to human language. We first review animal studies of recursive grammar, before turning to the claim that recursion is a property of all human languages. Finally, we discuss other views on what abilities may be unique to human language.

  9. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    ...using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal...algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We...

  10. Recursive recovery of Markov transition probabilities from boundary value data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patch, Sarah Kathyrn

    1994-04-01

    In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.

  11. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    PubMed

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.

  12. What's special about human language? The contents of the "narrow language faculty" revisited

    PubMed Central

    Traxler, Matthew J.; Boudewyn, Megan; Loudermilk, Jessica

    2012-01-01

    In this review we re-evaluate the recursion-only hypothesis, advocated by Fitch, Hauser and Chomsky (Hauser, Chomsky & Fitch, 2002; Fitch, Hauser & Chomsky, 2005). According to the recursion-only hypothesis, the property that distinguishes human language from animal communication systems is recursion, which refers to the potentially infinite embedding of one linguistic representation within another of the same type. This hypothesis predicts (1) that non-human primates and other animals lack the ability to learn recursive grammar, and (2) that recursive grammar is the sole cognitive mechanism that is unique to human language. We first review animal studies of recursive grammar, before turning to the claim that recursion is a property of all human languages. Finally, we discuss other views on what abilities may be unique to human language. PMID:23105948

  13. A Survey on Teaching and Learning Recursive Programming

    ERIC Educational Resources Information Center

    Rinderknecht, Christian

    2014-01-01

    We survey the literature about the teaching and learning of recursive programming. After a short history of the advent of recursion in programming languages and its adoption by programmers, we present curricular approaches to recursion, including a review of textbooks and some programming methodology, as well as the functional and imperative…

  14. Practical application of cure mixture model for long-term censored survivor data from a withdrawal clinical trial of patients with major depressive disorder.

    PubMed

    Arano, Ichiro; Sugimoto, Tomoyuki; Hamasaki, Toshimitsu; Ohno, Yuko

    2010-04-23

    Survival analysis methods such as the Kaplan-Meier method, log-rank test, and Cox proportional hazards regression (Cox regression) are commonly used to analyze data from randomized withdrawal studies in patients with major depressive disorder. Unfortunately, such common methods may be inappropriate when long-term censored relapse-free times appear in the data, as the methods assume that if complete follow-up were possible for all individuals, each would eventually experience the event of interest. In this paper, to analyse data including such a long-term censored relapse-free time, we discuss a semi-parametric cure regression (Cox cure regression), which combines a logistic formulation for the probability of occurrence of an event with a Cox proportional hazards specification for the time of occurrence of the event. In specifying the treatment's effect on disease-free survival, we consider the fraction of long-term survivors and the risks associated with a relapse of the disease. In addition, we develop a tree-based method for the time to event data to identify groups of patients with differing prognoses (cure survival CART). Although analysis methods typically adapt the log-rank statistic for recursive partitioning procedures, the method applied here used a likelihood ratio (LR) test statistic from a fitting of cure survival regression assuming exponential and Weibull distributions for the latency time of relapse. The method is illustrated using data from a sertraline randomized withdrawal study in patients with major depressive disorder. We concluded that Cox cure regression reveals who may be cured and how the treatment and other factors affect the cure incidence and the relapse time of uncured patients, and that cure survival CART output provides easily understandable and interpretable information, useful both in identifying groups of patients with differing prognoses and in utilizing Cox cure regression models leading to meaningful interpretations.

  15. Recursion in aphasia.

    PubMed

    Bánréti, Zoltán

    2010-11-01

    This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about them. The responses are supposed to involve formal structural recursion, but they contain semantic-pragmatic operations instead, with 'theory of mind' type embeddings. Aphasic individuals tend to exploit the parallel between 'theory of mind' embeddings and syntactic-structural embeddings in order to avoid formal structural recursion. Formal structural recursion may be more impaired in Broca's aphasia and semantic recursivity may remain selectively unimpaired in this type of aphasia.

  16. Parsimonious extreme learning machine using recursive orthogonal least squares.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multiinput-multioutput single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and is derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
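
    A minimal sketch of the baseline ELM that the CP-/DP-ELM variants prune: hidden-layer weights and biases are drawn at random and only the output weights are solved in a single least-squares step. The recursive orthogonalization and node-selection machinery of the paper is not reproduced; layer sizes and the toy regression target are assumptions.

    ```python
    # Minimal extreme learning machine (ELM) sketch: random hidden layer, output
    # weights solved by least squares. The paper's CP-/DP-ELM pruning via
    # recursive orthogonal least squares is not implemented here.
    import numpy as np

    rng = np.random.default_rng(10)
    n, d, hidden = 400, 1, 60

    X = rng.uniform(-3, 3, size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)     # toy regression target

    W = rng.normal(size=(d, hidden))                   # random input weights (fixed)
    b = rng.normal(size=hidden)                        # random biases (fixed)
    H = np.tanh(X @ W + b)                             # hidden-layer output matrix

    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights, one LS solve

    X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
    y_hat = np.tanh(X_test @ W + b) @ beta
    err = np.max(np.abs(y_hat - np.sin(X_test[:, 0])))
    print("max abs error vs sin(x):", round(float(err), 3))
    ```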

  17. Teaching and Learning Recursive Programming: A Review of the Research Literature

    ERIC Educational Resources Information Center

    McCauley, Renée; Grissom, Scott; Fitzgerald, Sue; Murphy, Laurie

    2015-01-01

    Hundreds of articles have been published on the topics of teaching and learning recursion, yet fewer than 50 of them have published research results. This article surveys the computing education research literature and presents findings on challenges students encounter in learning recursion, mental models students develop as they learn recursion,…

  18. How Learning Logic Programming Affects Recursion Comprehension

    ERIC Educational Resources Information Center

    Haberman, Bruria

    2004-01-01

    Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural…

  19. Recursive Objects--An Object Oriented Presentation of Recursion

    ERIC Educational Resources Information Center

    Sher, David B.

    2004-01-01

    Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…

  20. New Instrumentation for Phase Partitioning

    NASA Technical Reports Server (NTRS)

    Harris, J. M.

    1985-01-01

    Cells and molecules can be purified by partitioning between the two immiscible liquid phases formed by aqueous solutions of poly(ethylene glycol) and dextran. Such purification can be more selective, higher yielding, and less destructive to sensitive biological materials than other available techniques. Earth's gravitational field is a hindering factor as it causes sedimentation of particles to be purified and shear-induced particle randomization. The present proposal is directed toward developing new instrumentation for performing phase partitioning both on Earth and in microgravity.

  1. How children perceive fractals: Hierarchical self-similarity and cognitive development

    PubMed Central

    Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh

    2014-01-01

    The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders' impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
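
    To make the recursive/iterative contrast concrete in code terms (my illustration, not the study's visual stimuli): a recursive rule embeds a copy inside each element and therefore creates new hierarchical levels, whereas an iterative rule only adds items within the current level.

    ```python
    # Illustration of the recursive vs. iterative rules contrasted in the study;
    # nested lists stand in for the visual hierarchies.
    def recursive_rule(depth):
        """Embed a smaller copy inside every element: each call adds a new level."""
        if depth == 0:
            return "leaf"
        return [recursive_rule(depth - 1) for _ in range(3)]

    def iterative_rule(level, n_items):
        """Insert items within the existing level: no new levels are created."""
        return level + ["leaf"] * n_items

    def depth(tree):
        return 1 + max((depth(t) for t in tree), default=0) if isinstance(tree, list) else 0

    print(depth(recursive_rule(3)))              # 3 levels of embedding
    print(depth(iterative_rule(["leaf"], 5)))    # still 1 level, just more items
    ```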

  2. Linear-algebraic bath transformation for simulating complex open quantum systems

    DOE PAGES

    Huh, Joonsuk; Mostame, Sarah; Fujita, Takatoshi; ...

    2014-12-02

    In studying open quantum systems, the environment is often approximated as a collection of non-interacting harmonic oscillators, a configuration also known as the star-bath model. It is also well known that the star-bath can be transformed into a nearest-neighbor interacting chain of oscillators. The chain-bath model has been widely used in renormalization group approaches. The transformation can be obtained by recursion relations or orthogonal polynomials. Based on a simple linear algebraic approach, we propose a bath partition strategy to reduce the system-bath coupling strength. As a result, the non-interacting star-bath is transformed into a set of weakly coupled multiple parallel chains. Furthermore, the transformed bath model allows complex problems to be practically implemented on quantum simulators, and it can also be employed in various numerical simulations of open quantum dynamics.
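
    A small numpy sketch of the standard star-to-chain mapping mentioned above: Lanczos tridiagonalization of the star-bath frequency matrix, seeded by the normalized system-bath coupling vector, turns independent modes coupled to the system into a nearest-neighbor chain with the same spectrum. The bath frequencies and couplings are random placeholders, and the paper's partition into multiple weakly coupled parallel chains is not implemented.

    ```python
    # Minimal sketch: star-to-chain bath transformation by Lanczos
    # tridiagonalization, seeded with the normalized system-bath coupling vector.
    # Bath parameters are random placeholders.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 8
    omega = np.sort(rng.uniform(0.5, 2.0, n))     # star-bath mode frequencies
    g = rng.uniform(0.1, 0.5, n)                  # system-mode couplings

    H_bath = np.diag(omega)                       # non-interacting (star) bath
    q = [g / np.linalg.norm(g)]                   # first chain mode ~ coupling vector
    alphas, betas = [], []

    for k in range(n):
        w = H_bath @ q[k]
        alpha = q[k] @ w
        w = w - alpha * q[k] - (betas[-1] * q[k - 1] if k > 0 else 0.0)
        alphas.append(alpha)
        if k < n - 1:
            beta = np.linalg.norm(w)
            betas.append(beta)
            q.append(w / beta)

    H_chain = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    # Same spectrum as the star bath, so the transformation is exact up to rounding.
    print(np.allclose(np.linalg.eigvalsh(H_chain), np.linalg.eigvalsh(H_bath)))
    ```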

  3. Model evaluation of the phytoextraction potential of heavy metal hyperaccumulators and non-hyperaccumulators.

    PubMed

    Liang, Hong-Ming; Lin, Ting-Hsiang; Chiou, Jeng-Min; Yeh, Kuo-Chen

    2009-06-01

    Evaluation of the remediation ability of zinc/cadmium in hyper- and non-hyperaccumulator plant species through greenhouse studies is limited. To bridge the gap between greenhouse studies and field applications for phytoextraction, we used published data to examine the partitioning of heavy metals between plants and soil (defined as the bioconcentration factor). We compared the remediation ability of the Zn/Cd hyperaccumulators Thlaspi caerulescens and Arabidopsis halleri and the non-hyperaccumulators Nicotiana tabacum and Brassica juncea using a hierarchical linear model (HLM). A recursive algorithm was then used to evaluate how many harvest cycles were required to clean a contaminated site to meet Taiwan Environmental Protection Agency regulations. Despite the high bioconcentration factor of both hyperaccumulators, metal removal was still limited because of the plants' small biomass. Simulation with N. tabacum and the Cadmium model suggests further study and development of plants with high biomass and improved phytoextraction potential for use in environmental cleanup.
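
    The harvest-cycle recursion itself is simple: each cycle removes roughly (bioconcentration factor x harvested biomass x current soil concentration) worth of metal from the rooted soil mass, and the number of cycles needed to reach the regulatory limit follows by iterating that mass balance. The sketch below uses entirely invented parameter values to show the structure of the calculation, not the paper's HLM-fitted estimates.

    ```python
    # Toy sketch of the harvest-cycle recursion: per cycle the crop removes
    # (bioconcentration factor x biomass) worth of metal from the rooted soil.
    # All numbers are invented placeholders, not values from the study.
    def cycles_to_clean(c_soil, c_limit, bcf, biomass_kg_per_m2, soil_kg_per_m2,
                        max_cycles=500):
        cycles = 0
        while c_soil > c_limit and cycles < max_cycles:
            c_plant = bcf * c_soil                    # mg metal per kg dry biomass
            removed = c_plant * biomass_kg_per_m2     # mg removed per m^2 this cycle
            c_soil -= removed / soil_kg_per_m2        # new soil concentration (mg/kg)
            cycles += 1
        return cycles

    # Hyperaccumulator-like (high BCF, low biomass) vs. high-biomass crop (low BCF).
    print(cycles_to_clean(c_soil=20.0, c_limit=5.0, bcf=10.0,
                          biomass_kg_per_m2=0.5, soil_kg_per_m2=300.0))
    print(cycles_to_clean(c_soil=20.0, c_limit=5.0, bcf=1.0,
                          biomass_kg_per_m2=3.0, soil_kg_per_m2=300.0))
    ```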

  4. Entanglement distillation protocols and number theory

    NASA Astrophysics Data System (ADS)

    Bombin, H.; Martin-Delgado, M. A.

    2005-09-01

    We show that the analysis of entanglement distillation protocols for qudits of arbitrary dimension D benefits from applying basic concepts from number theory, since the set Z_D^n associated with Bell diagonal states is a module rather than a vector space. We find that a partition of Z_D^n into divisor classes characterizes the invariant properties of mixed Bell diagonal states under local permutations. We construct a very general class of recursion protocols by means of unitary operations implementing these local permutations. We study these distillation protocols depending on whether we use twirling operations in the intermediate steps or not, and we study them both analytically and numerically with Monte Carlo methods. In the absence of twirling operations, we construct extensions of the quantum privacy algorithms valid for secure communications with qudits of any dimension D. When D is a prime number, we show that distillation protocols are optimal both qualitatively and quantitatively.

  5. Excess BMI in Childhood: A Modifiable Risk Factor for Type 1 Diabetes Development?

    PubMed

    Ferrara, Christine Therese; Geyer, Susan Michelle; Liu, Yuk-Fun; Evans-Molina, Carmella; Libman, Ingrid M; Besser, Rachel; Becker, Dorothy J; Rodriguez, Henry; Moran, Antoinette; Gitelman, Stephen E; Redondo, Maria J

    2017-05-01

    We aimed to determine the effect of elevated BMI over time on the progression to type 1 diabetes in youth. We studied 1,117 children in the TrialNet Pathway to Prevention cohort (autoantibody-positive relatives of patients with type 1 diabetes). Longitudinally accumulated BMI above the 85th age- and sex-adjusted percentile generated a cumulative excess BMI (ceBMI) index. Recursive partitioning and multivariate analyses yielded sex- and age-specific ceBMI thresholds for greatest type 1 diabetes risk. Higher ceBMI conferred significantly greater risk of progressing to type 1 diabetes. The increased diabetes risk occurred at lower ceBMI values in children <12 years of age compared with older subjects and in females versus males. Elevated BMI is associated with increased risk of diabetes progression in pediatric autoantibody-positive relatives, but the effect varies by sex and age. © 2017 by the American Diabetes Association.

  6. Excess BMI in Childhood: A Modifiable Risk Factor for Type 1 Diabetes Development?

    PubMed Central

    Liu, Yuk-Fun; Evans-Molina, Carmella; Libman, Ingrid M.; Besser, Rachel; Becker, Dorothy J.; Rodriguez, Henry; Moran, Antoinette; Gitelman, Stephen E.; Redondo, Maria J.

    2017-01-01

    OBJECTIVE We aimed to determine the effect of elevated BMI over time on the progression to type 1 diabetes in youth. RESEARCH DESIGN AND METHODS We studied 1,117 children in the TrialNet Pathway to Prevention cohort (autoantibody-positive relatives of patients with type 1 diabetes). Longitudinally accumulated BMI above the 85th age- and sex-adjusted percentile generated a cumulative excess BMI (ceBMI) index. Recursive partitioning and multivariate analyses yielded sex- and age-specific ceBMI thresholds for greatest type 1 diabetes risk. RESULTS Higher ceBMI conferred significantly greater risk of progressing to type 1 diabetes. The increased diabetes risk occurred at lower ceBMI values in children <12 years of age compared with older subjects and in females versus males. CONCLUSIONS Elevated BMI is associated with increased risk of diabetes progression in pediatric autoantibody-positive relatives, but the effect varies by sex and age. PMID:28202550

  7. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    NASA Astrophysics Data System (ADS)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful to facilitate an ophthalmologist in the ROP screening and to prevent childhood blindness. This paper proposes a method to automatically classify the image into tortuous and non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using segments characterized by the highest tortuosity. For an optimal set of training parameters the prediction is as high as 100%.

  8. Structural Group-based Auditing of Missing Hierarchical Relationships in UMLS

    PubMed Central

    Chen, Yan; Gu, Huanying(Helen); Perl, Yehoshua; Geller, James

    2009-01-01

    The Metathesaurus of the UMLS was created by integrating various source terminologies. The inter-concept relationships were either integrated into the UMLS from the source terminologies or specially generated. Due to the extensive size and inherent complexity of the Metathesaurus, the accidental omission of some hierarchical relationships was inevitable. We present a recursive procedure which allows a human expert, with the support of an algorithm, to locate missing hierarchical relationships. The procedure starts with a group of concepts with exactly the same (correct) semantic type assignments. It then partitions the concepts, based on child-of hierarchical relationships, into smaller, singly rooted, hierarchically connected subgroups. The auditor only needs to focus on the subgroups with very few concepts and their concepts with semantic type reassignments. The procedure was evaluated by comparing it with a comprehensive manual audit and it exhibits a perfect error recall. PMID:18824248

  9. MreB is important for cell shape but not for chromosome segregation of the filamentous cyanobacterium Anabaena sp. PCC 7120.

    PubMed

    Hu, Bin; Yang, Guohua; Zhao, Weixing; Zhang, Yingjiao; Zhao, Jindong

    2007-03-01

    MreB is a bacterial actin that plays important roles in determination of cell shape and chromosome partitioning in Escherichia coli and Caulobacter crescentus. In this study, the mreB from the filamentous cyanobacterium Anabaena sp. PCC 7120 was inactivated. Although the mreB null mutant showed a drastic change in cell shape, its growth rate, cell division and the filament length were unaltered. Thus, MreB in Anabaena maintains cell shape but is not required for chromosome partitioning. The wild type and the mutant had eight and 10 copies of chromosomes per cell respectively. We demonstrated that DNA content in two daughter cells after cell division in both strains was not always identical. The ratios of DNA content in two daughter cells had a Gaussian distribution with a standard deviation much larger than a value expected if the DNA content in two daughter cells were identical, suggesting that chromosome partitioning is a random process. The multiple copies of chromosomes in cyanobacteria are likely required for chromosome random partitioning in cell division.

  10. Thermodynamic limit of random partitions and dispersionless Toda hierarchy

    NASA Astrophysics Data System (ADS)

    Takasaki, Kanehisa; Nakatsu, Toshio

    2012-01-01

    We study the thermodynamic limit of random partition models for the instanton sum of 4D and 5D supersymmetric U(1) gauge theories deformed by some physical observables. The physical observables correspond to external potentials in the statistical model. The partition function is reformulated in terms of the density function of Maya diagrams. The thermodynamic limit is governed by a limit shape of Young diagrams associated with dominant terms in the partition function. The limit shape is characterized by a variational problem, which is further converted to a scalar-valued Riemann-Hilbert problem. This Riemann-Hilbert problem is solved with the aid of a complex curve, which may be thought of as the Seiberg-Witten curve of the deformed U(1) gauge theory. This solution of the Riemann-Hilbert problem is identified with a special solution of the dispersionless Toda hierarchy that satisfies a pair of generalized string equations. The generalized string equations for the 5D gauge theory are shown to be related to hidden symmetries of the statistical model. The prepotential and the Seiberg-Witten differential are also considered.

  11. Combinatorial approximation algorithms for MAXCUT using random walks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Kale, Satyen

    We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
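
    The weak local partitioning component can be illustrated by a generic random-walk sweep cut: run short random walks from a seed vertex, rank vertices by degree-normalized visit counts, and return the prefix set of lowest conductance. The sketch below is a standard illustration of this idea on a toy graph, not the authors' algorithm or its guarantees; the walk length, number of walks, and example graph are arbitrary.

```python
import random
from collections import defaultdict

def sweep_cut(adj, seed_vertex, walk_len=20, n_walks=500, seed=0):
    """Rank vertices by degree-normalized visit counts of short random walks
    from `seed_vertex`, then sweep prefixes and return the lowest-conductance cut."""
    rng = random.Random(seed)
    visits = defaultdict(int)
    for _ in range(n_walks):
        v = seed_vertex
        for _ in range(walk_len):
            v = rng.choice(adj[v])
            visits[v] += 1
    order = sorted(visits, key=lambda v: visits[v] / len(adj[v]), reverse=True)
    total_vol = sum(len(adj[v]) for v in adj)
    best_set, best_phi = None, float("inf")
    prefix, vol, boundary = set(), 0, 0
    for v in order[:-1]:                       # never sweep the full vertex set
        prefix.add(v)
        vol += len(adj[v])
        # Incremental boundary update: new outside edges +1, absorbed edges -1.
        boundary += sum(1 if u not in prefix else -1 for u in adj[v])
        phi = boundary / min(vol, total_vol - vol)
        if phi < best_phi:
            best_phi, best_set = phi, set(prefix)
    return best_set, best_phi

# Two triangles joined by a single edge: the sparse cut separates them.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(sweep_cut(adj, seed_vertex=0))           # expected cut {0, 1, 2}, conductance 1/7
```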

  12. Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.

    PubMed

    Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X

    2018-01-05

    Metabolomics holds the promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown if deep neural network, a class of increasingly popular machine learning methods, is suitable to classify metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 positive estrogen receptor (ER+), and 67 negative estrogen receptor (ER-) to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). DL framework has the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, the biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value <0.05) that cannot be discovered by other machine learning methods. Among them, protein digestion and absorption and ATP-binding cassette (ABC) transporters pathways are also confirmed in integrated analysis between metabolomics and gene expression data in these samples. In summary, deep learning method shows advantages for metabolomics based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward networks based deep learning method in the metabolomics research community for classification.
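
    This kind of head-to-head comparison amounts to scoring each classifier by cross-validated AUC on the same feature matrix. The sketch below uses scikit-learn with synthetic data as a stand-in for the metabolomics matrix and ER labels; the model list only loosely mirrors the abstract (a decision tree stands in for RPART, an MLP for the DL framework), and none of the hyperparameters reproduce the study's tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a 271-sample metabolomics matrix with ER+/ER- labels.
X, y = make_classification(n_samples=271, n_features=100, n_informative=15,
                           weights=[0.75, 0.25], random_state=0)

models = {
    "RF":   RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM":  SVC(kernel="rbf", probability=True, random_state=0),
    "Tree": DecisionTreeClassifier(random_state=0),   # stands in for RPART
    "LDA":  LinearDiscriminantAnalysis(),
    "GBM":  GradientBoostingClassifier(random_state=0),
    "DL":   MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:5s} mean AUC = {auc:.3f}")
```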

  13. "They're homeless in a home": Retaining homeless-experienced consumers in supported housing.

    PubMed

    Gabrielian, Sonya; Hamilton, Alison B; Alexandrino, Adrian; Hellemann, Gerhard; Young, Alexander S

    2017-05-01

    Permanent, community-based housing with supportive services ("supported housing") has numerous favorable outcomes for homeless-experienced consumers. Little is known, however, about consumers who attain but subsequently lose their supported housing. Using mixed methods, we compared persons who retained their supported housing for at least 1 year ("stayers") with those who lost their supported housing within 1 year of move-in ("exiters"). Among persons housed through the VA Supported Housing (VASH) program at the VA Greater Los Angeles between 2011 and 2012, we queried VA homeless registry data to identify stayers (n = 1,558) and exiters (n = 85). We reviewed the medical records of 85 randomly selected stayers and all 85 exiters to compare demographics, homelessness chronicity, era of service, income, presence or absence of a serious mental illness, and health service utilization. From this subsample, we purposively selected 20 stayers and 20 exiters for semistructured, qualitative interviews, and more detailed medical record review. We also performed qualitative interviews and focus groups with VASH staff/leadership (n = 15). Recursive partitioning identified quantitative variables that best-differentiated stayers from exiters. Thematic analyses were performed on qualitative data. Interrelated factors were associated with exiting supported housing: chronic homelessness; low intrinsic motivation; unmet needs for mental health care, substance abuse treatment, and independent living skills; poor primary care engagement; frequent emergency department use; and recent mental health hospitalizations. These findings suggest the value of clinical interventions that address these factors-for example, motivational interviewing or social skills training-adapted to the setting and context of supported housing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. The islands are different: human perceptions of game species in Hawaii.

    PubMed

    Lohr, Cheryl A; Lepczyk, Christopher A; Johnson, Edwin D

    2014-10-01

    Hawaii's game animals are all non-native species, which provokes human-wildlife conflict among stakeholders. The management of human-wildlife conflict in Hawaii is further complicated by the discrete nature of island communities. Our goal was to understand the desires and perceived values or impacts of game held by residents of Hawaii regarding six game species [pigs (Sus scrofa), goats (Capra hircus), mouflon (Ovis musimon), axis deer (Axis axis), turkeys (Melagris gallopavo), and doves (Geopelia striata)]. We measured the desired abundance of game on the six main Hawaiian Islands using the potential for conflict index and identified explanatory variables for those desires via recursive partitioning. In 2011 we surveyed 5,407 residents (2,360 random residents and 3,047 pre-identified stakeholders). Overall 54.5 and 27.6 % of the emailed and mailed surveys were returned (n = 1,510). A non-respondent survey revealed that respondents and non-respondents had similar interest in wildlife, and a similar education level. The desired abundance of game differed significantly among stakeholders, species, and islands. The desired abundance scores were higher for axis deer, mouflon, and turkeys compared to pigs, goats or doves. Enjoyment at seeing game and the cultural value of game were widespread explanatory variables for desired abundance. Models for Lanai emphasized the economic value of game, whereas models for Maui identified the potential for game to contaminate soil and water. Models for Oahu and Kauai revealed concern for human health and safety. Given our findings we recommend managers design separate management plans for each island taking into consideration the values of residents.

  15. The Islands Are Different: Human Perceptions of Game Species in Hawaii

    NASA Astrophysics Data System (ADS)

    Lohr, Cheryl A.; Lepczyk, Christopher A.; Johnson, Edwin D.

    2014-10-01

    Hawaii's game animals are all non-native species, which provokes human-wildlife conflict among stakeholders. The management of human-wildlife conflict in Hawaii is further complicated by the discrete nature of island communities. Our goal was to understand the desires and perceived values or impacts of game held by residents of Hawaii regarding six game species [pigs (Sus scrofa), goats (Capra hircus), mouflon (Ovis musimon), axis deer (Axis axis), turkeys (Melagris gallopavo), and doves (Geopelia striata)]. We measured the desired abundance of game on the six main Hawaiian Islands using the potential for conflict index and identified explanatory variables for those desires via recursive partitioning. In 2011 we surveyed 5,407 residents (2,360 random residents and 3,047 pre-identified stakeholders). Overall 54.5 and 27.6 % of the emailed and mailed surveys were returned (n = 1,510). A non-respondent survey revealed that respondents and non-respondents had similar interest in wildlife, and a similar education level. The desired abundance of game differed significantly among stakeholders, species, and islands. The desired abundance scores were higher for axis deer, mouflon, and turkeys compared to pigs, goats or doves. Enjoyment at seeing game and the cultural value of game were widespread explanatory variables for desired abundance. Models for Lanai emphasized the economic value of game, whereas models for Maui identified the potential for game to contaminate soil and water. Models for Oahu and Kauai revealed concern for human health and safety. Given our findings we recommend managers design separate management plans for each island taking into consideration the values of residents.

  16. The phase diagrams of the ± K model on the Bethe lattice

    NASA Astrophysics Data System (ADS)

    Albayrak, Erhan

    2015-07-01

    The biquadratic exchange interaction is randomized in a bimodal form with probabilities (p) and (1 - p) for the cases with K > 0 (attractive case) and K < 0 (repulsive case), respectively, and its effects on the phase diagrams of the spin-1 Blume-Emery-Griffiths model are studied on the Bethe lattice by using the recursion relations. It was found that the critical behaviors of the model change drastically.

  17. Concatenated shift registers generating maximally spaced phase shifts of PN-sequences

    NASA Technical Reports Server (NTRS)

    Hurd, W. J.; Welch, L. R.

    1977-01-01

    A large class of linearly concatenated shift registers is shown to generate approximately maximally spaced phase shifts of pn-sequences, for use in pseudorandom number generation. A constructive method is presented for finding members of this class, for almost all degrees for which primitive trinomials exist. The sequences which result are not normally characterized by trinomial recursions, which is desirable since trinomial sequences can have some undesirable randomness properties.
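
    For background, a single Fibonacci linear-feedback shift register driven by a primitive degree-7 trinomial already produces a maximal-length PN-sequence of period 2^7 - 1 = 127; the concatenated construction of the report, which spaces out the phase shifts, is not reproduced here. A minimal sketch (tap positions are an illustrative choice and correspond to x^7 + x + 1 up to the usual reciprocal-polynomial convention):

```python
def lfsr_pn_sequence(taps=(7, 1), state=0b0000001, n_bits=None):
    """Fibonacci LFSR over GF(2). `taps` lists the tapped bit positions; taps
    (7, 1) implement a primitive degree-7 trinomial recursion, so the output
    is a maximal-length PN-sequence of period 2^7 - 1 = 127."""
    degree = max(taps)
    n_bits = n_bits if n_bits is not None else (1 << degree) - 1
    out = []
    for _ in range(n_bits):
        out.append(state & 1)                 # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1      # XOR of the tapped bits
        state = (state >> 1) | (fb << (degree - 1))
    return out

seq = lfsr_pn_sequence()
print(len(seq), sum(seq))   # one full period: 127 bits, 64 ones and 63 zeros
```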

  18. Gaussian Random Fields Methods for Fork-Join Network with Synchronization Constraints

    DTIC Science & Technology

    2014-12-22

    substantial efforts were dedicated to the study of the max-plus recursions [21, 3, 12]. More recently, Atar et al. [2] have studied a fork-join...feedback and NES, Atar et al. [2] show that a dynamic priority discipline achieves throughput optimality asymptotically in the conventional heavy... (2011) Patient flow in hospitals: a data-based queueing-science perspective. Submitted to Stochastic Systems, 20. [2] R. Atar, A. Mandelbaum and A

  19. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models.

    PubMed

    Elben, A; Vermersch, B; Dalmonte, M; Cirac, J I; Zoller, P

    2018-02-02

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.
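
    For reference, the quantity such a protocol estimates (for order n = 2) is the second Rényi entropy S_2 = -log Tr(ρ_A²) of the reduced state of the partition A. The sketch below computes it exactly for a random pure state of a small spin chain by partial tracing; it shows only the target quantity, not the randomized-measurement protocol itself, and the chain size and partition are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_a = 6, 3                      # 6 spins, subsystem A = first 3 spins
dim_a, dim_b = 2 ** n_a, 2 ** (n - n_a)

# Random (Haar-like) pure state of the full chain.
psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)

# Reduced density matrix of A via reshaping and contracting over B.
psi_ab = psi.reshape(dim_a, dim_b)
rho_a = psi_ab @ psi_ab.conj().T

s2 = -np.log(np.real(np.trace(rho_a @ rho_a)))
print("second Renyi entropy of A:", s2)
```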

  20. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models

    NASA Astrophysics Data System (ADS)

    Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J. I.; Zoller, P.

    2018-02-01

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.

  1. Cell Partition in Two Polymer Aqueous Phases

    NASA Technical Reports Server (NTRS)

    Harris, J. M.

    1985-01-01

    Partition of biological cells in two phase aqueous polymer systems is recognized as a powerful separation technique which is limited by gravity. The synthesis of new, selective polymer ligand conjugates to be used in affinity partition separations is of interest. The two most commonly used polymers in two phase partitioning are dextran and polyethylene glycol. A thorough review of the chemistry of these polymers was begun, particularly in the area of protein attachment. Preliminary studies indicate the importance in affinity partitioning of minimizing gravity induced randomizing forces in the phase separation process. The PEG-protein conjugates that were prepared appear to be ideally suited for achieving high quality purifications in a microgravity environment. An interesting spin-off of this synthetic work was the observation of catalytic activity for certain of our polymer derivatives.

  2. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    NASA Astrophysics Data System (ADS)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.
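
    A minimal version of the Kalman-filter branch of such a comparison fuses the integrated gyroscope rate with the accelerometer-derived angle while tracking the gyroscope bias as a second state. The sketch below runs a generic two-state filter on simulated signals; the noise levels and covariance tuning are illustrative assumptions, not the paper's filters.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
true_angle = 30 * np.sin(0.5 * t)                      # degrees
true_rate = np.gradient(true_angle, dt)

rng = np.random.default_rng(2)
gyro = true_rate + 2.0 + rng.normal(0, 0.5, t.size)    # rate with constant bias + noise
accel_angle = true_angle + rng.normal(0, 3.0, t.size)  # noisy gravity-derived angle

# State x = [angle, gyro_bias]; measurement z = accelerometer angle.
x = np.zeros(2)
P = np.eye(2)
F = np.array([[1.0, -dt], [0.0, 1.0]])
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])
Q = np.diag([0.001, 0.0001])    # process noise (tuning assumed)
R = np.array([[9.0]])           # accelerometer angle variance (assumed)

est = []
for k in range(t.size):
    # Predict: integrate the gyro rate, propagate covariance.
    x = F @ x + B * gyro[k]
    P = F @ P @ F.T + Q
    # Update with the accelerometer angle.
    y = accel_angle[k] - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

print("RMS angle error (deg):", np.sqrt(np.mean((np.array(est) - true_angle) ** 2)))
```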

  4. A combinatorial model for the Macdonald polynomials.

    PubMed

    Haglund, J

    2004-11-16

    We introduce a polynomial C(mu)[Z; q, t], depending on a set of variables Z = z(1), z(2),..., a partition mu, and two extra parameters q, t. The definition of C(mu) involves a pair of statistics (maj(sigma, mu), inv(sigma, mu)) on words sigma of positive integers, and the coefficients of the z(i) are manifestly in N[q,t]. We conjecture that C(mu)[Z; q, t] is none other than the modified Macdonald polynomial H(mu)[Z; q, t]. We further introduce a general family of polynomials F(T)[Z; q, S], where T is an arbitrary set of squares in the first quadrant of the xy plane, and S is an arbitrary subset of T. The coefficients of the F(T)[Z; q, S] are in N[q], and C(mu)[Z; q, t] is a sum of certain F(T)[Z; q, S] times nonnegative powers of t. We prove F(T)[Z; q, S] is symmetric in the z(i) and satisfies other properties consistent with the conjecture. We also show how the coefficient of a monomial in F(T)[Z; q, S] can be expressed recursively. Maple calculations indicate the F(T)[Z; q, S] are Schur-positive, and we present a combinatorial conjecture for their Schur coefficients when the set T is a partition with at most three columns.

  5. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10^3 speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.
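
    The forward half of the greedy procedure is, in essence, order-recursive matching pursuit: repeatedly add the column most correlated with the current residual and refit least squares on the selected set. A minimal numpy sketch of that forward pass only, without the partitioned-matrix-inverse speed-ups or the backward elimination described above:

```python
import numpy as np

def greedy_sparse_ls(X, y, k):
    """Forward greedy selection of k columns for least squares (ORMP-style)."""
    selected, residual, coef = [], y.copy(), None
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf
        j = int(np.argmax(scores))
        selected.append(j)
        # Refit least squares on the selected columns.
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(0, 0.1, 200)
print(greedy_sparse_ls(X, y, k=3)[0])     # expected to recover columns 3, 17, 42
```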

  6. Diagnostic performance and safety of a three-dimensional 14-core systematic biopsy method.

    PubMed

    Takeshita, Hideki; Kawakami, Satoru; Numao, Noboru; Sakura, Mizuaki; Tatokoro, Manabu; Yamamoto, Shinya; Kijima, Toshiki; Komai, Yoshinobu; Saito, Kazutaka; Koga, Fumitaka; Fujii, Yasuhisa; Fukui, Iwao; Kihara, Kazunori

    2015-03-01

    To investigate the diagnostic performance and safety of a three-dimensional 14-core biopsy (3D14PBx) method, which is a combination of the transrectal six-core and transperineal eight-core biopsy methods. Between December 2005 and August 2010, 1103 men underwent 3D14PBx at our institutions and were analysed prospectively. Biopsy criteria included a PSA level of 2.5-20 ng/mL or abnormal digital rectal examination (DRE) findings, or both. The primary endpoint of the study was diagnostic performance and the secondary endpoint was safety. We applied recursive partitioning to the entire study cohort to delineate the unique contribution of each sampling site to overall and clinically significant cancer detection. Prostate cancer was detected in 503 of the 1103 patients (45.6%). Age, family history of prostate cancer, DRE, PSA, percentage of free PSA and prostate volume were associated with the positive biopsy results significantly and independently. Of the 503 cancers detected, 39 (7.8%) were clinically locally advanced (≥cT3a), 348 (69%) had a biopsy Gleason score (GS) of ≥7, and 463 (92%) met the definition of biopsy-based significant cancer. Recursive partitioning analysis showed that each sampling site contributed uniquely to both the overall and the biopsy-based significant cancer detection rate of the 3D14PBx method. The overall cancer-positive rate of each sampling site ranged from 14.5% in the transrectal far lateral base to 22.8% in the transrectal far lateral apex. As of August 2010, 210 patients (42%) had undergone radical prostatectomy, of whom 55 (26%) were found to have pathologically non-organ-confined disease, 174 (83%) had prostatectomy GS ≥7 and 185 (88%) met the definition of prostatectomy-based significant cancer. This is the first prospective analysis of the diagnostic performance of an extended biopsy method, which is a simplified version of the somewhat redundant super-extended three-dimensional 26-core biopsy. As expected, each sampling site uniquely contributed not only to overall cancer detection, but also to significant cancer detection. 3D14PBx is a feasible systematic biopsy method in men with PSA <20 ng/mL. © 2014 The Authors. BJU International © 2014 BJU International.

  7. Recursive Subsystems in Aphasia and Alzheimer's Disease: Case Studies in Syntax and Theory of Mind.

    PubMed

    Bánréti, Zoltán; Hoffmann, Ildikó; Vincze, Veronika

    2016-01-01

    The relationship between recursive sentence embedding and theory-of-mind (ToM) inference is investigated in three persons with Broca's aphasia, two persons with Wernicke's aphasia, and six persons with mild and moderate Alzheimer's disease (AD). We asked questions of four types about photographs of various real-life situations. Type 4 questions asked participants about intentions, thoughts, or utterances of the characters in the pictures ("What may X be thinking/asking Y to do?"). The expected answers typically involved subordinate clauses introduced by conjunctions or direct quotations of the characters' utterances. Broca's aphasics did not produce answers with recursive sentence embedding. Rather, they projected themselves into the characters' mental states and gave direct answers in the first person singular, with relevant ToM content. We call such replies "situative statements." Where the question concerned the mental state of the character but did not require an answer with sentence embedding ("What does X hate?"), aphasics gave descriptive answers rather than situative statements. Most replies given by persons with AD to Type 4 questions were grammatical instances of recursive sentence embedding. They also gave a few situative statements but the ToM content of these was irrelevant. In more than one third of their well-formed sentence embeddings, too, they conveyed irrelevant ToM contents. Persons with moderate AD were unable to pass secondary false belief tests. The results reveal double dissociation: Broca's aphasics are unable to access recursive sentence embedding but they can make appropriate ToM inferences; moderate AD persons make the wrong ToM inferences but they are able to access recursive sentence embedding. The double dissociation may be relevant for the nature of the relationship between the two recursive capacities. Broca's aphasics compensated for the lack of recursive sentence embedding by recursive ToM reasoning represented in very simple syntactic forms: they used one recursive subsystem to stand in for another recursive subsystem.

  8. Recursive Subsystems in Aphasia and Alzheimer's Disease: Case Studies in Syntax and Theory of Mind

    PubMed Central

    Bánréti, Zoltán; Hoffmann, Ildikó; Vincze, Veronika

    2016-01-01

    The relationship between recursive sentence embedding and theory-of-mind (ToM) inference is investigated in three persons with Broca's aphasia, two persons with Wernicke's aphasia, and six persons with mild and moderate Alzheimer's disease (AD). We asked questions of four types about photographs of various real-life situations. Type 4 questions asked participants about intentions, thoughts, or utterances of the characters in the pictures (“What may X be thinking/asking Y to do?”). The expected answers typically involved subordinate clauses introduced by conjunctions or direct quotations of the characters' utterances. Broca's aphasics did not produce answers with recursive sentence embedding. Rather, they projected themselves into the characters' mental states and gave direct answers in the first person singular, with relevant ToM content. We call such replies “situative statements.” Where the question concerned the mental state of the character but did not require an answer with sentence embedding (“What does X hate?”), aphasics gave descriptive answers rather than situative statements. Most replies given by persons with AD to Type 4 questions were grammatical instances of recursive sentence embedding. They also gave a few situative statements but the ToM content of these was irrelevant. In more than one third of their well-formed sentence embeddings, too, they conveyed irrelevant ToM contents. Persons with moderate AD were unable to pass secondary false belief tests. The results reveal double dissociation: Broca's aphasics are unable to access recursive sentence embedding but they can make appropriate ToM inferences; moderate AD persons make the wrong ToM inferences but they are able to access recursive sentence embedding. The double dissociation may be relevant for the nature of the relationship between the two recursive capacities. Broca's aphasics compensated for the lack of recursive sentence embedding by recursive ToM reasoning represented in very simple syntactic forms: they used one recursive subsystem to stand in for another recursive subsystem. PMID:27064887

  9. A Bibliography on Non-Gaussian Signal Processing: 1971-1980.

    DTIC Science & Technology

    1980-08-20

    Narrowband Acoustic Signals in Noise," DDC Report AD-A069 829, May 1979. 9. Evans, James, Kersten, Paul, Kurz, Ludwik, "Robustized Recursive Esti...Kazakos, D., and Papantoni-Kazakos, P., "Nonparametric Methods in Communication Systems," Marcel Dekker, Inc., New York, 1977. 17. Kersten, P., Kurz...ARRAY PROCESSING 1. Adams, S.L.; Doubek, J.W., "Frequency Coherence and Time Coherence in Random Multipath Channels", DDC Report, PD-A047 456, August

  10. Synthesis of Polyferrocenylsilane Block Copolymers and their Crystallization-Driven Self-Assembly in Protic Solvents

    NASA Astrophysics Data System (ADS)

    Zhou, Hang

    Quantum walks are the quantum mechanical analogue of classical random walks. Discrete-time quantum walks have been introduced and studied mostly on the line Z or the higher-dimensional space Z^d but rarely defined on graphs with fractal dimensions because the coin operator depends on the position and the Fourier transform on the fractals is not defined. Inspired by the nature of classical walks, different quantum walks will be defined by choosing different shift and coin operators. When the coin operator is uniform, the results of classical walks will be obtained upon measurement at each step. Moreover, with measurement at each step, our results reveal more information about the classical random walks. In this dissertation, two graphs with fractal dimensions will be considered. The first one is the Sierpinski gasket, a degree-4 regular graph with Hausdorff dimension d_f = ln 3 / ln 2. The second is the Cantor graph derived like the Cantor set, with Hausdorff dimension d_f = ln 2 / ln 3. The definitions and amplitude functions of the quantum walks will be introduced. The main part of this dissertation is to derive a recursive formula to compute the amplitude Green function. The exiting probability will be computed and compared with the classical results. When the generation of graphs goes to infinity, the recursion of the walks will be investigated and the convergence rates will be obtained and compared with the classical counterparts.
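
    For orientation, the basic ingredients (coin operator, shift operator, amplitude evolution and measurement) are easiest to see for the discrete-time Hadamard walk on the line, whose position spread grows linearly in the number of steps rather than diffusively. The sketch below implements that textbook case only, not the Sierpinski-gasket or Cantor-graph walks studied in the dissertation; the step count and initial coin state are arbitrary.

```python
import numpy as np

steps = 100
n_pos = 2 * steps + 1                          # positions -steps..steps
# Amplitudes indexed by (position, coin); coin 0 = left-mover, coin 1 = right-mover.
amp = np.zeros((n_pos, 2), dtype=complex)
amp[steps, 0] = 1 / np.sqrt(2)                 # symmetric initial coin state
amp[steps, 1] = 1j / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin operator

for _ in range(steps):
    amp = amp @ H.T                            # apply the coin on every site
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]               # coin 0 moves one site left
    shifted[1:, 1] = amp[:-1, 1]               # coin 1 moves one site right
    amp = shifted

prob = (np.abs(amp) ** 2).sum(axis=1)
positions = np.arange(-steps, steps + 1)
print("total probability:", prob.sum())                             # ~1
print("spread of position:", np.sqrt((prob * positions ** 2).sum()))  # grows ~linearly in steps
```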

  11. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhagwat, Nikhil V.

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near optimal solutions (optimal in some cases). Besides, the execution time is quite short as compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer times for execution. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of some significance to industries wherein the cost associated with the creation of a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.
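
    The sampling step discussed here, drawing a random solution that is itself a feasible assignment of tasks to stations, can be sketched as: draw a random precedence-feasible task order and pack tasks greedily into stations under the cycle time, with the number of stations opened as the performance index. The sketch below shows only that sampling-and-scoring step for a straight line, not the U-line variant or the full Nested Partitions method; the task times, precedence pairs, and cycle time are hypothetical.

```python
import random

def random_balanced_assignment(times, precedence, cycle_time, seed=0):
    """Draw a random precedence-feasible task order and pack it greedily into
    stations; the performance index is the number of stations opened."""
    rng = random.Random(seed)
    remaining = set(times)
    indegree = {t: 0 for t in times}
    for pred, succ in precedence:
        indegree[succ] += 1
    stations = [[]]
    load = 0.0
    while remaining:
        ready = [t for t in sorted(remaining) if indegree[t] == 0]
        task = rng.choice(ready)                 # random feasible choice
        remaining.remove(task)
        for pred, succ in precedence:
            if pred == task:
                indegree[succ] -= 1
        if load + times[task] > cycle_time:      # open a new station if needed
            stations.append([])
            load = 0.0
        stations[-1].append(task)
        load += times[task]
    return stations, len(stations)

# Hypothetical task times (minutes), precedence pairs, and cycle time.
times = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6}
precedence = [("A", "C"), ("B", "C"), ("C", "D"), ("C", "E")]
samples = [random_balanced_assignment(times, precedence, cycle_time=8, seed=s)
           for s in range(50)]
print(min(samples, key=lambda s: s[1]))          # best sampled solution
```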

  12. An application of change-point recursive models to the relationship between litter size and number of stillborns in pigs.

    PubMed

    Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L

    2010-11-01

    We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not present differences between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.
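
    The central idea of locating a change point in the structural regression of NSB on LS can be shown with a much simpler frequentist sketch: fit separate least-squares lines on either side of each candidate break and keep the break that minimizes the total residual sum of squares. The simulated data, single change point, and grid of candidates below are illustrative; the paper's Bayesian recursive models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
ls = rng.integers(6, 20, size=2000)                     # litter size (simulated)
true_break = 13
slope = np.where(ls <= true_break, 0.05, 0.25)          # incidence rises after the break
nsb = slope * ls + rng.normal(0, 0.5, ls.size)          # number of stillborns (simulated)

def fit_rss(x, y):
    """Residual sum of squares of a simple least-squares line."""
    A = np.column_stack([np.ones_like(x, dtype=float), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

candidates = list(range(8, 18))
rss = [fit_rss(ls[ls <= c], nsb[ls <= c]) + fit_rss(ls[ls > c], nsb[ls > c])
       for c in candidates]
print("estimated change point:", candidates[int(np.argmin(rss))])   # expected near 13
```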

  13. Recursion to food plants by free-ranging Bornean elephant

    PubMed Central

    Gillespie, Graeme; Goossens, Benoit; Ismail, Sulaiman; Ancrenaz, Marc; Linklater, Wayne

    2015-01-01

    Plant recovery rates after herbivory are thought to be a key factor driving recursion by herbivores to sites and plants to optimise resource-use but have not been investigated as an explanation for recursion in large herbivores. We investigated the relationship between plant recovery and recursion by elephants (Elephas maximus borneensis) in the Lower Kinabatangan Wildlife Sanctuary, Sabah. We identified 182 recently eaten food plants, from 30 species, along 14 × 50 m transects and measured their recovery growth each month over nine months or until they were re-browsed by elephants. The monthly growth in leaf and branch or shoot length for each plant was used to calculate the time required (months) for each species to recover to its pre-eaten length. Elephant returned to all but two transects with 10 eaten plants, a further 26 plants died leaving 146 plants that could be re-eaten. Recursion occurred to 58% of all plants and 12 of the 30 species. Seventy-seven percent of the re-eaten plants were grasses. Recovery times to all plants varied from two to twenty months depending on the species. Recursion to all grasses coincided with plant recovery whereas recursion to most browsed plants occurred four to twelve months before they had recovered to their previous length. The small sample size of many browsed plants that received recursion and uneven plant species distribution across transects limits our ability to generalise for most browsed species but a prominent pattern in plant-scale recursion did emerge. Plant recovery time was a good predictor of time to recursion but varied as a function of growth form (grass, ginger, palm, liana and woody) and differences between sites. Time to plant recursion coincided with plant recovery time for the elephant’s preferred food, grasses, and perhaps also gingers, but not the other browsed species. Elephants are bulk feeders so it is likely that they time their returns to bulk feed on these grass species when quantities have recovered sufficiently to meet their intake requirements. The implications for habitat and elephant management are discussed. PMID:26290779

  14. Recursion to food plants by free-ranging Bornean elephant.

    PubMed

    English, Megan; Gillespie, Graeme; Goossens, Benoit; Ismail, Sulaiman; Ancrenaz, Marc; Linklater, Wayne

    2015-01-01

    Plant recovery rates after herbivory are thought to be a key factor driving recursion by herbivores to sites and plants to optimise resource-use but have not been investigated as an explanation for recursion in large herbivores. We investigated the relationship between plant recovery and recursion by elephants (Elephas maximus borneensis) in the Lower Kinabatangan Wildlife Sanctuary, Sabah. We identified 182 recently eaten food plants, from 30 species, along 14 × 50 m transects and measured their recovery growth each month over nine months or until they were re-browsed by elephants. The monthly growth in leaf and branch or shoot length for each plant was used to calculate the time required (months) for each species to recover to its pre-eaten length. Elephant returned to all but two transects with 10 eaten plants, a further 26 plants died leaving 146 plants that could be re-eaten. Recursion occurred to 58% of all plants and 12 of the 30 species. Seventy-seven percent of the re-eaten plants were grasses. Recovery times to all plants varied from two to twenty months depending on the species. Recursion to all grasses coincided with plant recovery whereas recursion to most browsed plants occurred four to twelve months before they had recovered to their previous length. The small sample size of many browsed plants that received recursion and uneven plant species distribution across transects limits our ability to generalise for most browsed species but a prominent pattern in plant-scale recursion did emerge. Plant recovery time was a good predictor of time to recursion but varied as a function of growth form (grass, ginger, palm, liana and woody) and differences between sites. Time to plant recursion coincided with plant recovery time for the elephant's preferred food, grasses, and perhaps also gingers, but not the other browsed species. Elephants are bulk feeders so it is likely that they time their returns to bulk feed on these grass species when quantities have recovered sufficiently to meet their intake requirements. The implications for habitat and elephant management are discussed.

  15. Distinctive signatures of recursion.

    PubMed

    Martins, Maurício Dias

    2012-07-19

    Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed.

  16. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
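
    In one dimension the construction reduces to a compact calculation: for y = σ(wx + b), the output entropy is H(y) = H(x) + E[log|dy/dx|], so maximizing it over (w, b) maximizes the expected log-derivative, and the fitted derivative p̂(x) = |w| σ'(wx + b) is itself a valid density estimate (a logistic density). A minimal sketch of that one-dimensional special case, not the multidimensional network of the paper; the data-generating distribution and optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.normal(loc=2.0, scale=1.5, size=2000)      # samples from an "unknown" density

def neg_mean_log_deriv(params):
    """Negative of E[log dy/dx] for y = sigmoid(w*x + b); minimizing this
    maximizes the output entropy up to the data-dependent constant H(x)."""
    w, b = params
    z = w * x + b
    s = 1.0 / (1.0 + np.exp(-z))
    return -np.mean(np.log(np.abs(w)) + np.log(s) + np.log(1.0 - s))

res = minimize(neg_mean_log_deriv, x0=[1.0, 0.0])
w, b = res.x

# The density estimate is the derivative of the fitted sigmoid:
# a logistic density with location -b/w and scale 1/|w|.
density = lambda t: np.abs(w) * np.exp(-(w * t + b)) / (1.0 + np.exp(-(w * t + b))) ** 2
print("fitted location, scale:", -b / w, 1.0 / abs(w))   # ~2.0 and ~1.5*sqrt(3)/pi ≈ 0.83
print("density at the fitted mode:", density(-b / w))
```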

  17. The Recursive Paradigm: Suppose We Already Knew.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1995-01-01

    Explains the recursive model in discrete mathematics through five examples and problems. Discusses the relationship between the recursive model, mathematical induction, and inductive reasoning and the relevance of these concepts in the school curriculum. Provides ideas for approaching this material with students. (Author/DDD)

  18. The language faculty that wasn't: a usage-based account of natural language recursion

    PubMed Central

    Christiansen, Morten H.; Chater, Nick

    2015-01-01

    In the generative tradition, the language faculty has been shrinking—perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on domain-general sequence learning abilities. Evidence from genetics, comparative work on non-human primates, and cognitive neuroscience suggests that humans have evolved complex sequence learning skills, which were subsequently pressed into service to accommodate language. Constraints on sequence learning therefore have played an important role in shaping the cultural evolution of linguistic structure, including our limited abilities for processing recursive structure. Finally, we re-evaluate some of the key considerations that have often been taken to require the postulation of a language faculty. PMID:26379567

  19. The language faculty that wasn't: a usage-based account of natural language recursion.

    PubMed

    Christiansen, Morten H; Chater, Nick

    2015-01-01

    In the generative tradition, the language faculty has been shrinking-perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on domain-general sequence learning abilities. Evidence from genetics, comparative work on non-human primates, and cognitive neuroscience suggests that humans have evolved complex sequence learning skills, which were subsequently pressed into service to accommodate language. Constraints on sequence learning therefore have played an important role in shaping the cultural evolution of linguistic structure, including our limited abilities for processing recursive structure. Finally, we re-evaluate some of the key considerations that have often been taken to require the postulation of a language faculty.

  20. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
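
    For reference, the conventional single-trajectory SA loop that RBSA generalizes fits in a few lines; the recursive branching, trust regions, and optional gradient refinement are not shown. A minimal sketch minimizing a multimodal one-dimensional test function, with an arbitrary cooling schedule and step size:

```python
import math
import random

def simulated_annealing(objective, x0, step=1.0, t0=5.0, cooling=0.995,
                        n_iter=20000, seed=0):
    """Conventional single-trajectory simulated annealing (minimization)."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    temp = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)        # random neighbouring configuration
        fc = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling                            # lower the annealing temperature
    return best_x, best_f

# Multimodal test objective with its global minimum at x = 0.
obj = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
print(simulated_annealing(obj, x0=8.0))
```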

  1. Theory of Mind Development in Adolescence and Early Adulthood: The Growing Complexity of Recursive Thinking Ability

    PubMed Central

    Valle, Annalisa; Massaro, Davide; Castelli, Ilaria; Marchetti, Antonella

    2015-01-01

    This study explores the development of theory of mind, operationalized as recursive thinking ability, from adolescence to early adulthood (N = 110; young adolescents = 47; adolescents = 43; young adults = 20). The construct of theory of mind has been operationalized in two different ways: as the ability to recognize the correct mental state of a character, and as the ability to attribute the correct mental state in order to predict the character’s behaviour. The Imposing Memory Task, with five recursive thinking levels, and a third-order false-belief task with three recursive thinking levels (devised for this study) have been used. The relationship among working memory, executive functions, and linguistic skills are also analysed. Results show that subjects exhibit less understanding of elevated recursive thinking levels (third, fourth, and fifth) compared to the first and second levels. Working memory is correlated with total recursive thinking, whereas performance on the linguistic comprehension task is related to third level recursive thinking in both theory of mind tasks. An effect of age on third-order false-belief task performance was also found. A key finding of the present study is that the third-order false-belief task shows significant age differences in the application of recursive thinking that involves the prediction of others’ behaviour. In contrast, such an age effect is not observed in the Imposing Memory Task. These results may support the extension of the investigation of the third order false belief after childhood. PMID:27247645

  2. Improving the Statistical Modeling of the TRMM Extreme Precipitation Monitoring System

    NASA Astrophysics Data System (ADS)

    Demirdjian, L.; Zhou, Y.; Huffman, G. J.

    2016-12-01

    This project improves upon an existing extreme precipitation monitoring system based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach, where data from similar grid locations are pooled to increase the quality and stability of the resulting model parameter estimates to compensate for the short data record. The regional frequency analysis is divided into two stages. In the first stage, the region defined by the TRMM measurements is partitioned into approximately 27,000 non-overlapping clusters using a recursive k-means clustering scheme. In the second stage, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of utilizing the block-maxima approach used in the existing system, where annual maxima are fit to the Generalized Extreme Value (GEV) probability distribution at each cluster separately, the present work adopts the peak-over-threshold (POT) method of classifying points as extreme if they exceed a pre-specified threshold. Theoretical considerations motivate the use of the Generalized-Pareto (GP) distribution for fitting threshold exceedances. The fitted parameters can be used to construct simple and intuitive average recurrence interval (ARI) maps which reveal how rare a particular precipitation event is given its spatial location. The new methodology eliminates much of the random noise that was produced by the existing models due to a short data record, producing more reasonable ARI maps when compared with NOAA's long-term Climate Prediction Center (CPC) ground based observations. The resulting ARI maps can be useful for disaster preparation, warning, and management, as well as increased public awareness of the severity of precipitation events. Furthermore, the proposed methodology can be applied to various other extreme climate records.
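
    The second-stage model can be sketched directly: treat daily values above a high threshold as exceedances, fit a Generalized Pareto distribution to the excesses, and convert the fit into a return level for a chosen average recurrence interval. The sketch below uses scipy on simulated daily rainfall for a single cluster; the clustering stage, the TRMM data handling, and all numerical choices (threshold quantile, record length, distribution of the fake data) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=30 * 365)   # ~30 years of daily totals (mm)

threshold = np.quantile(daily_rain, 0.95)                      # POT threshold
excess = daily_rain[daily_rain > threshold] - threshold

# Fit the Generalized Pareto distribution to the threshold excesses.
xi, loc, sigma = genpareto.fit(excess, floc=0.0)

# Return level for an average recurrence interval of T years:
# x_T = u + (sigma / xi) * ((lambda_u * T)**xi - 1), lambda_u = exceedances per year.
lam = excess.size / 30.0
T = 100.0
x_T = threshold + (sigma / xi) * ((lam * T) ** xi - 1.0)
print(f"shape={xi:.3f}, scale={sigma:.2f}, 100-year ARI level ≈ {x_T:.1f} mm")
```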

  3. Dose Escalation of Whole-Brain Radiotherapy for Brain Metastases From Melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rades, Dirk, E-mail: Rades.Dirk@gmx.ne; Heisterkamp, Christine; Huttenlocher, Stefan

    2010-06-01

    Purpose: The majority of patients with brain metastases from melanoma receive whole-brain radiotherapy (WBRT). However, the results are poor. Hypofractionation regimens failed to improve the outcome of these patients. This study investigates a potential benefit from escalation of the WBRT dose beyond the 'standard' regimen 30 Gy in 10 fractions (10x3 Gy). Methods and Materials: Data from 51 melanoma patients receiving WBRT alone were retrospectively analyzed. A dosage of 10x3 Gy (n = 33) was compared with higher doses including 40 Gy/20 fractions (n = 11) and 45 Gy/15 fractions (n = 7) for survival (OS) and local (intracerebral) control (LC). Additional potential prognostic factors were evaluated: age, gender, performance status, number of metastases, extracerebral metastases, and recursive partitioning analysis (RPA) class. Results: At 6 months, OS rates were 27% after 10x3 Gy and 50% after higher doses (p = 0.009). The OS rates at 12 months were 4% and 20%. On multivariate analysis, higher WBRT doses (p = 0.010), fewer than four brain metastases (p = 0.012), no extracerebral metastases (p = 0.006), and RPA class 1 (p = 0.005) were associated with improved OS. The LC rates at 6 months were 23% after 10x3 Gy and 50% after higher doses (p = 0.021). The LC rates at 12 months were 0% and 13%. On multivariate analysis, higher WBRT doses (p = 0.020) and fewer than four brain metastases (p = 0.002) were associated with better LC. Conclusions: Given the limitations of a retrospective study, the findings suggest that patients with brain metastases from melanoma receiving WBRT alone may benefit from dose escalation beyond 10x3 Gy. The hypothesis generated by this study must be confirmed in a randomized trial stratifying for significant prognostic factors.

  4. A Machine Learning Approach to Identifying Placebo Responders in Late-Life Depression Trials.

    PubMed

    Zilcha-Mano, Sigal; Roose, Steven P; Brown, Patrick J; Rutherford, Bret R

    2018-01-11

    Despite efforts to identify characteristics associated with medication-placebo differences in antidepressant trials, few consistent findings have emerged to guide participant selection in drug development settings and differential therapeutics in clinical practice. Limitations in the methodologies used, particularly searching for a single moderator while treating all other variables as noise, may partially explain the failure to generate consistent results. The present study tested whether interactions between pretreatment patient characteristics, rather than a single-variable solution, may better predict who is most likely to benefit from placebo versus medication. Data were analyzed from 174 patients aged 75 years and older with unipolar depression who were randomly assigned to citalopram or placebo. Model-based recursive partitioning analysis was conducted to identify the most robust significant moderators of placebo versus citalopram response. The greatest signal detection between medication and placebo in favor of medication was among patients with fewer years of education (≤12) who suffered from a longer duration of depression since their first episode (>3.47 years) (B = 2.53, t(32) = 3.01, p = 0.004). Compared with medication, placebo had the greatest response for those who were more educated (>12 years), to the point where placebo almost outperformed medication (B = -0.57, t(96) = -1.90, p = 0.06). Machine learning approaches capable of evaluating the contributions of multiple predictor variables may be a promising methodology for identifying placebo versus medication responders. Duration of depression and education should be considered in the efforts to modulate placebo magnitude in drug development settings and in clinical practice. Copyright © 2018 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  5. Selective Perioperative Administration of Pasireotide is More Cost-Effective Than Routine Administration for Pancreatic Fistula Prophylaxis.

    PubMed

    Denbo, Jason W; Slack, Rebecca S; Bruno, Morgan; Cloyd, Jordan M; Prakash, Laura; Fleming, Jason B; Kim, Michael P; Aloia, Thomas A; Vauthey, Jean-Nicolas; Lee, Jeffrey E; Katz, Matthew H G

    2017-04-01

    In a randomized trial, pasireotide significantly decreased the incidence and severity of postoperative pancreatic fistula (POPF). Subsequent analyses concluded that its routine use is cost-effective. We hypothesized that selective administration of the drug to patients at high risk for POPF would be more cost-effective. Consecutive patients who did not receive pasireotide and underwent pancreatoduodenectomy (PD) or distal pancreatectomy (DP) between July 2011 and January 2014 were distributed into groups based on their risk of POPF using a multivariate recursive partitioning regression tree analysis (RPA) of preoperative clinical factors. The costs of treating hypothetical patients in each risk group were then computed based upon actual institutional hospital costs and previously published relative risk values associated with pasireotide. Among 315 patients who underwent pancreatectomy, grade B/C POPF occurred in 64 (20%). RPA allocated patients who underwent PD into four groups with a risk for grade B/C POPF of 0, 10, 29, or 60% (P < 0.001) on the basis of diagnosis, pancreatic duct diameter, and body mass index. Patients who underwent DP were allocated to three groups with a grade B/C POPF risk of 14, 26, or 44% (P = 0.05) on the basis of pancreatic duct diameter alone. Although the routine administration of pasireotide to all 315 patients would have theoretically saved $30,892 over standard care, restriction of pasireotide to only patients at high risk for POPF would have led to a cost savings of $831,916. Preoperative clinical characteristics can be used to characterize patients' risk for POPF following pancreatectomy. Selective administration of pasireotide only to patients at high risk for grade B/C POPF may maximize the cost-efficacy of prophylactic pasireotide.
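
    The cost comparison reduces to expected-value arithmetic over the risk groups: weigh each group's POPF probability, with and without the drug's relative-risk reduction, against the cost of the drug and the cost of managing a fistula. The group sizes, costs, and relative risk in the sketch below are purely hypothetical placeholders used to show the calculation; they are not the study's inputs or results.

```python
# Hypothetical inputs (illustrative only, not the study's actual figures).
groups = [                      # (number of patients, baseline grade B/C POPF risk)
    (80, 0.00), (90, 0.10), (80, 0.29), (65, 0.60),
]
drug_cost = 2_000               # per patient treated with pasireotide (assumed)
popf_cost = 25_000              # incremental cost of managing a grade B/C POPF (assumed)
relative_risk = 0.5             # assumed POPF risk ratio with pasireotide

def expected_cost(treat):
    """Total expected cost when `treat(risk)` decides who receives the drug."""
    total = 0.0
    for n, risk in groups:
        if treat(risk):
            total += n * (drug_cost + relative_risk * risk * popf_cost)
        else:
            total += n * risk * popf_cost
    return total

routine = expected_cost(lambda risk: True)            # give the drug to everyone
selective = expected_cost(lambda risk: risk >= 0.29)  # only the high-risk groups
none = expected_cost(lambda risk: False)
print(f"no prophylaxis: {none:,.0f}  routine: {routine:,.0f}  selective: {selective:,.0f}")
```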

  6. The phenotypic manifestations of rare genic CNVs in autism spectrum disorder

    PubMed Central

    Merikangas, A K; Segurado, R; Heron, E A; Anney, R J L; Paterson, A D; Cook, E H; Pinto, D; Scherer, S W; Szatmari, P; Gill, M; Corvin, A P; Gallagher, L

    2015-01-01

    Significant evidence exists for the association between copy number variants (CNVs) and Autism Spectrum Disorder (ASD); however, most of this work has focused solely on the diagnosis of ASD. There is limited understanding of the impact of CNVs on the ‘sub-phenotypes' of ASD. The objective of this paper is to evaluate associations between CNVs in differentially brain expressed (DBE) genes or genes previously implicated in ASD/intellectual disability (ASD/ID) and specific sub-phenotypes of ASD. The sample consisted of 1590 cases of European ancestry from the Autism Genome Project (AGP) with a diagnosis of an ASD and at least one rare CNV impacting any gene and a core set of phenotypic measures, including symptom severity, language impairments, seizures, gait disturbances, intelligence quotient (IQ) and adaptive function, as well as paternal and maternal age. Classification analyses using a non-parametric recursive partitioning method (random forests) were employed to define sets of phenotypic characteristics that best classify the CNV-defined groups. There was substantial variation in the classification accuracy of the two sets of genes. The best variables for classification were verbal IQ for the ASD/ID genes, paternal age at birth for the DBE genes and adaptive function for de novo CNVs. CNVs in the ASD/ID list were primarily associated with communication and language domains, whereas CNVs in DBE genes were related to broader manifestations of adaptive function. To our knowledge, this is the first study to examine the associations between sub-phenotypes and CNVs genome-wide in ASD. This work highlights the importance of examining the diverse sub-phenotypic manifestations of CNVs in ASD, including the specific features, comorbid conditions and clinical correlates of ASD that comprise underlying characteristics of the disorder. PMID:25421404
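
    As a generic illustration of the classification strategy (random forests trained on phenotypic measures, then ranked by variable importance), the sketch below uses scikit-learn on simulated data; the three features are placeholders and the data are not from the AGP sample.

      # Random-forest classification of a CNV-defined group from phenotypic
      # measures, ranked by impurity-based variable importance.
      # Simulated data; the three feature names are placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n = 500
      X = np.column_stack([
          rng.normal(100, 15, n),   # verbal IQ (hypothetical)
          rng.normal(33, 6, n),     # paternal age at birth (hypothetical)
          rng.normal(70, 10, n),    # adaptive-function composite (hypothetical)
      ])
      y = (X[:, 0] + rng.normal(0, 10, n) < 95).astype(int)   # simulated CNV group label

      rf = RandomForestClassifier(n_estimators=500, random_state=0)
      print("cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
      rf.fit(X, y)
      for name, imp in zip(["verbal IQ", "paternal age", "adaptive function"],
                           rf.feature_importances_):
          print(f"{name}: importance {imp:.3f}")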

  7. The phenotypic manifestations of rare genic CNVs in autism spectrum disorder.

    PubMed

    Merikangas, A K; Segurado, R; Heron, E A; Anney, R J L; Paterson, A D; Cook, E H; Pinto, D; Scherer, S W; Szatmari, P; Gill, M; Corvin, A P; Gallagher, L

    2015-11-01

    Significant evidence exists for the association between copy number variants (CNVs) and Autism Spectrum Disorder (ASD); however, most of this work has focused solely on the diagnosis of ASD. There is limited understanding of the impact of CNVs on the 'sub-phenotypes' of ASD. The objective of this paper is to evaluate associations between CNVs in differentially brain expressed (DBE) genes or genes previously implicated in ASD/intellectual disability (ASD/ID) and specific sub-phenotypes of ASD. The sample consisted of 1590 cases of European ancestry from the Autism Genome Project (AGP) with a diagnosis of an ASD and at least one rare CNV impacting any gene and a core set of phenotypic measures, including symptom severity, language impairments, seizures, gait disturbances, intelligence quotient (IQ) and adaptive function, as well as paternal and maternal age. Classification analyses using a non-parametric recursive partitioning method (random forests) were employed to define sets of phenotypic characteristics that best classify the CNV-defined groups. There was substantial variation in the classification accuracy of the two sets of genes. The best variables for classification were verbal IQ for the ASD/ID genes, paternal age at birth for the DBE genes and adaptive function for de novo CNVs. CNVs in the ASD/ID list were primarily associated with communication and language domains, whereas CNVs in DBE genes were related to broader manifestations of adaptive function. To our knowledge, this is the first study to examine the associations between sub-phenotypes and CNVs genome-wide in ASD. This work highlights the importance of examining the diverse sub-phenotypic manifestations of CNVs in ASD, including the specific features, comorbid conditions and clinical correlates of ASD that comprise underlying characteristics of the disorder.

  8. A clinical return-to-work rule for patients with back pain.

    PubMed

    Dionne, Clermont E; Bourbonnais, Renée; Frémont, Pierre; Rossignol, Michel; Stock, Susan R; Larocque, Isabelle

    2005-06-07

    Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings. This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (participation, 68.4% of potential participants expected to be eligible) aged 18-64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was "return to work in good health" at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of our study subjects, then validated on the rest. The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%- 94%). A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at a low risk of an adverse occupational outcome.
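
    The development-validation design is straightforward to mirror in code: grow a classification tree on a 40% random sample, then compute the negative predictive value on the held-out 60%. The sketch below does exactly that on simulated data with placeholder predictors.

      # Development/validation split in the spirit of the study: grow a
      # classification tree on a 40% random sample, validate on the other 60%,
      # and report the negative predictive value. Simulated placeholder data.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(2)
      n = 1000
      X = rng.normal(size=(n, 7))    # stand-ins for the 7 baseline predictors
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1).astype(int)   # 1 = adverse outcome

      X_dev, X_val, y_dev, y_val = train_test_split(X, y, train_size=0.4, random_state=0)
      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X_dev, y_dev)

      tn, fp, fn, tp = confusion_matrix(y_val, tree.predict(X_val)).ravel()
      npv = tn / (tn + fn)   # share of predicted "no adverse outcome" that is correct
      print(f"negative predictive value on the validation set: {npv:.2f}")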

  9. Factors affecting exits from homelessness among persons with serious mental illness and substance use disorders.

    PubMed

    Gabrielian, Sonya; Bromley, Elizabeth; Hellemann, Gerhard S; Kern, Robert S; Goldenson, Nicholas I; Danley, Megan E; Young, Alexander S

    2015-04-01

    We sought to understand the housing trajectories of homeless consumers with serious mental illness (SMI) and co-occurring substance use disorders (SUD) and to identify factors that best predicted achievement of independent housing. Using administrative data, we identified homeless persons with SMI and SUD admitted to a residential rehabilitation program from December 2008 to November 2011. Our primary outcome measure was independent housing status. On a random sample (N = 36), we assessed a range of potential predictors of housing outcomes, including symptoms, cognition, and social/community supports. We used the Residential Time-Line Follow-Back (TLFB) Inventory to gather housing histories since exiting rehabilitation and to identify housing outcomes. We used Recursive Partitioning (RP) to identify variables that best differentiated participants by these outcomes. We identified 3 housing trajectories: stable housing (n = 14), unstable housing (n = 15), and continuously engaged in housing services (n = 7). In RP analysis, 2 variables (Symbol Digit Modalities Test [SDMT], a neurocognitive speed of processing measure, and Behavior and Symptom Identification Scale [BASIS-24] Relationships subscale, which quantifies symptoms affecting relationships) were sufficient to capture information provided by 26 predictors to classify participants by housing outcome. Participants predicted to continuously engage in services had impaired processing speeds (SDMT score < 32.5). Among consumers with SDMT score ≥ 32.5, those predicted to achieve stable housing had fewer interpersonal symptoms (BASIS-24 Relationships subscale score < 0.81) than those predicted to have unstable housing. This model explains 57% of this sample's variability and 14% of this population's variability in housing outcomes. Because cognition and symptoms influencing relationships predicted housing outcomes for homeless adults with SMI and SUD, cognitive and social skills training may be useful for this population. © Copyright 2015 Physicians Postgraduate Press, Inc.
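
    The final two-variable model is compact enough to state as nested cut-offs; the snippet below simply encodes the thresholds reported in the abstract (SDMT < 32.5, BASIS-24 Relationships subscale < 0.81) for illustration, not as a validated clinical tool.

      # The two-variable rule from the abstract written out as nested cut-offs.
      # Illustrative only; not a validated clinical instrument.
      def predicted_housing_trajectory(sdmt_score, basis24_relationships):
          if sdmt_score < 32.5:               # impaired processing speed
              return "continuously engaged in housing services"
          if basis24_relationships < 0.81:    # fewer symptoms affecting relationships
              return "stable housing"
          return "unstable housing"

      print(predicted_housing_trajectory(30.0, 0.50))   # continuously engaged
      print(predicted_housing_trajectory(40.0, 0.50))   # stable housing
      print(predicted_housing_trajectory(40.0, 1.20))   # unstable housing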

  10. Predicting Implantation Outcome of In Vitro Fertilization and Intracytoplasmic Sperm Injection Using Data Mining Techniques.

    PubMed

    Hafiz, Pegah; Nematollahi, Mohtaram; Boostani, Reza; Namavar Jahromi, Bahia

    2017-10-01

    In vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI) are two important subsets of the assisted reproductive techniques, used for the treatment of infertility. Predicting implantation outcome of IVF/ICSI or the chance of pregnancy is essential for infertile couples, since these treatments are complex and expensive with a low probability of conception. In this cross-sectional study, the data of 486 patients were collected using the census method. The IVF/ICSI dataset contains 29 variables along with an identifier for each patient that is either negative or positive. Mean accuracy and mean area under the receiver operating characteristic (ROC) curve are calculated for the classifiers. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios of classifiers are employed as indicators of performance. The state-of-the-art classifiers that are candidates for this study include support vector machines, recursive partitioning (RPART), random forest (RF), adaptive boosting, and one-nearest neighbor. RF and RPART outperform the other comparable methods. The results revealed the areas under the ROC curve (AUC) as 84.23 and 82.05%, respectively. The importance of IVF/ICSI features was extracted from the output of RPART. Our findings demonstrate that the probability of pregnancy is low for women aged above 38. Classifiers RF and RPART are better at predicting IVF/ICSI cases compared to other decision makers that were tested in our study. Elicited decision rules of RPART determine useful predictive features of IVF/ICSI. Out of 20 factors, the age of woman, number of developed embryos, and serum estradiol level on the day of human chorionic gonadotropin administration are the three best features for such prediction. Copyright © by Royan Institute. All rights reserved.
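
    A cross-validated comparison of the classifier families named above can be sketched in a few lines of scikit-learn; the data here are simulated stand-ins for the 29-variable IVF/ICSI dataset, so the AUC values are illustrative only.

      # Cross-validated AUC comparison across the classifier families named in the
      # abstract, run on simulated data standing in for the 29-variable dataset.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
      from sklearn.neighbors import KNeighborsClassifier

      X, y = make_classification(n_samples=486, n_features=29, n_informative=8, random_state=0)
      models = {
          "SVM": SVC(),
          "RPART-like tree": DecisionTreeClassifier(max_depth=4),
          "random forest": RandomForestClassifier(n_estimators=300),
          "AdaBoost": AdaBoostClassifier(),
          "1-NN": KNeighborsClassifier(n_neighbors=1),
      }
      for name, model in models.items():
          auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          print(f"{name:16s} mean AUC = {auc:.3f}")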

  11. A clinical return-to-work rule for patients with back pain

    PubMed Central

    Dionne, Clermont E.; Bourbonnais, Renée; Frémont, Pierre; Rossignol, Michel; Stock, Susan R.; Larocque, Isabelle

    2005-01-01

    Background Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings. Methods This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (participation, 68.4% of potential participants expected to be eligible) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of our study subjects, then validated on the rest. Results The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%– 94%). Interpretation A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at a low risk of an adverse occupational outcome. PMID:15939915

  12. Meyer-Overton reforged: The origins of alcohol and anesthetic potency in membranes as determined by a new NMR partitioning probe, benzyl alcohol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janes, N.; Ma, L.; Hsu, J.W.

    1992-01-01

    The Meyer-Overton hypothesis--that anesthesia arises from the nonspecific action of solutes on membrane lipids--is reformulated using colligative thermodynamics. Configurational entropy, the randomness imparted by the solute through the partitioning process, is implicated as the energetic driving force that perturbs cooperative membrane equilibria. A proton NMR partitioning approach based on the anesthetic benzyl alcohol is developed to assess the reformulation. Ring resonances from the partitioned drug are shielded by 0.2 ppm and resolved from the free, aqueous drug. Free alcohol is quantitated in dilute lipid dispersions using an acetate internal standard. Cooperative equilibria in model dipalmitoyl lecithin membranes are examined with changes in temperature and alcohol concentration. The Lβ′ …

  13. The partition function of the Bures ensemble as the τ-function of BKP and DKP hierarchies: continuous and discrete

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Li, Shi-Hao

    2017-07-01

    The relationship between matrix integrals and integrable systems was revealed more than 20 years ago. As is known, matrix integrals over a Gaussian ensemble used in random matrix theory could act as the τ-function of several hierarchies of integrable systems. In this article, we will show that the time-dependent partition function of the Bures ensemble, whose measure has many interesting geometric properties, could act as the τ-function of BKP and DKP hierarchies. In addition, if discrete time variables are introduced, then this partition function could act as the τ-function of discrete BKP and DKP hierarchies. In particular, there are some links between the partition function of the Bures ensemble and Toda-type equations.

  14. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variant, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
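
    A stripped-down, one-dimensional version of the idea is easy to simulate: estimate each adjacent-pixel gain ratio as the temporal median of the response ratios, then accumulate the gains recursively along the row. The sketch below does this on synthetic data and is not the paper's full 2-D method or its error-propagation analysis.

      # Simplified 1-D illustration of the idea: the gain ratio of adjacent pixels is
      # estimated as the temporal median of their response ratios, and per-pixel gains
      # are then accumulated recursively along the row (anchored at gain[0] = 1).
      # Simulated data; offsets and the 2-D formulation are ignored.
      import numpy as np

      rng = np.random.default_rng(3)
      n_frames, n_pixels = 2000, 16
      true_gain = 1.0 + 0.1 * rng.standard_normal(n_pixels)      # fixed-pattern gain
      scene = rng.uniform(50, 200, size=(n_frames, n_pixels))    # changing scene radiance
      frames = scene * true_gain + rng.normal(0, 1, size=(n_frames, n_pixels))

      gain_est = np.ones(n_pixels)
      for i in range(n_pixels - 1):
          # temporal median of the adjacent-pixel response ratio ~ gain[i+1] / gain[i]
          gain_est[i + 1] = gain_est[i] * np.median(frames[:, i + 1] / frames[:, i])

      # Compare estimated gains (anchored at pixel 0) with the true fixed pattern
      print(np.round(gain_est * true_gain[0], 3))
      print(np.round(true_gain, 3))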

  15. Eigenvalues of normalized Laplacian matrices of fractal trees and dendrimers: Analytical results and applications

    NASA Astrophysics Data System (ADS)

    Julaiti, Alafate; Wu, Bin; Zhang, Zhongzhi

    2013-05-01

    The eigenvalues of the normalized Laplacian matrix of a network play an important role in its structural and dynamical aspects associated with the network. In this paper, we study the spectra and their applications of normalized Laplacian matrices of a family of fractal trees and dendrimers modeled by Cayley trees, both of which are built in an iterative way. For the fractal trees, we apply the spectral decimation approach to determine analytically all the eigenvalues and their corresponding multiplicities, with the eigenvalues provided by a recursive relation governing the eigenvalues of networks at two successive generations. For Cayley trees, we show that all their eigenvalues can be obtained by computing the roots of several small-degree polynomials defined recursively. By using the relation between normalized Laplacian spectra and eigentime identity, we derive the explicit solution to the eigentime identity for random walks on the two treelike networks, the leading scalings of which follow quite different behaviors. In addition, we corroborate the obtained eigenvalues and their degeneracies through the link between them and the number of spanning trees.
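
    A brute-force numerical check complements the analytical recursions: for a small balanced (Cayley-like) tree, the normalized Laplacian eigenvalues can be computed directly and verified to lie in [0, 2]. The sketch below uses networkx and numpy for that check only; it does not reproduce the paper's closed-form results.

      # Brute-force check on a small tree: all eigenvalues of the normalized
      # Laplacian lie in [0, 2] and the smallest is 0. The paper instead derives
      # the spectrum analytically through recursions; this is only a sanity check.
      import numpy as np
      import networkx as nx

      G = nx.balanced_tree(r=3, h=4)                       # a Cayley-tree-like example
      L = nx.normalized_laplacian_matrix(G).toarray()
      eigvals = np.sort(np.linalg.eigvalsh(L))

      print("nodes:", G.number_of_nodes())
      print("smallest and largest eigenvalues:", eigvals[0].round(6), eigvals[-1].round(6))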

  16. Probabilistic Multi-Person Tracking Using Dynamic Bayes Networks

    NASA Astrophysics Data System (ADS)

    Klinger, T.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Tracking-by-detection is a widely used practice in recent tracking systems. These usually rely on independent single frame detections that are handled as observations in a recursive estimation framework. If these observations are imprecise, the generated trajectory is prone to be updated towards a wrong position. In contrast to existing methods, our novel approach uses a Dynamic Bayes Network in which the state vector of a recursive Bayes filter, as well as the location of the tracked object in the image, are modelled as unknowns. These unknowns are estimated in a probabilistic framework taking into account a dynamic model, and a state-of-the-art pedestrian detector and classifier. The classifier is based on the Random Forest algorithm and is capable of being trained incrementally so that new training samples can be incorporated at runtime. This allows the classifier to adapt to the changing appearance of a target and to unlearn outdated features. The approach is evaluated on a publicly available benchmark. The results confirm that our approach is well suited for tracking pedestrians over long distances while at the same time achieving comparatively good geometric accuracy.

  17. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
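
    The key computational point is that each new nonuniform sample can update all N frequency bins in O(N) work, with no interpolation onto a uniform grid. The sketch below illustrates that general idea with a plain iterative nonuniform DFT update; it is not the paper's exact RFT formulation.

      # Iteratively updated Fourier transform for nonuniformly sampled data: each
      # new sample (t_n, x_n) updates all N frequency bins in O(N) work, with no
      # interpolation onto a uniform grid. A generic sketch, not the paper's RFT.
      import numpy as np

      freqs = np.linspace(0.01, 0.5, 128)             # analysis frequencies in Hz
      coeffs = np.zeros(len(freqs), dtype=complex)    # running Fourier coefficients

      def update(coeffs, t_n, x_n):
          """Fold one nonuniform sample into every frequency bin (O(N) per sample)."""
          return coeffs + x_n * np.exp(-2j * np.pi * freqs * t_n)

      rng = np.random.default_rng(4)
      times = np.cumsum(rng.uniform(0.7, 1.3, size=600))   # irregular beat-like times
      signal = np.sin(2 * np.pi * 0.1 * times)             # 0.1 Hz oscillation
      for t_n, x_n in zip(times, signal):
          coeffs = update(coeffs, t_n, x_n)

      power = np.abs(coeffs) ** 2 / len(times)
      print("peak frequency estimate:", freqs[np.argmax(power)].round(3), "Hz")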

  18. Recursive processes in self-affirmation: intervening to close the minority achievement gap.

    PubMed

    Cohen, Geoffrey L; Garcia, Julio; Purdie-Vaughns, Valerie; Apfel, Nancy; Brzustoski, Patricia

    2009-04-17

    A 2-year follow-up of a randomized field experiment previously reported in Science is presented. A subtle intervention to lessen minority students' psychological threat related to being negatively stereotyped in school was tested in an experiment conducted three times with three independent cohorts (N = 133, 149, and 134). The intervention, a series of brief but structured writing assignments focusing students on a self-affirming value, reduced the racial achievement gap. Over 2 years, the grade point average (GPA) of African Americans was, on average, raised by 0.24 grade points. Low-achieving African Americans were particularly benefited. Their GPA improved, on average, 0.41 points, and their rate of remediation or grade repetition was less (5% versus 18%). Additionally, treated students' self-perceptions showed long-term benefits. Findings suggest that because initial psychological states and performance determine later outcomes by providing a baseline and initial trajectory for a recursive process, apparently small but early alterations in trajectory can have long-term effects. Implications for psychological theory and educational practice are discussed.

  19. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This is a proposal for an efficient very-large-scale integration (VLSI) design, 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG system, implemented with TSMC 40 nm CMOS technology. ORICA is appropriate to be used in real-time EEG system to separate artifacts because of its highly efficient and real-time process features. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], this proposed ORICA processor has enhanced effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, shared arithmetic processing unit, and shared registers. The 16-channel random signals which contain 8-channel super-Gaussian and 8-channel sub-Gaussian components are used to analyze the dependence of the source components, and the average correlation coefficient is 0.95452 between the original source signals and extracted ORICA signals. Finally, the proposed ORICA processor ASIC is implemented with TSMC 40 nm CMOS technology, and it consumes 15.72 mW at 100 MHz operating frequency.

  20. Single-cell full-length total RNA sequencing uncovers dynamics of recursive splicing and enhancer RNAs.

    PubMed

    Hayashi, Tetsutaro; Ozaki, Haruka; Sasagawa, Yohei; Umeda, Mana; Danno, Hiroki; Nikaido, Itoshi

    2018-02-12

    Total RNA sequencing has been used to reveal poly(A) and non-poly(A) RNA expression, RNA processing and enhancer activity. To date, no method for full-length total RNA sequencing of single cells has been developed despite the potential of this technology for single-cell biology. Here we describe random displacement amplification sequencing (RamDA-seq), the first full-length total RNA-sequencing method for single cells. Compared with other methods, RamDA-seq shows high sensitivity to non-poly(A) RNA and near-complete full-length transcript coverage. Using RamDA-seq with differentiation time course samples of mouse embryonic stem cells, we reveal hundreds of dynamically regulated non-poly(A) transcripts, including histone transcripts and long noncoding RNA Neat1. Moreover, RamDA-seq profiles recursive splicing in >300-kb introns. RamDA-seq also detects enhancer RNAs and their cell type-specific activity in single cells. Taken together, we demonstrate that RamDA-seq could help investigate the dynamics of gene expression, RNA-processing events and transcriptional regulation in single cells.

  1. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of the lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, the pseudo random binary sequence (PRBS) and urban dynamic driving sequences (UDDSs) profiles are applied to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can present high accuracy and suitability for parameter identification without using open circuit voltage.
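
    For readers unfamiliar with the recursive identification step, the sketch below shows the textbook recursive least squares update with a forgetting factor on a simulated first-order ARX model; it is not the paper's bias-corrected (CRLS) variant or its incremental-analysis model.

      # Textbook recursive least squares with a forgetting factor, identifying the
      # parameters of a simulated first-order ARX model y[k] = a*y[k-1] + b*u[k] + e[k].
      # Not the paper's bias-corrected (CRLS) variant.
      import numpy as np

      rng = np.random.default_rng(5)
      a_true, b_true, n = 0.95, 0.4, 2000
      u = rng.standard_normal(n)            # PRBS-like excitation stand-in
      y = np.zeros(n)
      for k in range(1, n):
          y[k] = a_true * y[k - 1] + b_true * u[k] + 0.01 * rng.standard_normal()

      theta = np.zeros(2)        # parameter estimate [a, b]
      P = np.eye(2) * 1e3        # estimate covariance
      lam = 0.999                # forgetting factor

      for k in range(1, n):
          phi = np.array([y[k - 1], u[k]])              # regressor
          K = P @ phi / (lam + phi @ P @ phi)           # gain
          theta = theta + K * (y[k] - phi @ theta)      # parameter update
          P = (P - np.outer(K, phi @ P)) / lam          # covariance update

      print("estimated [a, b]:", theta.round(3), " true:", [a_true, b_true])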

  2. Recursive sequences in first-year calculus

    NASA Astrophysics Data System (ADS)

    Krainer, Thomas

    2016-02-01

    This article provides ready-to-use supplementary material on recursive sequences for a second-semester calculus class. It equips first-year calculus students with a basic methodical procedure based on which they can conduct a rigorous convergence or divergence analysis of many simple recursive sequences on their own without the need to invoke inductive arguments as is typically required in calculus textbooks. The sequences that are accessible to this kind of analysis are predominantly (eventually) monotonic, but also certain recursive sequences that alternate around their limit point as they converge can be considered.
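
    A typical sequence amenable to the monotonicity argument described here is a_{n+1} = sqrt(2 + a_n) with a_1 = 1, which increases toward the fixed point L = 2; a few iterations (sketch below) make the convergence visible.

      # Iterating a_{n+1} = sqrt(2 + a_n) from a_1 = 1: the terms increase
      # monotonically toward the fixed point L = 2 (the positive root of L*L = 2 + L).
      import math

      a = 1.0
      for n in range(1, 11):
          print(f"a_{n} = {a:.8f}")
          a = math.sqrt(2 + a)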

  3. On Fusing Recursive Traversals of K-d Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.

  4. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    PubMed

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  5. In silico models for the prediction of dose-dependent human hepatotoxicity

    NASA Astrophysics Data System (ADS)

    Cheng, Ailan; Dixon, Steven L.

    2003-12-01

    The liver is extremely vulnerable to the effects of xenobiotics due to its critical role in metabolism. Drug-induced hepatotoxicity may involve any number of different liver injuries, some of which lead to organ failure and, ultimately, patient death. Understandably, liver toxicity is one of the most important dose-limiting considerations in the drug development cycle, yet there remains a serious shortage of methods to predict hepatotoxicity from chemical structure. We discuss our latest findings in this area and present a new, fully general in silico model which is able to predict the occurrence of dose-dependent human hepatotoxicity with greater than 80% accuracy. Utilizing an ensemble recursive partitioning approach, the model classifies compounds as toxic or non-toxic and provides a confidence level to indicate which predictions are most likely to be correct. Only 2D structural information is required and predictions can be made quite rapidly, so this approach is entirely appropriate for data mining applications and for profiling large synthetic and/or virtual libraries.

  6. Anger, preoccupied attachment, and domain disorganization in borderline personality disorder

    PubMed Central

    Morse, Jennifer Q.; Hill, Jonathan; Pilkonis, Paul A.; Yaggi, Kirsten; Broyden, Nichaela; Stepp, Stephanie; Reed, Lawrence Ian; Feske, Ulrike

    2010-01-01

    Emotional dysregulation and attachment insecurity have been reported in borderline personality disorder (BPD). Domain disorganization, evidenced in poor regulation of emotions and behaviors in relation to the demands of different social domains, may be a distinguishing feature of BPD. Understanding the interplay between these factors may be critical for identifying interacting processes in BPD and potential subtypes of BPD. Therefore, we examined the joint and interactive effects of anger, preoccupied attachment, and domain disorganization on BPD traits in a clinical sample of 128 psychiatric patients. The results suggest that these factors contribute to BPD both independently and in interaction, even when controlling for other personality disorder traits and Axis I symptoms. In regression analyses, the interaction between anger and domain disorganization predicted BPD traits. In recursive partitioning analyses, two possible paths to BPD were identified: high anger combined with high domain disorganization and low anger combined with preoccupied attachment. These results may suggest possible subtypes of BPD or possible mechanisms by which BPD traits are established and maintained. PMID:19538080

  7. A Recursive Partitioning Method for the Prediction of Preference Rankings Based Upon Kemeny Distances.

    PubMed

    D'Ambrosio, Antonio; Heiser, Willem J

    2016-09-01

    Preference rankings usually depend on the characteristics of both the individuals judging a set of objects and the objects being judged. This topic has been handled in the literature with log-linear representations of the generalized Bradley-Terry model and, recently, with distance-based tree models for rankings. A limitation of these approaches is that they only work with full rankings or with a pre-specified pattern governing the presence of ties, and/or they are based on quite strict distributional assumptions. To overcome these limitations, we propose a new prediction tree method for ranking data that is totally distribution-free. It combines Kemeny's axiomatic approach to define a unique distance between rankings with the CART approach to find a stable prediction tree. Furthermore, our method is not limited by any particular design of the pattern of ties. The method is evaluated in an extensive full-factorial Monte Carlo study with a new simulation design.
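
    Kemeny's distance between two rank vectors counts pairwise disagreements; under one common convention a reversed pair contributes 2 and a tie against a strict ordering contributes 1. A minimal implementation is sketched below, independent of the tree-growing machinery in the paper.

      # Kemeny distance between two rank vectors (smaller rank = more preferred),
      # under one common convention: sum over object pairs of
      # |sgn(a_i - a_j) - sgn(b_i - b_j)|, so a reversed pair contributes 2 and a
      # tie against a strict ordering contributes 1.
      from itertools import combinations

      def sgn(x):
          return (x > 0) - (x < 0)

      def kemeny_distance(a, b):
          """a and b are equal-length sequences of ranks over the same objects."""
          return sum(abs(sgn(a[i] - a[j]) - sgn(b[i] - b[j]))
                     for i, j in combinations(range(len(a)), 2))

      print(kemeny_distance([1, 2, 3, 4], [1, 2, 3, 4]))   # 0: identical rankings
      print(kemeny_distance([1, 2, 3, 4], [4, 3, 2, 1]))   # 12: complete reversal (6 pairs x 2)
      print(kemeny_distance([1, 2, 3, 4], [1, 2, 3, 3]))   # 1: one tie against a strict pair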

  8. Anonymizing and Sharing Medical Text Records

    PubMed Central

    Li, Xiao-Bai; Qin, Jialun

    2017-01-01

    Health information technology has increased accessibility of health and medical data and benefited medical research and healthcare management. However, there are rising concerns about patient privacy in sharing medical and healthcare data. A large amount of these data are in free text form. Existing techniques for privacy-preserving data sharing deal largely with structured data. Current privacy approaches for medical text data focus on detection and removal of patient identifiers from the data, which may be inadequate for protecting privacy or preserving data quality. We propose a new systematic approach to extract, cluster, and anonymize medical text records. Our approach integrates methods developed in both data privacy and health informatics fields. The key novel elements of our approach include a recursive partitioning method to cluster medical text records based on the similarity of the health and medical information and a value-enumeration method to anonymize potentially identifying information in the text data. An experimental study is conducted using real-world medical documents. The results of the experiments demonstrate the effectiveness of the proposed approach. PMID:29569650

  9. Prognostic value of lymphovascular invasion of the primary tumor in hypopharyngeal carcinoma after total laryngopharyngectomy.

    PubMed

    Saito, Yuki; Omura, Go; Yasuhara, Kazuo; Rikitake, Ryoko; Akashi, Ken; Fukuoka, Osamu; Yoshida, Masafumi; Ando, Mizuo; Asakage, Takahiro; Yamasoba, Tatsuya

    2017-08-01

    We aimed to determine the prognostic value of lymphovascular invasion in the specimens resected during total laryngopharyngectomy for hypopharyngeal carcinoma. Patients who underwent total laryngopharyngectomy at our institution between 2004 and 2014 were included in this study and retrospectively analyzed. We then assessed vascular invasion and lymphatic invasion of the primary tumor in all cases. We reviewed 135 records (120 men and 15 women; age range, 36-84 years). Tumors with lymphatic invasion tended to be associated with more metastatic lymph nodes and extracapsular spread (ECS) of metastatic lymph nodes. Tumors with vascular invasion tended to be associated with nonpyriform sinus locations. In a multivariate analysis, nonpyriform sinus locations, >3 metastatic lymph nodes, and vascular invasion remained significant prognostic factors for overall survival (OS); in recursive partitioning analysis, ECS and vascular invasion remained important categorical variables for OS. Vascular invasion is a strong prognostic biomarker for advanced hypopharyngeal carcinoma. © 2017 Wiley Periodicals, Inc. Head Neck 39: 1535-1543, 2017.

  10. Proceedings of the second SISAL users' conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J T; Frerking, C; Miller, P J

    1992-12-01

    This report contains papers on the following topics: A sisal code for computing the Fourier transform on S_N; five ways to fill your knapsack; simulating material dislocation motion in sisal; candis as an interface for sisal; parallelisation and performance of the burg algorithm on a shared-memory multiprocessor; use of genetic algorithm in sisal to solve the file design problem; implementing FFTs in sisal; programming and evaluating the performance of signal processing applications in the sisal programming environment; sisal and Von Neumann-based languages: translation and intercommunication; an IF2 code generator for ADAM architecture; program partitioning for NUMA multiprocessor computer systems; mapping functional parallelism on distributed memory machines; implicit array copying: prevention is better than cure; mathematical syntax for sisal; an approach for optimizing recursive functions; implementing arrays in sisal 2.0; Fol: an object oriented extension to the sisal language; twine: a portable, extensible sisal execution kernel; and investigating the memory performance of the optimizing sisal compiler.

  11. Recursive Feature Extraction in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each node, as well as recursive summaries of neighbors' features.
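
    The recursive pattern can be sketched generically: start from a local feature such as degree and repeatedly append per-node summaries (mean and sum) of the neighbors' features from the previous round. The code below illustrates that pattern on a toy graph and is not the ReFeX implementation itself.

      # Generic recursive feature extraction on a graph: start with a local feature
      # (degree) and repeatedly append per-node summaries (mean and sum) of the
      # neighbors' features from the previous round. Illustrative; not ReFeX itself.
      import numpy as np
      import networkx as nx

      def recursive_features(G, rounds=2):
          nodes = list(G.nodes())
          index = {v: i for i, v in enumerate(nodes)}
          feats = np.array([[G.degree(v)] for v in nodes], dtype=float)   # base feature
          for _ in range(rounds):
              agg = []
              for v in nodes:
                  nbr = [index[u] for u in G.neighbors(v)]
                  block = feats[nbr] if nbr else np.zeros((1, feats.shape[1]))
                  agg.append(np.concatenate([block.mean(axis=0), block.sum(axis=0)]))
              feats = np.hstack([feats, np.array(agg)])   # append this round's recursive features
          return nodes, feats

      nodes, feats = recursive_features(nx.karate_club_graph(), rounds=2)
      print("feature matrix shape:", feats.shape)   # 34 nodes x (1 + 2 + 6) = 9 features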

  12. Recursion, Language, and Starlings

    ERIC Educational Resources Information Center

    Corballis, Michael C.

    2007-01-01

    It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…

  13. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  14. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    PubMed

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014 were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix GAPY(-1) based on a direct inversion of genomic relationship matrix on a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls, and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up GAPY(-1) for 569,404 genotyped animals with 10,000 core animals took 1.3h and 57 GB of memory. The validation reliability with APY reaches a plateau when the number of core animals is at least 10,000. Predictions with APY have little differences in reliability among definitions of core animals. Single-step genomic BLUP with APY is applicable to millions of genotyped animals. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
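
    Under APY, breeding values of noncore animals are modeled as linear functions of the core animals plus independent residuals, which is what gives the inverse its sparse block structure. The toy numpy sketch below builds that inverse following the commonly published block formula; it is offered as an illustration under that assumption, not as production code.

      # Toy illustration of the APY inverse: noncore breeding values are regressed
      # on the core animals, giving a sparse block-structured inverse in which only
      # the core block is inverted directly. Follows the commonly published block
      # formula; dimensions are tiny and purely illustrative.
      import numpy as np

      rng = np.random.default_rng(6)
      n_core, n_noncore = 5, 8
      Z = rng.standard_normal((n_core + n_noncore, 40))
      G = Z @ Z.T / 40 + 0.01 * np.eye(n_core + n_noncore)   # toy genomic relationship matrix

      Gcc, Gcn = G[:n_core, :n_core], G[:n_core, n_core:]
      Gnc, Gnn = G[n_core:, :n_core], G[n_core:, n_core:]

      Gcc_inv = np.linalg.inv(Gcc)           # only the core block is inverted directly
      P = Gnc @ Gcc_inv                      # regression of noncore on core animals
      m = np.diag(Gnn) - np.einsum("ij,ij->i", P, Gnc)   # residual variances (diagonal)
      M_inv = np.diag(1.0 / m)

      G_apy_inv = np.block([
          [Gcc_inv + P.T @ M_inv @ P, -P.T @ M_inv],
          [-M_inv @ P,                M_inv       ],
      ])

      # Check: G_apy_inv is the exact inverse of the APY approximation of G.
      G_apy = np.block([[Gcc, Gcc @ P.T], [P @ Gcc, P @ Gcc @ P.T + np.diag(m)]])
      print("max |G_apy_inv @ G_apy - I| =", np.abs(G_apy_inv @ G_apy - np.eye(len(G))).max())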

  15. Inner and Outer Recursive Neural Networks for Chemoinformatics Applications.

    PubMed

    Urban, Gregor; Subrahmanya, Niranjan; Baldi, Pierre

    2018-02-26

    Deep learning methods applied to problems in chemoinformatics often require the use of recursive neural networks to handle data with graphical structure and variable size. We present a useful classification of recursive neural network approaches into two classes, the inner and outer approach. The inner approach uses recursion inside the underlying graph, to essentially "crawl" the edges of the graph, while the outer approach uses recursion outside the underlying graph, to aggregate information over progressively longer distances in an orthogonal direction. We illustrate the inner and outer approaches on several examples. More importantly, we provide open-source implementations [available at www.github.com/Chemoinformatics/InnerOuterRNN and cdb.ics.uci.edu ] for both approaches in Tensorflow which can be used in combination with training data to produce efficient models for predicting the physical, chemical, and biological properties of small molecules.

  16. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c·x^k(1 − x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  17. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  18. Recursive Deadbeat Controller Design

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh Q.

    1997-01-01

    This paper presents a recursive algorithm for a deadbeat predictive controller design. The method combines together the concepts of system identification and deadbeat controller designs. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived satisfies simultaneously system identification and deadbeat controller design requirements. As soon as the coefficient matrices are identified satisfying the output prediction equation, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively just as any typical recursive system identification techniques.

  19. A basic recursion concept inventory

    NASA Astrophysics Data System (ADS)

    Hamouda, Sally; Edwards, Stephen H.; Elmongui, Hicham G.; Ernst, Jeremy V.; Shaffer, Clifford A.

    2017-04-01

    Recursion is both an important and a difficult topic for introductory Computer Science students. Students often develop misconceptions about the topic that need to be diagnosed and corrected. In this paper, we report on our initial attempts to develop a concept inventory that measures student misconceptions on basic recursion topics. We present a collection of misconceptions and difficulties encountered by students when learning introductory recursion as presented in a typical CS2 course. Based on this collection, a draft concept inventory in the form of a series of questions was developed and evaluated, with the question rubric tagged to the list of misconceptions and difficulties.

  20. The Paradigm Recursion: Is It More Accessible When Introduced in Middle School?

    ERIC Educational Resources Information Center

    Gunion, Katherine; Milford, Todd; Stege, Ulrike

    2009-01-01

    Recursion is a programming paradigm as well as a problem solving strategy thought to be very challenging to grasp for university students. This article outlines a pilot study, which expands the age range of students exposed to the concept of recursion in computer science through instruction in a series of interesting and engaging activities. In…

  1. A Preliminary Instrument for Measuring Students' Subjective Perceptions of Difficulties in Learning Recursion

    ERIC Educational Resources Information Center

    Lacave, Carmen; Molina, Ana I.; Redondo, Miguel A.

    2018-01-01

    Contribution: Findings are provided from an initial survey to evaluate the magnitude of the recursion problem from the student point of view. Background: A major difficulty that programming students must overcome--the learning of recursion--has been addressed by many authors, using various approaches, but none have considered how students perceive…

  2. Using Spreadsheets to Help Students Think Recursively

    ERIC Educational Resources Information Center

    Webber, Robert P.

    2012-01-01

    Spreadsheets lend themselves naturally to recursive computations, since a formula can be defined as a function of one or more preceding cells. A hypothesized closed form for the "n"th term of a recursive sequence can be tested easily by using a spreadsheet to compute a large number of the terms. Similarly, a conjecture about the limit of a series…

  3. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  4. CREATIVE COMPUTATION.

    DTIC Science & Technology

    (*ARTIFICIAL INTELLIGENCE, RECURSIVE FUNCTIONS), (*RECURSIVE FUNCTIONS, ARTIFICIAL INTELLIGENCE), (*MATHEMATICAL LOGIC, ARTIFICIAL INTELLIGENCE), METAMATHEMATICS, AUTOMATA, NUMBER THEORY, INFORMATION THEORY, COMBINATORIAL ANALYSIS

  5. On recursion.

    PubMed

    Watumull, Jeffrey; Hauser, Marc D; Roberts, Ian G; Hornstein, Norbert

    2014-01-08

    It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language-the faculty of language in the narrow sense (FLN)-is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independent of such output, whether infinite or finite, embedded or unembedded-existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled-and potentially unbounded-expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive.

  6. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena which have their statistical properties variable in time (or position in a 2-D image). Many image processing methods make underlying stationary assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.

  7. On recursion

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.; Roberts, Ian G.; Hornstein, Norbert

    2014-01-01

    It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independent of such output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive. PMID:24409164

  8. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩^E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩^E, spin-cutoff factors ⟨J_z^2⟩^E and strength sums ⟨O†O⟩^E, where O is a one-body transition operator.

  9. Serial turbo trellis coded modulation using a serially concatenated coder

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2010-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  10. Predictive Parameters of Symptomatic Hematochezia Following 5-Fraction Gantry-Based SABR in Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musunuru, Hima Bindu; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Davidson, Melanie

    2016-04-01

    Purpose: This study identified predictors of high-grade late hematochezia (HH) following 5-fraction gantry-based stereotactic ablative radiation therapy (SABR). Methods and Materials: Hematochezia data for 258 patients who received 35 to 40 Gy SABR in 5 fractions as part of sequential phase 2 prospective trials were retrieved. Grade 2 or higher late rectal bleeding was labeled HH. Hematochezia needing steroid suppositories, 4% formalin, or 1 to 2 sessions of argon plasma coagulation (APC) was labeled grade 2. More than 2 sessions of APC, blood transfusion, or a course of hyperbaric oxygen was grade 3, and development of visceral fistula, grade 4. Various dosimetric and clinical factors were analyzed using univariate and multivariate analyses. Receiver operating characteristic (ROC) curve analysis and recursive partitioning analysis were used to determine clinically valid cut-off points and identify risk groups, respectively. Results: HH was observed in 19.4%, grade ≥3 toxicity in 3.1%. Median follow-up was 29.7 months (interquartile range [IQR]: 20.6-61.7). Median time to develop HH was 11.7 months (IQR: 9.0-15.2) from the start of radiation. At 2 years, cumulative HH was 4.9%, 27.2%, and 42.1% in patients who received 35 Gy to prostate (4-mm planning target volume [PTV] margin), 40 Gy to prostate (5-mm PTV margin), and 40 Gy to prostate/seminal vesicles (5-mm PTV margin), respectively (P<.0001). In the ROC analysis, volume of rectum receiving a radiation dose of 38 Gy (V38) was a strong predictor of HH, with an area under the curve of 0.65. In multivariate analysis, rectal V38 (≥2.0 cm³; odds ratio [OR]: 4.7); use of anticoagulants in the follow-up period (OR: 6.5) and presence of hemorrhoids (OR: 2.7) were the strongest predictors. Recursive partitioning analysis showed rectal V38 < 2.0 cm³ and use of anticoagulants, or rectal V38 ≥ 2.0 cm³ plus 1 other risk factor, resulted in an HH risk of >30%. Conclusions: Rectal V38 and 2 clinical factors were strong predictors of HH following 5-fraction SABR. Planning constraints should keep rectal V38 below 2.0 cm³.
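
    The trial data are not available here; the sketch below only illustrates what a recursive partitioning analysis of the three reported predictors (rectal V38, anticoagulant use, hemorrhoids) looks like in code, using scikit-learn's CART implementation on synthetic data. The simulated risk model, reused sample size and thresholds are assumptions for illustration, not the study's analysis.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(1)
        n = 258
        # Synthetic predictors loosely mirroring those named in the abstract.
        v38 = rng.gamma(shape=2.0, scale=1.0, size=n)        # rectal V38 in cm^3 (simulated)
        anticoag = rng.integers(0, 2, size=n)                # anticoagulant use (simulated)
        hemorrhoids = rng.integers(0, 2, size=n)             # hemorrhoids present (simulated)
        risk = 0.05 + 0.15 * (v38 >= 2.0) + 0.10 * anticoag + 0.05 * hemorrhoids
        hh = rng.random(n) < risk                            # simulated late-bleeding outcome

        X = np.column_stack([v38, anticoag, hemorrhoids])
        tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=20).fit(X, hh)
        print(export_text(tree, feature_names=["V38", "anticoagulants", "hemorrhoids"]))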

  11. Language, Mind, Practice: Families of Recursive Thinking in Human Reasoning

    ERIC Educational Resources Information Center

    Josephson, Marika

    2011-01-01

    In 2002, Chomsky, Hauser, and Fitch asserted that recursion may be the one aspect of the human language faculty that makes human language unique in the narrow sense--unique to language and unique to human beings. They also argue somewhat more quietly (as do Pinker and Jackendoff 2005) that recursion may be possible outside of language: navigation,…

  12. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…

  13. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
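
    For orientation, the underlying combinatorial problem can be stated classically in a few lines: split a set of numbers into two subsets whose sums are as close as possible. The brute-force sketch below is a baseline for intuition only and has nothing quantum about it; the instance is arbitrary.

        from itertools import combinations

        def best_partition(numbers):
            # Exhaustive number partitioning: minimize |sum(A) - sum(B)| over all
            # two-way splits; the search space grows exponentially with the set size.
            total = sum(numbers)
            best = (total, ())
            for k in range(len(numbers) // 2 + 1):
                for subset in combinations(numbers, k):
                    diff = abs(total - 2 * sum(subset))
                    if diff < best[0]:
                        best = (diff, subset)
            return best

        print(best_partition([8, 7, 6, 5, 4]))   # -> (0, (8, 7)), since 8 + 7 == 6 + 5 + 4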

  14. Analytical prediction of the interior noise for cylindrical models of aircraft fuselages for prescribed exterior noise fields. Phase 2: Models for sidewall trim, stiffened structures and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.

    1982-01-01

    An airplane interior noise prediction model is developed to determine the important parameters associated with sound transmission into the interiors of airplanes, and to identify appropriate noise control methods. Models for stiffened structures, and cabin acoustics with floor partition are developed. Validation studies are undertaken using three test articles: a ring stringer stiffened cylinder, an unstiffened cylinder with floor partition, and ring stringer stiffened cylinder with floor partition and sidewall trim. The noise reductions of the three test articles are computed using the theoretical models and compared to measured values. A statistical analysis of the comparison data indicates that there is no bias in the predictions although a substantial random error exists so that a discrepancy of more than five or six dB can be expected for about one out of three predictions.

  15. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  16. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  17. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  18. Syntactic Recursion Facilitates and Working Memory Predicts Recursive Theory of Mind

    PubMed Central

    Arslan, Burcu; Hohenberger, Annette; Verbrugge, Rineke

    2017-01-01

    In this study, we focus on the possible roles of second-order syntactic recursion and working memory in terms of simple and complex span tasks in the development of second-order false belief reasoning. We tested 89 Turkish children in two age groups, one younger (4;6–6;5 years) and one older (6;7–8;10 years). Although second-order syntactic recursion is significantly correlated with the second-order false belief task, results of ordinal logistic regressions revealed that the main predictor of second-order false belief reasoning is complex working memory span. Unlike simple working memory and second-order syntactic recursion tasks, the complex working memory task required processing information serially with additional reasoning demands that require complex working memory strategies. Based on our results, we propose that children’s second-order theory of mind develops when they have efficient reasoning rules to process embedded beliefs serially, thus overcoming a possible serial processing bottleneck. PMID:28072823

  19. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
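
    The survival-specific peeling criteria (hazard ratio, log-rank) are not reproduced here; the sketch below only illustrates the generic recursive peeling idea in the style of PRIM bump hunting, trimming a fixed quantile slice off one box edge per step to maximize the mean of an uncensored response. The parameter names and defaults are assumptions, not the authors' Patient Recursive Survival Peeling.

        import numpy as np

        def peel(X, y, alpha=0.1, min_support=0.05):
            # Greedy recursive peeling: at each step remove the alpha-quantile slice,
            # over all variables and both sides, that most increases mean(y) in the box.
            inside = np.ones(len(y), dtype=bool)
            box = [(-np.inf, np.inf) for _ in range(X.shape[1])]
            while inside.mean() > min_support:
                best_gain, best_move = 0.0, None
                for j in range(X.shape[1]):
                    xj = X[inside, j]
                    for side, cut in (("low", np.quantile(xj, alpha)),
                                      ("high", np.quantile(xj, 1 - alpha))):
                        keep = inside & ((X[:, j] > cut) if side == "low" else (X[:, j] < cut))
                        if keep.sum() == 0:
                            continue
                        gain = y[keep].mean() - y[inside].mean()
                        if gain > best_gain:
                            best_gain, best_move = gain, (j, side, cut, keep)
                if best_move is None:
                    break
                j, side, cut, inside = best_move
                box[j] = (cut, box[j][1]) if side == "low" else (box[j][0], cut)
            return box, inside

        rng = np.random.default_rng(0)
        X = rng.random((500, 3)); y = (X[:, 0] > 0.8).astype(float) + rng.normal(0, 0.1, 500)
        box, in_box = peel(X, y)        # the box should close in on the X[:, 0] > 0.8 bump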

  20. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  1. Recursive formulas for determining perturbing accelerations in intermediate satellite motion

    NASA Astrophysics Data System (ADS)

    Stoianov, L.

    Recursive formulas for Legendre polynomials and associated Legendre functions are used to obtain recursive relationships for determining acceleration components which perturb intermediate satellite motion. The formulas are applicable in all cases when the perturbation force function is presented as a series in spherical functions (gravitational, tidal, thermal, geomagnetic, and other perturbations of intermediate motion). These formulas can be used to determine the order of perturbing accelerations.
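
    The acceleration recursions themselves are not given in the abstract, but they rest on the classical three-term (Bonnet) recurrence for the Legendre polynomials, sketched below; extending it to the associated Legendre functions follows the same pattern.

        def legendre(n_max, x):
            # Bonnet recurrence: (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x).
            # Returns [P_0(x), ..., P_{n_max}(x)].
            p = [1.0, x][:n_max + 1]
            for n in range(1, n_max):
                p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
            return p

        print(legendre(4, 0.5))   # P_2(0.5) = -0.125, P_3(0.5) = -0.4375, P_4(0.5) = -0.2890625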

  2. Staggered chiral random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  3. Random location of fuel treatments in wildland community interfaces: a percolation approach

    Treesearch

    Michael Bevers; Philip N. Omi; John G. Hof

    2004-01-01

    We explore the use of spatially correlated random treatments to reduce fuels in landscape patterns that appear somewhat natural while forming fully connected fuelbreaks between wildland forests and developed protection zones. From treatment zone maps partitioned into grids of hexagonal forest cells representing potential treatment sites, we selected cells to be treated...

  4. Application of recursive approaches to differential orbit correction of near Earth asteroids

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

    2016-10-01

    Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and recursive least squares filtering. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and the details of the program realization of the recursive least squares filter. The filter's algorithm was derived from the recursive least squares technique that is widely used in data processing applications (Simon, 2006). Using a recursive least squares filter makes it possible to process a new set of observational data without reprocessing data that have been processed before. A specific feature of this approach is that the number of observations in a data set may be variable. This feature makes the recursive least squares filter a more flexible approach compared to batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which assumes the state vector is updated with measurements at each epoch). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near Earth asteroids. The case of 2008 TC3 was studied. 2008 TC3 was discovered just before its impact with Earth. There are many closely spaced observations of 2008 TC3 in the interval between discovery and impact, which creates favorable conditions for the use of recursive approaches. Each approach has very similar precision in the case of 2008 TC3. At the same time, the recursive least squares approach has much higher performance. Thus, this approach is more favorable for orbit fitting of a celestial body detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197. References: O. Montenbruck and E. Gill, "Satellite Orbits, Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369. D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition. Hoboken, N.J.: Wiley-Interscience, 2006.
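
    The authors' orbit-determination code is not reproduced here; the sketch below shows the textbook recursive least squares update that such a filter builds on (cf. Simon, 2006), for a generic linear measurement model z = H x + v with measurement noise covariance R. The symbols and the toy example are generic assumptions, not the 2008 TC3 processing.

        import numpy as np

        def rls_update(x, P, H, z, R):
            # One recursive least squares step: fold in a new measurement without
            # reprocessing any earlier observations.
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # gain
            x = x + K @ (z - H @ x)                  # corrected estimate
            P = (np.eye(len(x)) - K @ H) @ P         # updated covariance
            return x, P

        # Toy example: recover a constant 2-vector from noisy scalar measurements.
        rng = np.random.default_rng(0)
        truth = np.array([1.0, -2.0])
        x, P = np.zeros(2), np.eye(2) * 1e3
        for _ in range(200):
            H = rng.normal(size=(1, 2))
            z = H @ truth + rng.normal(0.0, 0.1, size=1)
            x, P = rls_update(x, P, H, z, np.array([[0.01]]))
        print(x)                                      # converges toward [1, -2]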

  5. An observational study identifying obese subgroups among older adults at increased risk of mobility disability: do perceptions of the neighborhood environment matter?

    PubMed

    King, Abby C; Salvo, Deborah; Banda, Jorge A; Ahn, David K; Gill, Thomas M; Miller, Michael; Newman, Anne B; Fielding, Roger A; Siordia, Carlos; Moore, Spencer; Folta, Sara; Spring, Bonnie; Manini, Todd; Pahor, Marco

    2015-12-18

    Obesity is an increasingly prevalent condition among older adults, yet relatively little is known about how built environment variables may be associated with obesity in older age groups. This is particularly the case for more vulnerable older adults already showing functional limitations associated with subsequent disability. The Lifestyle Interventions and Independence for Elders (LIFE) trial dataset (n = 1600) was used to explore the associations between perceived built environment variables and baseline obesity levels. Age-stratified recursive partitioning methods were applied to identify distinct subgroups with varying obesity prevalence. Among participants aged 70-78 years, four distinct subgroups, defined by combinations of perceived environment and race-ethnicity variables, were identified. The subgroups with the lowest obesity prevalence (45.5-59.4%) consisted of participants who reported living in neighborhoods with higher residential density. Among participants aged 79-89 years, the subgroup (of three distinct subgroups identified) with the lowest obesity prevalence (19.4%) consisted of non-African American/Black participants who reported living in neighborhoods with friends or acquaintances similar in demographic characteristics to themselves. Overall support for the partitioned subgroupings was obtained using mixed model regression analysis. The results suggest that, in combination with race/ethnicity, features of the perceived neighborhood built and social environments differentiated distinct groups of vulnerable older adults from different age strata that differed in obesity prevalence. Pending further verification, the results may help to inform subsequent targeting of such subgroups for further investigation. Clinicaltrials.gov Identifier =  NCT01072500.

  6. Recursive random forest algorithm for constructing multilayered hierarchical gene regulatory networks that govern biological pathways.

    PubMed

    Deng, Wenping; Zhang, Kui; Busov, Victor; Wei, Hairong

    2017-01-01

    Present knowledge indicates that a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to calculate the importance values of all transcription factors (TFs) to this pathway gene recursively, with a portion (e.g. 1/10) of the least important TFs being excluded in each round of modeling; during this process, the importance values of all TFs to the pathway gene were updated and ranked until only one TF remained in the list. After that, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine the TF retention for the regulatory layer immediately above the pathway layer. The acquired TFs at the secondary layer were then set to be the new bottom layer to infer the next upper layer, and this process was repeated until a ML-hGRN with the expected number of layers was obtained. BWERF improved the accuracy of constructing ML-hGRNs because it used backward elimination to exclude the noise genes and aggregated the individual importance values for determining TF retention. We validated BWERF by using it to construct ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we developed for constructing ML-hGRNs, BWERF can construct ML-hGRNs with significantly reduced edges, which enables biologists to choose the implicit edges for experimental validation.
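
    A minimal sketch of the backward-elimination idea described above, using scikit-learn's RandomForestRegressor: in each round the least important tenth of the remaining candidate regulators is dropped and the importances are refitted. The Gaussian-mixture retention step and the aggregation across pathway genes are omitted, and the data and names are assumptions, not the published BWERF implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def backward_elimination(X, y, names, drop_frac=0.1, seed=0):
            # Recursively refit a random forest, dropping the least important
            # drop_frac of remaining predictors each round; a higher round number
            # in the returned dict means the predictor survived longer.
            remaining = list(range(X.shape[1]))
            eliminated_in_round, rnd = {}, 0
            while len(remaining) > 1:
                rf = RandomForestRegressor(n_estimators=200, random_state=seed)
                rf.fit(X[:, remaining], y)
                order = np.argsort(rf.feature_importances_)      # least important first
                n_drop = max(1, int(drop_frac * len(remaining)))
                for idx in order[:n_drop]:
                    eliminated_in_round[names[remaining[idx]]] = rnd
                remaining = [remaining[i] for i in order[n_drop:]]
                rnd += 1
            eliminated_in_round[names[remaining[0]]] = rnd
            return eliminated_in_round

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 8)); y = 3 * X[:, 2] - 2 * X[:, 5] + rng.normal(size=100)
        print(backward_elimination(X, y, [f"TF{i}" for i in range(8)]))   # TF2 and TF5 last out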

  7. Recursive Fact-finding: A Streaming Approach to Truth Estimation in Crowdsourcing Applications

    DTIC Science & Technology

    2013-07-01

    Data are reported over the course of the campaign, lending themselves to the abstraction of a data stream arriving from the community of sources. [Figure 4: Recursive EM algorithm convergence.] Related work includes social sensing, also referred to as human-centric sensing, and review systems in which different sources offer reviews on products (or brands, companies) they have experienced and customers are affected by those reviews.

  8. Recursive computation of mutual potential between two polyhedra

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi; Scheeres, Daniel J.

    2013-11-01

    Recursive computation of mutual potential, force, and torque between two polyhedra is studied. Based on formulations by Werner and Scheeres (Celest Mech Dyn Astron 91:337-349, 2005) and Fahnestock and Scheeres (Celest Mech Dyn Astron 96:317-339, 2006) who applied the Legendre polynomial expansion to gravity interactions and expressed each order term by a shape-dependent part and a shape-independent part, this paper generalizes the computation of each order term, giving recursive relations of the shape-dependent part. To consider the potential, force, and torque, we introduce three tensors. This method is applicable to any multi-body systems. Finally, we implement this recursive computation to simulate the dynamics of a two rigid-body system that consists of two equal-sized parallelepipeds.

  9. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  10. Regional variation in the hierarchical partitioning of diversity in coral-dwelling fishes.

    PubMed

    Belmaker, Jonathan; Ziv, Yaron; Shashar, Nadav; Connolly, Sean R

    2008-10-01

    The size of the regional species pool may influence local patterns of diversity. However, it is unclear whether certain spatial scales are less sensitive to regional influences than others. Additive partitioning was used to separate coral-dwelling fish diversity into its alpha and beta components, at multiple scales, in several regions across the Indo-Pacific. We then examined how the relative contribution of these components changes with increased regional diversity. By employing specific random-placement null models, we overcome methodological problems with local-regional regressions. We show that, although alpha and beta diversities within each region are consistently different from random-placement null models, the increase in beta diversities among regions was similar to that predicted once heterogeneity in coral habitat was accounted for. In contrast, alpha diversity within single coral heads was limited and increased less than predicted by the null models. This was correlated with increased intraspecific aggregation in more diverse regions and is consistent with ecological limitations on the number of coexisting species at the local scale. These results suggest that, apart from very small spatial scales, variation in the partitioning of fish diversity along regional species richness gradients is driven overwhelmingly by the corresponding gradients in coral assemblage structure.

  11. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  12. Harnessing the Bethe free energy†

    PubMed Central

    Bapst, Victor

    2016-01-01

    A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the "replica symmetric cavity method" yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a "regularity lemma" for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178

  13. A combination of spatial and recursive temporal filtering for noise reduction when using region of interest (ROI) fluoroscopy for patient dose reduction in image guided vascular interventions with significant anatomical motion

    NASA Astrophysics Data System (ADS)

    Setlur Nagesh, S. V.; Khobragade, P.; Ionita, C.; Bednarek, D. R.; Rudin, S.

    2015-03-01

    Because x-ray based image-guided vascular interventions are minimally invasive, they are currently the preferred method of treating disorders such as stroke, arterial stenosis, and aneurysms; however, the x-ray exposure to the patient during long image-guided interventional procedures could cause harmful effects such as cancer in the long run and even tissue damage in the short term. ROI fluoroscopy reduces patient dose by differentially attenuating the incident x-rays outside the region-of-interest. To reduce the noise in the dose-reduced regions, recursive temporal filtering was previously demonstrated successfully for neurovascular interventions. However, in cardiac interventions, anatomical motion is significant and excessive recursive filtering could cause blur. In this work the effects of three noise-reduction schemes, including recursive temporal filtering, spatial mean filtering, and a combination of spatial and recursive temporal filtering, were investigated in a simulated ROI dose-reduced cardiac intervention. First, a model to simulate the aortic arch and its movement was built. A coronary stent was used to simulate a bioprosthetic valve used in TAVR procedures and was deployed under dose-reduced ROI fluoroscopy during the simulated heart motion. The images were then retrospectively processed for noise reduction in the periphery, using recursive temporal filtering, spatial filtering and a combination of both. Quantitative metrics for all three noise-reduction schemes are calculated and are presented as results. From these it can be concluded that, with significant anatomical motion, a combined spatial and recursive temporal filtering scheme is best suited for reducing the excess quantum noise in the periphery. This new noise-reduction technique in combination with ROI fluoroscopy has the potential for substantial patient-dose savings in cardiac interventions.
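
    The study's filter parameters are not given in the abstract; the sketch below shows a generic way to combine a spatial mean filter with a first-order recursive temporal filter on the dose-reduced periphery of each frame, leaving the full-dose ROI untouched. The temporal weight alpha, the kernel size and the ROI handling are arbitrary illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def filter_stream(frames, roi_mask, alpha=0.4, kernel=3):
            # Combined noise reduction outside the ROI: spatial mean filtering of each
            # frame followed by the recursive temporal filter
            #   out[t] = alpha * frame[t] + (1 - alpha) * out[t - 1].
            # Pixels where roi_mask is True are passed through unfiltered.
            prev = None
            for frame in frames:
                smoothed = uniform_filter(frame.astype(float), size=kernel)
                temporal = smoothed if prev is None else alpha * smoothed + (1 - alpha) * prev
                prev = temporal
                yield np.where(roi_mask, frame, temporal)

        # frames: iterable of 2-D arrays; roi_mask: boolean array, True inside the ROI.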

  14. Methods for assessing movement path recursion with application to African buffalo in South Africa

    USGS Publications Warehouse

    Bar-David, S.; Bar-David, I.; Cross, P.C.; Ryan, S.J.; Knechtel, C.U.; Getz, W.M.

    2009-01-01

    Recent developments of automated methods for monitoring animal movement, e.g., global positioning systems (GPS) technology, yield high-resolution spatiotemporal data. To gain insights into the processes creating movement patterns, we present two new techniques for extracting information from these data on repeated visits to a particular site or patch ("recursions"). Identification of such patches and quantification of recursion pathways, when combined with patch-related ecological data, should contribute to our understanding of the habitat requirements of large herbivores, of factors governing their space-use patterns, and their interactions with the ecosystem. We begin by presenting output from a simple spatial model that simulates movements of large-herbivore groups based on minimal parameters: resource availability and rates of resource recovery after a local depletion. We then present the details of our new techniques of analyses (recursion analysis and circle analysis) and apply them to data generated by our model, as well as two sets of empirical data on movements of African buffalo (Syncerus caffer): the first collected in Klaserie Private Nature Reserve and the second in Kruger National Park, South Africa. Our recursion analyses of model outputs provide us with a basis for inferring aspects of the processes governing the production of buffalo recursion patterns, particularly the potential influence of resource recovery rate. Although the focus of our simulations was a comparison of movement patterns produced by different resource recovery rates, we conclude our paper with a comprehensive discussion of how recursion analyses can be used when appropriate ecological data are available to elucidate various factors influencing movement. Inter alia, these include the various limiting and preferred resources, parasites, and topographical and landscape factors. © 2009 by the Ecological Society of America.

  15. Optimum random and age replacement policies for customer-demand multi-state system reliability under imperfect maintenance

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang

    2016-04-01

    This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.

  16. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  17. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    PubMed

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
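
    As a reminder of the recursion being extended, the sketch below implements the classic unidimensional, dichotomous Lord-Wingersky step: given each item's probability of a correct response at a fixed ability, the likelihood of every summed score is built up one item at a time. The hierarchical dimension-reduction machinery of the paper is not reproduced.

        def summed_score_likelihoods(p_correct):
            # Lord-Wingersky recursion for dichotomous items: p_correct[i] is
            # P(item i correct | theta); returns L with L[s] = P(summed score = s | theta).
            L = [1.0]                              # score distribution over zero items
            for p in p_correct:
                new = [0.0] * (len(L) + 1)
                for s, prob in enumerate(L):
                    new[s] += prob * (1.0 - p)     # item answered incorrectly
                    new[s + 1] += prob * p         # item answered correctly
                L = new
            return L

        print(summed_score_likelihoods([0.8, 0.6, 0.4]))   # probabilities of scores 0..3, summing to 1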

  18. Simple recursion relations for general field theories

    DOE PAGES

    Cheung, Clifford; Shen, Chia -Hsien; Trnka, Jaroslav

    2015-06-17

    On-shell methods offer an alternative definition of quantum field theory at tree-level, replacing Feynman diagrams with recursion relations and interaction vertices with a handful of seed scattering amplitudes. In this paper we determine the simplest recursion relations needed to construct a general four-dimensional quantum field theory of massless particles. For this purpose we define a covering space of recursion relations which naturally generalizes all existing constructions, including those of BCFW and Risager. The validity of each recursion relation hinges on the large momentum behavior of an n-point scattering amplitude under an m-line momentum shift, which we determine solely from dimensional analysis, Lorentz invariance, and locality. We show that all amplitudes in a renormalizable theory are 5-line constructible. Amplitudes are 3-line constructible if an external particle carries spin or if the scalars in the theory carry equal charge under a global or gauge symmetry. Remarkably, this implies the 3-line constructibility of all gauge theories with fermions and complex scalars in arbitrary representations, all supersymmetric theories, and the standard model. Moreover, all amplitudes in non-renormalizable theories without derivative interactions are constructible; with derivative interactions, a subset of amplitudes is constructible. We illustrate our results with examples from both renormalizable and non-renormalizable theories. In conclusion, our study demonstrates both the power and limitations of recursion relations as a self-contained formulation of quantum field theory.

  19. A simple approach to nonlinear estimation of physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.

  20. Overlapping communities detection based on spectral analysis of line graphs

    NASA Astrophysics Data System (ADS)

    Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan

    2018-05-01

    Communities in networks often overlap, with one vertex belonging to several clusters. Meanwhile, many networks show hierarchical structure, such that communities are recursively grouped into a hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization along with detecting the overlap of community structure. SAoLG applies spectral analysis to line graphs to unify the overlap and hierarchical structure of the communities. In order to avoid the limitations of absolute distances such as Euclidean distance, SAoLG employs angular distance to compute the similarity between vertices. Furthermore, we make a small improvement to partition density to evaluate the quality of community structure and use it to obtain more reasonable community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. The experimental results on one standard network and six real-world networks show that the SAoLG algorithm achieves higher modularity and more reasonable community number values than those generated by Ahn's algorithm, the classical CPM, and GN.

  1. Statistical properties and condensate fluctuation of attractive Bose gas with finite number of particles

    NASA Astrophysics Data System (ADS)

    Bera, Sangita; Lekala, Mantile Leslie; Chakrabarti, Barnali; Bhattacharyya, Satadal; Rampho, Gaotsiwe Joel

    2017-09-01

    We study the condensate fluctuation and several statistics of a weakly interacting attractive Bose gas of ⁷Li atoms in a harmonic trap. Using an exact recursion relation we calculate the canonical ensemble partition function and study the thermal evolution of the condensate. As the ⁷Li condensate is associated with collapse, the number of condensate atoms is truly finite, which facilitates study of the condensate in the mesoscopic region. The system being highly correlated, we utilize two-body correlated basis functions to obtain the many-body effective potential, which is then used to calculate the energy levels. Taking the van der Waals interaction as the interatomic interaction, we calculate several quantities such as the condensate fraction N, the root-mean-square fluctuation δn₀ and different orders of central moments. We observe the effect of finite size on the calculation of condensate fluctuations and the effect of attractive interaction relative to the noninteracting limit. We observe depletion of the condensate with increase in temperature. The calculated moments nicely exhibit the mesoscopic effect. The sharp fall in the root-mean-square fluctuation near the critical point signifies the possibility of a phase transition.
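
    The correlated-basis energy levels used in the study are not available here, but the exact recursion the abstract refers to can be illustrated in the non-interacting limit: the N-boson canonical partition function satisfies Z_N(beta) = (1/N) * sum_{k=1..N} Z_1(k*beta) * Z_{N-k}(beta). The sketch below evaluates it for an ideal gas in an isotropic harmonic trap; the trap frequency, particle number and temperature are arbitrary assumptions.

        import numpy as np

        def z1_harmonic(beta, hbar_omega=1.0, dim=3):
            # Single-particle partition function of an isotropic harmonic trap.
            x = np.exp(-beta * hbar_omega)
            return (np.sqrt(x) / (1.0 - x)) ** dim

        def canonical_Z(N, beta):
            # Exact bosonic recursion Z_N = (1/N) * sum_k Z_1(k*beta) * Z_{N-k}, with Z_0 = 1.
            Z = [1.0]
            for n in range(1, N + 1):
                Z.append(sum(z1_harmonic(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
            return Z

        Z = canonical_Z(100, beta=0.5)    # beta in units of 1/(hbar * omega)
        print(Z[100] / Z[99])             # ratios of the Z's enter occupation-number formulas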

  2. A fast ellipse extended target PHD filter using box-particle implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Yongquan; Ji, Hongbing; Hu, Qi

    2018-01-01

    This paper presents a box-particle implementation of the ellipse extended target probability hypothesis density (ET-PHD) filter, called the ellipse extended target box particle PHD (EET-BP-PHD) filter, where the extended targets are described as a Poisson model developed by Gilholm et al. and the term "box" is here equivalent to the term "interval" used in interval analysis. The proposed EET-BP-PHD filter is capable of dynamically tracking multiple ellipse extended targets and estimating the target states and the number of targets, in the presence of clutter measurements, false alarms and missed detections. To derive the PHD recursion of the EET-BP-PHD filter, a suitable measurement likelihood is defined for a given partitioning cell, and the main implementation steps are presented along with the necessary box approximations and manipulations. The limitations and capabilities of the proposed EET-BP-PHD filter are illustrated by simulation examples. The simulation results show that a box-particle implementation of the ET-PHD filter can avoid the high number of particles and reduce computational burden, compared to a particle implementation of that for extended target tracking.

  3. SH_c realization of minimal model CFT: triality, poset and Burge condition

    NASA Astrophysics Data System (ADS)

    Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.

    2015-11-01

    Recently an orthogonal basis of the W_N-algebra (AFLT basis) labeled by N-tuple Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis are summarized in the form of an algebra SH_c which is universal for any N. We show that it has an S_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies there exists partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH_c. Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.

  4. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  5. Recursive computer architecture for VLSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.

  6. Self-Avoiding Walks on the Random Lattice and the Random Hopping Model on a Cayley Tree

    NASA Astrophysics Data System (ADS)

    Kim, Yup

    Using a field theoretic method based on the replica trick, it is proved that the three-parameter renormalization group for an n-vector model with quenched randomness reduces to a two-parameter one in the limit n → 0, which corresponds to self-avoiding walks (SAWs). This is also shown by the explicit calculation of the renormalization group recursion relations to second order in ε. From this reduction we find that SAWs on the random lattice are in the same universality class as SAWs on the regular lattice. By analogy with the case of the n-vector model with cubic anisotropy in the limit n → 1, the fixed-point structure of the n-vector model with randomness is analyzed in the SAW limit, so that a physical interpretation of the unphysical fixed point is given. Corrections to the values of critical exponents of the unphysical fixed point published previously are also given. Next we formulate an integral equation and recursion relations for the configurationally averaged one-particle Green's function of the random hopping model on a Cayley tree of coordination number (σ + 1). This formalism is tested by applying it successfully to the nonrandom model. Using this scheme for 1 ≪ σ < ∞ we calculate the density of states of this model with a Gaussian distribution of hopping matrix elements in the range of energy E² > E_c², where E_c is a critical energy described below. The singularity in the Green's function which occurs at energy E_1^(0) for σ = ∞ is shifted to a complex energy E_1 (on the unphysical sheet of energy E) for small σ⁻¹. This calculation shows that the density of states is a smooth function of energy E around the critical energy E_c = Re E_1, in accord with Wegner's theorem. In this formulation the density of states has no sharp phase transition on the real axis of E because E_1 has developed an imaginary part. Using the Lifschitz argument, we calculate the density of states near the band edge for the model when the hopping matrix elements are governed by a bounded probability distribution. It is also shown within the dynamical-systems language that the density of states of the model with a bounded distribution never vanishes inside the band, and we suggest a theoretical mechanism for the formation of energy bands.

  7. Spatial coding-based approach for partitioning big spatial data in Hadoop

    NASA Astrophysics Data System (ADS)

    Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai

    2017-09-01

    Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, due to the skewed distribution of spatial data and the varying volume of spatial vector objects, it is a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we proposed a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach firstly compressed the whole big spatial data based on a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. The SIS was then employed to build a spatial partitioning matrix, which was finally used to split all spatial objects into different partitions in the cluster. Based on our approach, neighbouring spatial objects can be partitioned into the same block. At the same time, it can also minimize the data skew in the Hadoop distributed file system (HDFS). The presented approach is compared, in a case study, against random sampling based partitioning with three measurement standards, namely, spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it is also able to efficiently support any other distributed big spatial data systems.
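
    The paper's spatial coding matrix and sensing information set are not reproduced here; the sketch below only illustrates the general spatial-coding idea with a Morton (Z-order) key, which maps nearby points to nearby keys so that neighbouring objects tend to fall into the same partition. The grid resolution and the equal-size chunking rule are arbitrary assumptions, not the proposed Hadoop partitioner.

        def morton_key(x, y, bits=16):
            # Interleave the bits of the grid coordinates (x, y) into a Z-order key.
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i)
                key |= ((y >> i) & 1) << (2 * i + 1)
            return key

        def partition(points, n_partitions, bits=16):
            # Assign (x, y) points in [0, 1) x [0, 1) to partitions by sorting on the
            # Morton key and cutting the sorted order into equal-sized chunks.
            scale = (1 << bits) - 1
            keyed = sorted(points, key=lambda p: morton_key(int(p[0] * scale), int(p[1] * scale), bits))
            size = -(-len(keyed) // n_partitions)          # ceiling division
            return [keyed[i:i + size] for i in range(0, len(keyed), size)]

        parts = partition([(0.10, 0.20), (0.11, 0.19), (0.90, 0.80), (0.88, 0.82)], 2)
        # The two nearby pairs of points end up in the same partitions.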

  8. dPCR: A Technology Review

    PubMed Central

    Quan, Phenix-Lan; Sauzade, Martin

    2018-01-01

    Digital Polymerase Chain Reaction (dPCR) is a novel method for the absolute quantification of target nucleic acids. Quantification by dPCR hinges on the fact that the random distribution of molecules in many partitions follows a Poisson distribution. Each partition acts as an individual PCR microreactor and partitions containing amplified target sequences are detected by fluorescence. The proportion of PCR-positive partitions suffices to determine the concentration of the target sequence without a need for calibration. Advances in microfluidics enabled the current revolution of digital quantification by providing efficient partitioning methods. In this review, we compare the fundamental concepts behind the quantification of nucleic acids by dPCR and quantitative real-time PCR (qPCR). We detail the underlying statistics of dPCR and explain how it defines its precision and performance metrics. We review the different microfluidic digital PCR formats, present their underlying physical principles, and analyze the technological evolution of dPCR platforms. We present the novel multiplexing strategies enabled by dPCR and examine how isothermal amplification could be an alternative to PCR in digital assays. Finally, we determine whether the theoretical advantages of dPCR over qPCR hold true by perusing studies that directly compare assays implemented with both methods. PMID:29677144
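
    The Poisson relationship the review describes can be written out explicitly: if a fraction p of the n partitions is positive, the mean number of copies per partition is lambda = -ln(1 - p), and the concentration is lambda divided by the partition volume. The sketch below adds a delta-method 95% confidence interval on lambda from the binomial uncertainty in p; the droplet count and volume in the example are arbitrary, not values from any particular platform.

        import math

        def dpcr_concentration(n_total, n_positive, partition_volume_ul):
            # Absolute quantification from digital PCR counts via Poisson statistics.
            p = n_positive / n_total
            lam = -math.log(1.0 - p)                        # mean copies per partition
            se = math.sqrt(p / (n_total * (1.0 - p)))       # delta-method SE of lam
            conc = lam / partition_volume_ul                # copies per microliter
            ci = ((lam - 1.96 * se) / partition_volume_ul,
                  (lam + 1.96 * se) / partition_volume_ul)
            return conc, ci

        # Hypothetical run: 20,000 partitions of 0.85 nL each, 4,200 of them positive.
        print(dpcr_concentration(20000, 4200, partition_volume_ul=0.85e-3))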

  9. Probabilistic hazard assessment for skin sensitization potency by dose–response modeling using feature elimination instead of quantitative structure–activity relationships

    PubMed Central

    McKim, James M.; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2016-01-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose–response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems in information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards would be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and combined a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals’ potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonous connection between local lymph node assay and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although balanced accuracy improvement may seem small, this obscures the actual improvement in misclassifications as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers as non-sensitizer) on all data sets. PMID:26046447

  10. Probabilistic hazard assessment for skin sensitization potency by dose-response modeling using feature elimination instead of quantitative structure-activity relationships.

    PubMed

    Luechtefeld, Thomas; Maertens, Alexandra; McKim, James M; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2015-11-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems of information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and applied a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed the data with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonic relationship between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the improvement in balanced accuracy may seem small, this obscures the actual improvement in misclassifications, as the dose-informed hidden Markov model strongly reduced "false negatives" (i.e. extreme sensitizers classified as non-sensitizers) on all data sets. Copyright © 2015 John Wiley & Sons, Ltd.
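
    Both records above describe the same study, in which a recursive feature elimination step ranks in silico, in chemico and in vitro features. The snippet below is only a generic sketch of that elimination loop, using a random forest's importance scores on synthetic data; the feature matrix, labels and stopping size are hypothetical, and the dose-informed hidden Markov stage is not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def recursive_feature_elimination(X, y, n_keep, seed=0):
            """Drop the least important feature one at a time until n_keep remain."""
            remaining = list(range(X.shape[1]))
            while len(remaining) > n_keep:
                rf = RandomForestClassifier(n_estimators=200, random_state=seed)
                rf.fit(X[:, remaining], y)
                worst = int(np.argmin(rf.feature_importances_))
                remaining.pop(worst)            # eliminate the weakest assay/descriptor
            return remaining

        # Synthetic stand-in for an ITS data set: 120 chemicals, 12 assay features.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 12))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=120) > 0).astype(int)
        print(recursive_feature_elimination(X, y, n_keep=4))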

  11. A Synthetic Recursive “+1” Pathway for Carbon Chain Elongation

    PubMed Central

    Marcheschi, Ryan J.; Li, Han; Zhang, Kechun; Noey, Elizabeth L.; Kim, Seonah; Chaubey, Asha; Houk, K. N.; Liao, James C.

    2013-01-01

    Nature uses four methods of carbon chain elongation for the production of 2-ketoacids, fatty acids, polyketides, and isoprenoids. Using a combination of quantum mechanical (QM) modeling, protein–substrate modeling, and protein and metabolic engineering, we have engineered the enzymes involved in leucine biosynthesis for use as a synthetic “+1” recursive metabolic pathway to extend the carbon chain of 2-ketoacids. This modified pathway preferentially selects longer-chain substrates for catalysis, as compared to the non-recursive natural pathway, and can recursively catalyze five elongation cycles to synthesize bulk chemicals, such as 1-heptanol, 1-octanol, and phenylpropanol directly from glucose. The “+1” chemistry is a valuable metabolic tool in addition to the “+5” chemistry and “+2” chemistry for the biosynthesis of isoprenoids, fatty acids, or polyketides. PMID:22242720

  12. A recursive field-normalized bibliometric performance indicator: an application to the field of library and information science.

    PubMed

    Waltman, Ludo; Yan, Erjia; van Eck, Nees Jan

    2011-10-01

    Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (like in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.
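
    To give a feel for how field normalization and recursive citation weighing can be combined, the toy fixed-point iteration below weights each incoming citation by the citing paper's current score and then rescales scores so that the mean within each field equals one. It is a simplified, hypothetical formulation for intuition only, not the authors' recursive mean normalized citation score indicator.

        import numpy as np

        def recursive_normalized_scores(cites, field, n_iter=50):
            """cites[i, j] = 1 if paper i cites paper j; field[i] = field label of paper i.

            Each iteration weights incoming citations by the citing paper's current
            score and then renormalizes so that the mean score within each field is 1.
            """
            n = cites.shape[0]
            score = np.ones(n)
            for _ in range(n_iter):
                weighted = cites.T @ score                # recursively weighted citation counts
                for f in set(field):
                    mask = np.array(field) == f
                    mean = weighted[mask].mean()
                    score[mask] = weighted[mask] / mean if mean > 0 else 1.0
            return score

        # Toy example: papers 0 and 1 (field A) cite paper 2; paper 3 cites paper 0.
        C = np.array([[0, 0, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 0],
                      [1, 0, 0, 0]])
        print(recursive_normalized_scores(C, ["A", "A", "B", "B"]).round(2))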

  13. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Milman, M.

    1988-01-01

    A powerful new spatial operator algebra for modeling, control, and trajectory design of manipulators is discussed along with its implementation in the Ada programming language. Applications of this algebra to robotics include an operator representation of the manipulator Jacobian matrix; the robot dynamical equations formulated in terms of the spatial algebra, showing the complete equivalence of this formulation to the recursive Newton-Euler formulation of robot dynamics; the operator factorization and inversion of the manipulator mass matrix, which immediately results in O(N) recursive forward dynamics algorithms; the joint accelerations of a manipulator due to a tip contact force; the recursive computation of the equivalent mass matrix as seen at the tip of a manipulator; and recursive forward dynamics of a closed chain system. Finally, additional applications and current research involving the use of the spatial operator algebra are discussed in general terms.

  14. Mining IP to Domain Name Interactions to Detect DNS Flood Attacks on Recursive DNS Servers.

    PubMed

    Alonso, Roberto; Monroy, Raúl; Trejo, Luis A

    2016-08-17

    The Domain Name System (DNS) is a critical infrastructure of any network, and, not surprisingly, a common target of cybercrime. There are numerous works that analyse higher level DNS traffic to detect anomalies in the DNS or any other network service. By contrast, few efforts have been made to study and protect the recursive DNS level. In this paper, we introduce a novel abstraction of recursive DNS traffic to detect a flooding attack, a kind of Distributed Denial of Service (DDoS). The crux of our abstraction lies in a simple observation: recursive DNS queries, from IP addresses to domain names, form social groups; hence, a DDoS attack should result in drastic changes in the DNS social structure. We have built an anomaly-based detection mechanism which, given a time window of DNS usage, makes use of features that attempt to capture the DNS social structure, including a heuristic that estimates group composition. Our detection mechanism has been successfully validated in a simulated and controlled setting, and with it the suitability of our abstraction for detecting flooding attacks. To the best of our knowledge, this is the first work to successfully use this abstraction to detect these kinds of attacks at the recursive level. Before concluding the paper, we motivate further research directions based on this new abstraction, and we have designed and tested two additional experiments which show promising results for detecting other types of anomalies in recursive DNS servers.

  15. Mining IP to Domain Name Interactions to Detect DNS Flood Attacks on Recursive DNS Servers

    PubMed Central

    Alonso, Roberto; Monroy, Raúl; Trejo, Luis A.

    2016-01-01

    The Domain Name System (DNS) is a critical infrastructure of any network, and, not surprisingly, a common target of cybercrime. There are numerous works that analyse higher level DNS traffic to detect anomalies in the DNS or any other network service. By contrast, few efforts have been made to study and protect the recursive DNS level. In this paper, we introduce a novel abstraction of recursive DNS traffic to detect a flooding attack, a kind of Distributed Denial of Service (DDoS). The crux of our abstraction lies in a simple observation: recursive DNS queries, from IP addresses to domain names, form social groups; hence, a DDoS attack should result in drastic changes in the DNS social structure. We have built an anomaly-based detection mechanism which, given a time window of DNS usage, makes use of features that attempt to capture the DNS social structure, including a heuristic that estimates group composition. Our detection mechanism has been successfully validated in a simulated and controlled setting, and with it the suitability of our abstraction for detecting flooding attacks. To the best of our knowledge, this is the first work to successfully use this abstraction to detect these kinds of attacks at the recursive level. Before concluding the paper, we motivate further research directions based on this new abstraction, and we have designed and tested two additional experiments which show promising results for detecting other types of anomalies in recursive DNS servers. PMID:27548169

  16. WE-E-17A-06: Assessing the Scale of Tumor Heterogeneity by Complete Hierarchical Segmentation On MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensheimer, M; Trister, A; Ermoian, R

    2014-06-15

    Purpose: In many cancers, intratumoral heterogeneity exists in vascular and genetic structure. We developed an algorithm which uses clinical imaging to interrogate different scales of heterogeneity. We hypothesize that heterogeneity of perfusion at large distance scales may correlate with propensity for disease recurrence. We applied the algorithm to initial diagnosis MRI of rhabdomyosarcoma patients to predict recurrence. Methods: The Spatial Heterogeneity Analysis by Recursive Partitioning (SHARP) algorithm recursively segments the tumor image. The tumor is repeatedly subdivided, with each dividing line chosen to maximize signal intensity difference between the two subregions. This process continues to the voxel level, producing segments at multiple scales. Heterogeneity is measured by comparing signal intensity histograms between each segmented region and the adjacent region. We measured the scales of contrast enhancement heterogeneity of the primary tumor in 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival (RFS). To compare with existing methods, fractal and Haralick texture features were also calculated. Results: The complete segmentation produced by SHARP allows extraction of diverse features, including the amount of heterogeneity at various distance scales, the area of the tumor with the most heterogeneity at each scale, and for a given point in the tumor, the heterogeneity at different scales. 10/18 rhabdomyosarcoma patients suffered disease recurrence. On contrast-enhanced MRI, larger scale of maximum signal intensity heterogeneity, relative to tumor diameter, predicted for shorter RFS (p=0.05). Fractal dimension, fractal fit, and three Haralick features did not predict RFS (p=0.09-0.90). Conclusion: SHARP produces an automatic segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. In rhabdomyosarcoma, RFS was shorter when the primary tumor exhibited larger scale of heterogeneity on contrast-enhanced MRI. If validated on a larger dataset, this imaging biomarker could be useful to help personalize treatment.
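
    The recursive splitting step of SHARP can be illustrated on a one-dimensional intensity profile. The sketch below is a deliberately simplified stand-in: cuts are single indices chosen to maximize the difference in mean intensity between the two resulting segments, recursion proceeds down to single elements, and the segment length at each split serves as a crude distance scale. It does not reproduce the 2D/3D segmentation or the histogram-based heterogeneity measure described in the abstract.

        import numpy as np

        def recursive_split(signal, start=0, splits=None):
            """Recursively bisect a 1D profile at the cut that maximizes the
            difference in mean intensity between the two sides."""
            if splits is None:
                splits = []
            n = len(signal)
            if n < 2:
                return splits
            cuts = np.arange(1, n)
            gaps = [abs(signal[:c].mean() - signal[c:].mean()) for c in cuts]
            c = int(cuts[int(np.argmax(gaps))])
            splits.append({"position": start + c, "scale": n, "contrast": max(gaps)})
            recursive_split(signal[:c], start, splits)
            recursive_split(signal[c:], start + c, splits)
            return splits

        # Hypothetical enhancement profile with heterogeneity at a large scale.
        rng = np.random.default_rng(0)
        profile = np.concatenate([np.full(8, 1.0), np.full(8, 3.0)]) + 0.1 * rng.normal(size=16)
        for s in recursive_split(profile)[:3]:
            print(s)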

  17. Non-random nectar unloading interactions between foragers and their receivers in the honeybee hive

    NASA Astrophysics Data System (ADS)

    Goyret, Joaquín; Farina, Walter M.

    2005-09-01

    Nectar acquisition in the honeybee Apis mellifera is a partitioned task in which foragers gather nectar and bring it to the hive, where nest mates unload the collected food via trophallaxis (i.e. mouth-to-mouth transfer) for further storage. Because forager mates exploit different feeding places simultaneously, this study addresses the question of whether nectar unloading interactions between foragers and hive-bees are established randomly, as is commonly assumed. Two groups of foragers were trained to exploit a differently scented food source for 5 days. We recorded their trophallaxes with hive-mates, marking the latter according to the forager group they were unloading. We found non-random probabilities for the occurrence of trophallaxes between experimental foragers and hive-bees; instead, trophallactic interactions were more likely to involve groups of individuals that had formerly interacted orally. We propose that olfactory cues present in the transferred nectar promoted the observed bias, and we discuss this bias in the context of the organization of nectar acquisition: a partitioned task carried out in a decentralized insect society.

  18. Comparative study of feature selection with ensemble learning using SOM variants

    NASA Astrophysics Data System (ADS)

    Filali, Ameni; Jlassi, Chiraz; Arous, Najet

    2017-03-01

    Ensemble learning has improved stability and clustering accuracy, but its runtime prevents it from scaling up to real-world applications. This study addresses the problem of selecting, for every cluster, a subset of the most pertinent features from a dataset. The proposed method is an extension of the Random Forests approach, using self-organizing map (SOM) variants on unlabeled data, that estimates out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. The approach aims at dimensionality reduction, visualization and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that the proposed method (RFS) can lead to significant improvements in clustering accuracy over several state-of-the-art unsupervised methods with a very limited subset of features. The approach also shows promise for very broad domains.

  19. Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings

    NASA Astrophysics Data System (ADS)

    Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső

    We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
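
    The cluster-tour-concatenate structure of the heuristic can be sketched compactly. The toy code below substitutes a fixed regular grid for the random dyadic tiling and a nearest-neighbour heuristic for both the coarse and the within-cluster tours; it conveys the hierarchy of the method, not its reported 8-13% optimality gap.

        import numpy as np

        def nn_tour(points):
            """Greedy nearest-neighbour tour over a small set of points."""
            pts = np.asarray(points)
            unvisited = list(range(len(pts)))
            tour = [unvisited.pop(0)]
            while unvisited:
                last = pts[tour[-1]]
                nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
                unvisited.remove(nxt)
                tour.append(nxt)
            return tour

        def hierarchical_tsp(points, grid=4):
            """Cluster points by grid cell, tour the barycenters, then tour each cell."""
            points = np.asarray(points)
            cells = {}
            for idx, p in enumerate(points):
                key = (int(p[0] * grid), int(p[1] * grid))   # tile containing the point
                cells.setdefault(key, []).append(idx)
            keys = list(cells)
            centers = np.array([points[cells[k]].mean(axis=0) for k in keys])
            route = []
            for ci in nn_tour(centers):                      # coarse tour over barycenters
                members = cells[keys[ci]]
                local = nn_tour(points[members])             # fine tour inside the tile
                route.extend(members[i] for i in local)
            return route

        pts = np.random.default_rng(2).random((200, 2))      # i.i.d. points in the unit square
        tour = hierarchical_tsp(pts)
        length = sum(np.linalg.norm(pts[tour[i]] - pts[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))
        print(len(tour), round(length, 2))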

  20. Genuine multipartite entanglement of symmetric Gaussian states: Strong monogamy, unitary localization, scaling behavior, and molecular sharing structure

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Illuminati, Fabrizio

    2008-10-01

    We investigate the structural aspects of genuine multipartite entanglement in Gaussian states of continuous variable systems. Generalizing the results of Adesso and Illuminati [Phys. Rev. Lett. 99, 150501 (2007)], we analyze whether the entanglement shared by blocks of modes distributes according to a strong monogamy law. This property, once established, allows us to quantify the genuine N-partite entanglement not encoded into 2,…,K,…,(N-1)-partite quantum correlations. Strong monogamy is numerically verified, and the explicit expression of the measure of residual genuine multipartite entanglement is analytically derived, by a recursive formula, for a subclass of Gaussian states. These are fully symmetric (permutation-invariant) states that are multipartitioned into blocks, each consisting of an arbitrarily assigned number of modes. We compute the genuine multipartite entanglement shared by the blocks of modes and investigate its scaling properties with the number and size of the blocks, the total number of modes, the global mixedness of the state, and the squeezed resources needed for state engineering. To achieve the exact computation of the block entanglement, we introduce and prove a general result of symplectic analysis: Correlations among K blocks in N-mode multisymmetric and multipartite Gaussian states, which are locally invariant under permutation of modes within each block, can be transformed by a local (with respect to the partition) unitary operation into correlations shared by K single modes, one per block, in effective nonsymmetric states where N-K modes are completely uncorrelated. Due to this theorem, the above results, such as the derivation of the explicit expression for the residual multipartite entanglement, its nonnegativity, and its scaling properties, extend to the subclass of non-symmetric Gaussian states that are obtained by the unitary localization of the multipartite entanglement of symmetric states. These findings provide strong numerical evidence that the distributed Gaussian entanglement is strongly monogamous under and possibly beyond specific symmetry constraints, and that the residual continuous-variable tangle is a proper measure of genuine multipartite entanglement for permutation-invariant Gaussian states under any multipartition of the modes.

  1. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
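
    For orientation, here is a minimal Gauss-Newton style inverse kinematics loop for a planar two-link arm with assumed link lengths. It mirrors the second-order search described above in that the Hessian is approximated by J^T J, but it is a generic textbook sketch: the recursive Kalman filtering and Bryson-Frazier smoothing machinery of the paper is not implemented.

        import numpy as np

        L1, L2 = 1.0, 0.8                                  # assumed link lengths

        def tip(q):
            """Forward kinematics: tip position of a planar 2-link arm."""
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            """Partial derivatives of the tip position with respect to the joint angles."""
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def solve_ik(target, q0=(0.2, 0.4), damping=1e-3, iters=100):
            """Minimize the squared tip-position error with damped Gauss-Newton steps."""
            q = np.array(q0, dtype=float)
            for _ in range(iters):
                e = target - tip(q)                        # position error
                J = jacobian(q)
                H = J.T @ J + damping * np.eye(2)          # Gauss-Markov Hessian approximation
                q = q + np.linalg.solve(H, J.T @ e)        # second-order update
            return q

        q = solve_ik(np.array([1.2, 0.9]))
        print(np.round(tip(q), 3))                         # should be close to the target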

  2. Recursion equations in predicting band width under gradient elution.

    PubMed

    Liang, Heng; Liu, Ying

    2004-06-18

    The evolution of a solute zone under gradient elution is a typical non-linear continuity-equation problem, since the local diffusion coefficient and local migration velocity of the mass cells of solute zones are functions of position and time due to the space- and time-variable mobile phase composition. In this paper, based on mesoscopic approaches (a Lagrangian description, continuity theory and the local equilibrium assumption), the evolution of solute zones in space- and time-dependent fields is described by the iterative addition of the local probability densities of the mass cells of solute zones. Furthermore, on the macroscopic level, recursion equations are proposed to simulate zone migration and spreading in reversed-phase high-performance liquid chromatography (RP-HPLC) by directly relating the local retention factor and local diffusion coefficient to the local mobile phase concentration. This approach differs entirely from traditional plate-concept theories with an Eulerian description, since the band width recursion equation is essentially the accumulation of the local diffusion coefficients of the solute zones over discrete-time slices. The recursion equations and literature equations were applied to the same experimental RP-HPLC data, and the comparison shows that the recursion equations accurately predict band width under gradient elution.
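
    A schematic discrete-time version of such a recursion might accumulate zone position and variance slice by slice as below. The local retention factor and diffusion coefficient are hypothetical functions of mobile phase composition, chosen only to illustrate the accumulation idea; this is not the authors' calibrated RP-HPLC model.

        def simulate_zone(phi_of, u0=1.0, dt=0.01, n_steps=5000):
            """March a solute zone through discrete time slices.

            phi_of(z, t) returns the local mobile-phase composition; the retention
            factor k and diffusion coefficient D below are hypothetical monotone
            functions of that composition, chosen only to illustrate the recursion
              z_{n+1}      = z_n      + u_local * dt
              sigma2_{n+1} = sigma2_n + 2 * D_local * dt
            """
            z, sigma2, t = 0.0, 0.0, 0.0
            for _ in range(n_steps):
                phi = phi_of(z, t)
                k = 10.0 * (1.0 - phi)          # hypothetical local retention factor
                D = 1e-3 * (1.0 + 2.0 * phi)    # hypothetical local diffusion coefficient
                u_local = u0 / (1.0 + k)        # local zone migration velocity
                z += u_local * dt
                sigma2 += 2.0 * D * dt
                t += dt
            return z, sigma2 ** 0.5

        # Linear gradient: composition rises with time, independent of position.
        position, band_width = simulate_zone(lambda z, t: min(1.0, 0.2 + 0.015 * t))
        print(round(position, 3), round(band_width, 4))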

  3. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jing; Li, Yuan-Yuan

    2012-03-02

    Highlights: • Proper dataset partition can improve the prediction of deleterious nsSNPs. • Partition according to the original residue type at the nsSNP site is a good criterion. • A similar strategy should also prove promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allow us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using a support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on the two different partition criteria, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, when the dataset was instead randomly divided into 20 subsets, the corresponding accuracy was only 73.2%. Our results demonstrate that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, significantly improves the performance of the trained classifiers, which should be valuable in developing better tools for predicting the disease association of nsSNPs.
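
    The partitioning strategy (one classifier per original residue type at the nsSNP site) can be sketched generically, assuming a numeric feature matrix and a column giving the original amino acid. The data, residue alphabet and SVM settings below are placeholders, not those of the study.

        import numpy as np
        from sklearn.svm import SVC

        def train_partitioned_svms(X, y, residue, **svm_kwargs):
            """Train one SVM per original amino-acid type at the nsSNP site."""
            models = {}
            for aa in np.unique(residue):
                mask = residue == aa
                models[aa] = SVC(**svm_kwargs).fit(X[mask], y[mask])
            return models

        def predict_partitioned(models, X, residue):
            """Dispatch each example to the classifier of its residue subset."""
            pred = np.empty(len(X), dtype=int)
            for aa, model in models.items():
                mask = residue == aa
                if mask.any():
                    pred[mask] = model.predict(X[mask])
            return pred

        # Hypothetical toy data: 300 nsSNPs, 5 features, 3 residue types for brevity.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 5))
        residue = rng.choice(list("AGL"), size=300)
        y = (X[:, 0] + (residue == "G") * 0.8 + rng.normal(scale=0.3, size=300) > 0).astype(int)
        models = train_partitioned_svms(X, y, residue, kernel="rbf", C=1.0)
        print((predict_partitioned(models, X, residue) == y).mean())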

  4. AN INTEGRATED APPROACH TO CHARACTERIZING BYPASSED OIL IN HETEROGENEOUS AND FRACTURED RESERVOIRS USING PARTITIONING TRACERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhil Datta-Gupta

    2003-08-01

    We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have adopted an integrated approach whereby we combine data from multiple sources to minimize the uncertainty and non-uniqueness in the interpreted results. For partitioning interwell tracer tests, these are primarily the distribution of reservoir permeability and oil saturation distribution. A novel approach to multiscale data integration using Markov Random Fields (MRF) has been developed to integrate static data sources from the reservoir such as core, well log and 3-D seismic data. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related with transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, the behavior of partitioning tracer tests in fractured reservoirs is investigated using a dual-porosity finite-difference model.

  5. Fluctuations of the partition function in the generalized random energy model with external field

    NASA Astrophysics Data System (ADS)

    Bovier, Anton; Klimovsky, Anton

    2008-12-01

    We study Derrida's generalized random energy model (GREM) in the presence of uniform external field. We compute the fluctuations of the ground state and of the partition function in the thermodynamic limit for all admissible values of parameters. We find that the fluctuations are described by a hierarchical structure which is obtained by a certain coarse graining of the initial hierarchical structure of the GREM with external field. We provide an explicit formula for the free energy of the model. We also derive some large deviation results providing an expression for the free energy in a class of models with Gaussian Hamiltonians and external field. Finally, we prove that the coarse-grained parts of the system emerging in the thermodynamic limit tend to have a certain optimal magnetization, as prescribed by the strength of the external field and by parameters of the GREM.
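
    For orientation, a schematic form of the object being studied, written in generic statistical-mechanics notation rather than the authors' exact conventions, is the partition function of an N-spin Gaussian model with uniform external field h:

        \[
          Z_N(\beta, h) \;=\; \sum_{\sigma \in \{-1,+1\}^N}
            \exp\!\Big( \beta\, X_\sigma \;+\; \beta h \sum_{i=1}^{N} \sigma_i \Big),
        \]

    where (X_sigma) is a centered Gaussian process whose hierarchical (tree-structured) covariance defines the GREM; the paper analyses the fluctuations of the ground state and of log Z_N in the thermodynamic limit.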

  6. A structural model of the dimensions of teacher stress.

    PubMed

    Boyle, G J; Borg, M G; Falzon, J M; Baglioni, A J

    1995-03-01

    A comprehensive survey of teacher stress, job satisfaction and career commitment among 710 full-time primary school teachers was undertaken by Borg, Riding & Falzon (1991) in the Mediterranean islands of Malta and Gozo. A principal components analysis of a 20-item sources of teacher stress inventory had suggested four distinct dimensions which were labelled: Pupil Misbehaviour, Time/Resource Difficulties, Professional Recognition Needs, and Poor Relationships, respectively. To check on the validity of the Borg et al. factor solution, the group of 710 teachers was randomly split into two separate samples. Exploratory factor analysis was carried out on the data from Sample 1 (N = 335), while Sample 2 (N = 375) provided the cross-validational data for a LISREL confirmatory factor analysis. Results supported the proposed dimensionality of the sources of teacher stress (measurement model), along with evidence of an additional teacher stress factor (Workload). Consequently, structural modelling of the 'causal relationships' between the various latent variables and self-reported stress was undertaken on the combined samples (N = 710). Although both non-recursive and recursive models incorporating Poor Colleague Relations as a mediating variable were tested for their goodness-of-fit, a simple regression model provided the most parsimonious fit to the empirical data, wherein Workload and Student Misbehaviour accounted for most of the variance in predicting teaching stress.

  7. Recursive multibody dynamics and discrete-time optimal control

    NASA Technical Reports Server (NTRS)

    Deleuterio, G. M. T.; Damaren, C. J.

    1989-01-01

    A recursive algorithm is developed for the solution of the simulation dynamics problem for a chain of rigid bodies. Arbitrary joint constraints are permitted, that is, joints may allow translational and/or rotational degrees of freedom. The recursive procedure is shown to be identical to that encountered in a discrete-time optimal control problem. For each relevant quantity in the multibody dynamics problem, there exists an analog in the context of optimal control. The performance index that is minimized in the control problem is identified as Gibbs' function for the chain of bodies.

  8. A decoupled recursive approach for constrained flexible multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung

    1989-01-01

    A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate deformation kinematics under a small-deformation assumption, and an efficient recursive flexible dynamics algorithm is developed. Dynamic analysis of a closed loop robot is performed to illustrate the efficiency of the algorithm.

  9. Contribution of zonal harmonics to gravitational moment

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos M.

    1991-01-01

    It is presently demonstrated that a recursive vector-dyadic expression for the contribution of a zonal harmonic of degree n to the gravitational moment about a small body's center-of-mass is obtainable with a procedure that involves twice differentiating a celestial body's gravitational potential with respect to a vector. The recursive property proceeds from taking advantage of a recursion relation for Legendre polynomials which appear in the gravitational potential. The contribution of the zonal harmonic of degree 2 is consistent with the gravitational moment exerted by an oblate spheroid.
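
    The recursion relation referred to above is Bonnet's recurrence for Legendre polynomials, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x). The snippet below evaluates P_n this way and plugs it into the standard degree-n zonal term of a gravitational potential; the numerical values are Earth-like placeholders, and the moment computation itself (the double differentiation with respect to a vector) is not reproduced.

        import math

        def legendre(n, x):
            """Evaluate the Legendre polynomial P_n(x) with Bonnet's recursion."""
            p_prev, p = 1.0, x                       # P_0 and P_1
            if n == 0:
                return p_prev
            for k in range(1, n):
                p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
            return p

        def zonal_potential_term(mu, R, r, lat, J_n, n):
            """Degree-n zonal contribution to the gravitational potential,
            U_n = -(mu / r) * J_n * (R / r)**n * P_n(sin(latitude))."""
            return -(mu / r) * J_n * (R / r) ** n * legendre(n, math.sin(lat))

        # Hypothetical evaluation: J2 term at 7000 km radius and 30 deg latitude.
        print(zonal_potential_term(mu=3.986e14, R=6.378e6, r=7.0e6,
                                   lat=math.radians(30.0), J_n=1.082e-3, n=2))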

  10. Contribution of zonal harmonics to gravitational moment

    NASA Astrophysics Data System (ADS)

    Roithmayr, Carlos M.

    1991-02-01

    It is presently demonstrated that a recursive vector-dyadic expression for the contribution of a zonal harmonic of degree n to the gravitational moment about a small body's center-of-mass is obtainable with a procedure that involves twice differentiating a celestial body's gravitational potential with respect to a vector. The recursive property proceeds from taking advantage of a recursion relation for Legendre polynomials which appear in the gravitational potential. The contribution of the zonal harmonic of degree 2 is consistent with the gravitational moment exerted by an oblate spheroid.

  11. Recursive Directional Ligation Approach for Cloning Recombinant Spider Silks.

    PubMed

    Dinjaski, Nina; Huang, Wenwen; Kaplan, David L

    2018-01-01

    Recent advances in genetic engineering have provided a route to produce various types of recombinant spider silks. Different cloning strategies have been applied to achieve this goal (e.g., concatemerization, step-by-step ligation, recursive directional ligation). Here we describe recursive directional ligation as an approach that allows for facile modularity and control over the size of the genetic cassettes. This approach is based on sequential ligation of genetic cassettes (monomers) where the junctions between them are formed without interrupting key gene sequences with additional base pairs.

  12. Recursive Construction of Noiseless Subsystem for Qudits

    NASA Astrophysics Data System (ADS)

    Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing

    2014-03-01

    When the environmental noise acting on the system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors to the system. However, utilizing such a subsystem for large systems gets impractical with the increasing number of qudits. A recursive scheme offers a solution to this problem. Here, we review the recursive construct introduced in, which can asymptotically protect 1/d of the qudits in system against collective errors.

  13. Parallel scheduling of recursively defined arrays

    NASA Technical Reports Server (NTRS)

    Myers, T. J.; Gokhale, M. B.

    1986-01-01

    A new method of automatic generation of concurrent programs which constructs arrays defined by sets of recursive equations is described. It is assumed that the time of computation of an array element is a linear combination of its indices, and integer programming is used to seek a succession of hyperplanes along which array elements can be computed concurrently. The method can be used to schedule equations involving variable length dependency vectors and mutually recursive arrays. Portions of the work reported here have been implemented in the PS automatic program generation system.

  14. Hierarchical Recursive Organization and the Free Energy Principle: From Biological Self-Organization to the Psychoanalytic Mind

    PubMed Central

    Connolly, Patrick; van Deventer, Vasi

    2017-01-01

    The present paper argues that a systems theory epistemology (and particularly the notion of hierarchical recursive organization) provides the critical theoretical context within which the significance of Friston's (2010a) Free Energy Principle (FEP) for both evolution and psychoanalysis is best understood. Within this perspective, the FEP occupies a particular level of the hierarchical organization of the organism, which is the level of biological self-organization. This form of biological self-organization is in turn understood as foundational and pervasive to the higher levels of organization of the human organism that are of interest to both neuroscience as well as psychoanalysis. Consequently, central psychoanalytic claims should be restated, in order to be located in their proper place within a hierarchical recursive organization of the (situated) organism. In light of the FEP the realization of the psychoanalytic mind by the brain should be seen in terms of the evolution of different levels of systematic organization where the concepts of psychoanalysis describe a level of hierarchical recursive organization superordinate to that of biological self-organization and the FEP. The implication of this formulation is that while “psychoanalytic” mental processes are fundamentally subject to the FEP, they nonetheless also add their own principles of process over and above that of the FEP. A model found in Grobbelaar (1989) offers a recursive bottom-up description of the self-organization of the psychoanalytic ego as dependent on the organization of language (and affect), which is itself founded upon the tendency toward autopoiesis (self-making) within the organism, which is in turn described as formally similar to the FEP. Meaningful consilience between Grobbelaar's model and the hierarchical recursive description available in Friston's (2010a) theory is described. The paper concludes that the valuable contribution of the FEP to psychoanalysis underscores the necessity of reengagement with the core concepts of psychoanalytic theory, and the usefulness that a systems theory epistemology—particularly hierarchical recursive description—can have for this goal. PMID:29038652

  15. Hierarchical Recursive Organization and the Free Energy Principle: From Biological Self-Organization to the Psychoanalytic Mind.

    PubMed

    Connolly, Patrick; van Deventer, Vasi

    2017-01-01

    The present paper argues that a systems theory epistemology (and particularly the notion of hierarchical recursive organization) provides the critical theoretical context within which the significance of Friston's (2010a) Free Energy Principle (FEP) for both evolution and psychoanalysis is best understood. Within this perspective, the FEP occupies a particular level of the hierarchical organization of the organism, which is the level of biological self-organization. This form of biological self-organization is in turn understood as foundational and pervasive to the higher levels of organization of the human organism that are of interest to both neuroscience as well as psychoanalysis. Consequently, central psychoanalytic claims should be restated, in order to be located in their proper place within a hierarchical recursive organization of the (situated) organism. In light of the FEP the realization of the psychoanalytic mind by the brain should be seen in terms of the evolution of different levels of systematic organization where the concepts of psychoanalysis describe a level of hierarchical recursive organization superordinate to that of biological self-organization and the FEP. The implication of this formulation is that while "psychoanalytic" mental processes are fundamentally subject to the FEP, they nonetheless also add their own principles of process over and above that of the FEP. A model found in Grobbelaar (1989) offers a recursive bottom-up description of the self-organization of the psychoanalytic ego as dependent on the organization of language (and affect), which is itself founded upon the tendency toward autopoiesis (self-making) within the organism, which is in turn described as formally similar to the FEP. Meaningful consilience between Grobbelaar's model and the hierarchical recursive description available in Friston's (2010a) theory is described. The paper concludes that the valuable contribution of the FEP to psychoanalysis underscores the necessity of reengagement with the core concepts of psychoanalytic theory, and the usefulness that a systems theory epistemology-particularly hierarchical recursive description-can have for this goal.

  16. Language and Recursion

    NASA Astrophysics Data System (ADS)

    Lowenthal, Francis

    2010-11-01

    This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as it is used by logicians is presented. The two opposing approaches to the problem of language development are explained. For many authors such as Chomsky [1] the faculty of language is innate. This is known as the Standard Theory; other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child: they thus speak of Language Acquisition. It is also shown that in both cases, a version of the principle of recursion is relevant for human language. The NVCD approach is defined and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored using recursive structures with the young child. It could also be the case that the NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with psychological observations made by adherents of the Language Acquisition approach, if it is confirmed by research combining the use of NVCDs, psychometric methods and neural networks. This paper thus suggests that a research group oriented towards this problem should be organized.

  17. Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    High performance computing is required to make feasible simulations of whole organ models of the heart with biophysically detailed cellular models in a clinical setting. Increasing model detail by simulating both electrophysiology and mechanics increases computational demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Fiber orientation was included. Data decomposition for the distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, together with the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with a higher weight for the tissue elements can be recognized, as expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
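
    The orthogonal recursive bisection used for the data decomposition can be illustrated on a weighted point cloud: recursively cut the point set at the weighted median along its longest axis until the requested number of partitions is reached. The sketch below is generic, with hypothetical weights standing in for the tissue vs. non-tissue load ratios; it is not the partitioner used on the Blue Gene/L.

        import numpy as np

        def orthogonal_recursive_bisection(points, weights, n_parts):
            """Recursively split a weighted point set into n_parts balanced boxes."""
            if n_parts == 1:
                return [np.arange(len(points))]
            axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))  # longest axis
            order = np.argsort(points[:, axis])
            cum = np.cumsum(weights[order])
            half = np.searchsorted(cum, cum[-1] / 2.0)                      # weighted median cut
            left, right = order[: half + 1], order[half + 1:]
            n_left = n_parts // 2
            parts = []
            for idx, k in ((left, n_left), (right, n_parts - n_left)):
                for sub in orthogonal_recursive_bisection(points[idx], weights[idx], k):
                    parts.append(idx[sub])
            return parts

        # Hypothetical voxel cloud: "tissue" voxels weighted 25x more than non-tissue (1:25).
        rng = np.random.default_rng(4)
        pts = rng.random((4000, 3))
        w = np.where(rng.random(4000) < 0.3, 25.0, 1.0)
        parts = orthogonal_recursive_bisection(pts, w, 8)
        print([round(w[p].sum()) for p in parts])       # roughly equal load per partition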

  18. Are species photosynthetic characteristics good predictors of seedling post-hurricane demographic patterns and species spatiotemporal distribution in a hurricane impacted wet montane forest?

    NASA Astrophysics Data System (ADS)

    Luke, Denneko; McLaren, Kurt

    2018-05-01

    In situ measurements of leaf level photosynthetic response to light were collected from seedlings of ten tree species from a tropical montane wet forest, the John Crow Mountains, Jamaica. A model-based recursive partitioning ('mob') algorithm was then used to identify species associations based on their fitted photosynthetic response curves. Leaf area dark respiration (RD) and light saturated maximum photosynthetic (Amax) rates were also used as 'mob' partitioning variables, to identify species associations based on seedling demographic patterns (from June 2007 to May 2010) following a hurricane (Aug. 2007) and the spatiotemporal distribution patterns of stems in 2006 and 2012. RD and Amax rates ranged from 1.14 to 2.02 μmol (CO2) m⁻² s⁻¹ and from 2.97 to 5.87 μmol (CO2) m⁻² s⁻¹, respectively, placing the ten species in the range of intermediate shade tolerance. Several parsimonious species 'mob' groups were formed based on 1) interspecific differences among species response curves, 2) variations in post-hurricane seedling demographic trends and 3) RD rates and species spatiotemporal distribution patterns at aspects that are more or less exposed to hurricanes. The composition of parsimonious groupings based on photosynthetic curves was not concordant with the groups based on demographic trends but was partially concordant with the RD-species spatiotemporal distribution groups. Our results indicated that the influence of photosynthetic characteristics on demographic traits and species distributions was not straightforward. Rather, there was a complex pattern of interaction between ecophysiological and demographic traits, which determined species successional status, post-hurricane response and ultimately, species distribution at our study site.

  19. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  20. Tests of peak flow scaling in simulated self-similar river networks

    USGS Publications Warehouse

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. ?? 2001 Elsevier Science Ltd. All rights reserved.

  1. Do Patients Receiving Whole-Brain Radiotherapy for Brain Metastases From Renal Cell Carcinoma Benefit From Escalation of the Radiation Dose?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rades, Dirk; Heisterkamp, Christine

    2010-10-01

    Purpose: Whole-brain radiotherapy (WBRT) is the most common treatment for brain metastases from renal cell carcinoma (RCC). Most patients cannot receive more aggressive therapies including surgery or radiosurgery. The standard WBRT regimen, 30 Gy/10 fractions (10 x 3 Gy), has resulted in poor overall survival (OS). This study investigates whether escalation of the WBRT dose improves treatment outcomes. Methods and Materials: Data from 60 patients receiving WBRT for brain metastases from RCC were retrospectively analyzed. A dose of 10 x 3 Gy (n = 31) was compared with higher doses (40 Gy/20 fractions or 45 Gy/15 fractions; n = 29) for OS and local control (LC). Additional factors evaluated were patient age, sex, performance status, number of metastases, interval from diagnosis of RCC to WBRT, extracerebral metastases, recursive partitioning analysis (RPA) class, and year of WBRT. Results: The OS at 6 months was 29% after 10 x 3 Gy and 52% after higher doses (p = 0.003). The OS at 12 months was 13% and 47%, respectively. On multivariate analysis, higher WBRT doses (p = 0.022), Karnofsky performance status score ≥70 (p = 0.017), fewer than four brain metastases (p = 0.035), and RPA Class 1 (p = 0.003) resulted in better OS. The LC at 6 months was 21% after 10 x 3 Gy and 57% after higher doses (p = 0.013). The LC at 12 months was 7% and 35%, respectively. On multivariate analysis, fewer than four brain metastases (p < 0.001) were associated with LC. A trend was found for WBRT regimen (p = 0.06) and RPA class (p = 0.06). Conclusions: The findings suggest that escalation of the WBRT dose beyond 10 x 3 Gy improves outcomes in patients with brain metastases from RCC. The results should be confirmed in a randomized trial stratifying for significant prognostic factors.

  2. Predicting Radiation Pneumonitis After Chemoradiation Therapy for Lung Cancer: An International Individual Patient Data Meta-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palma, David A.; Senan, Suresh; Tsujino, Kayoko

    2013-02-01

    Background: Radiation pneumonitis is a dose-limiting toxicity for patients undergoing concurrent chemoradiation therapy (CCRT) for non-small cell lung cancer (NSCLC). We performed an individual patient data meta-analysis to determine factors predictive of clinically significant pneumonitis. Methods and Materials: After a systematic review of the literature, data were obtained on 836 patients who underwent CCRT in Europe, North America, and Asia. Patients were randomly divided into training and validation sets (two-thirds vs one-third of patients). Factors predictive of symptomatic pneumonitis (grade ≥2 by 1 of several scoring systems) or fatal pneumonitis were evaluated using logistic regression. Recursive partitioning analysis (RPA) was used to define risk groups. Results: The median radiation therapy dose was 60 Gy, and the median follow-up time was 2.3 years. Most patients received concurrent cisplatin/etoposide (38%) or carboplatin/paclitaxel (26%). The overall rate of symptomatic pneumonitis was 29.8% (n=249), with fatal pneumonitis in 1.9% (n=16). In the training set, factors predictive of symptomatic pneumonitis were lung volume receiving ≥20 Gy (V20) (odds ratio [OR] 1.03 per 1% increase, P=.008), and carboplatin/paclitaxel chemotherapy (OR 3.33, P<.001), with a trend for age (OR 1.24 per decade, P=.09); the model remained predictive in the validation set with good discrimination in both datasets (c-statistic >0.65). On RPA, the highest risk of pneumonitis (>50%) was in patients >65 years of age receiving carboplatin/paclitaxel. Predictors of fatal pneumonitis were daily dose >2 Gy, V20, and lower-lobe tumor location. Conclusions: Several treatment-related risk factors predict the development of symptomatic pneumonitis, and elderly patients who undergo CCRT with carboplatin-paclitaxel chemotherapy are at highest risk. Fatal pneumonitis, although uncommon, is related to dosimetric factors and tumor location.

  3. Clinical outcomes and prognostic factors in cisplatin versus cetuximab chemoradiation for locally advanced p16 positive oropharyngeal carcinoma.

    PubMed

    Barney, Christian L; Walston, Steve; Zamora, Pedro; Healy, Erin H; Nolan, Nicole; Diavolitsis, Virginia M; Neki, Anterpreet; Rupert, Robert; Savvides, Panos; Agrawal, Amit; Old, Matthew; Ozer, Enver; Carrau, Ricardo; Kang, Stephen; Rocco, James; Teknos, Theodoros; Grecula, John C; Wobb, Jessica; Mitchell, Darrion; Blakaj, Dukagjin; Bhatt, Aashish D

    2018-04-01

    Randomized trials evaluating cisplatin versus cetuximab chemoradiation (CRT) for p16+ oropharyngeal cancer (OPC) have yet to report preliminary data. Meanwhile, as a preemptive step toward morbidity reduction, the off-trial use of cetuximab in p16+ patients is increasing, even in those who could potentially tolerate cisplatin. The purpose of this study was to compare the efficacy of cisplatin versus cetuximab CRT in the treatment of p16+ OPC and to identify prognostic factors and predictors of tumor response. Cases of p16+ OPC treated with cisplatin or cetuximab CRT at our institution from 2010 to 2014 were identified. Recursive partitioning analysis (RPA) classification was used to determine low-risk (LR-RPA) and intermediate-risk (IR-RPA) groups. Log-rank/Kaplan-Meier and Cox Regression methods were used to compare groups. We identified 205 patients who received cisplatin (n = 137) or cetuximab (n = 68) CRT in the definitive (n = 178) or postoperative (n = 27) setting. Median follow-up was 3 years. Cisplatin improved 3-year locoregional control (LRC) [92.7 vs 65.4%], distant metastasis-free survival (DMFS) [88.3 vs 71.2%], recurrence-free survival (RFS) [86.6 vs 50.6%], and overall survival (OS) [92.6 vs 72.2%] compared to cetuximab [all p < .001]. Concurrent cisplatin improved 3-year OS for LR-RPA (97.1 vs 80.3%, p < .001) and IR-RPA (97.1 vs 80.3%, p < .001) groupings. When treating p16+ OPC with CRT, the threshold for substitution of cisplatin with cetuximab should be maintained appropriately high in order to prolong survival times and optimize locoregional and distant tumor control. When cetuximab is used in cisplatin-ineligible patients, altered fractionation RT should be considered in an effort to improve LRC. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Mid-Treatment Sleep Duration Predicts Clinically Significant Knee Osteoarthritis Pain reduction at 6 months: Effects From a Behavioral Sleep Medicine Clinical Trial.

    PubMed

    Salwen, Jessica K; Smith, Michael T; Finan, Patrick H

    2017-02-01

    To determine the relative influence of sleep continuity (sleep efficiency, sleep onset latency, total sleep time [TST], and wake after sleep onset) on clinical pain outcomes within a trial of cognitive behavioral therapy for insomnia (CBT-I) for patients with comorbid knee osteoarthritis and insomnia. Secondary analyses were performed on data from 74 patients with comorbid insomnia and knee osteoarthritis who completed a randomized clinical trial of 8-session multicomponent CBT-I versus an active behavioral desensitization control condition (BD), including a 6-month follow-up assessment. Data used herein include daily diaries of sleep parameters, actigraphy data, and self-report questionnaires administered at specific time points. Patients who reported at least 30% improvement in self-reported pain from baseline to 6-month follow-up were considered responders (N = 31). Pain responders and nonresponders did not differ significantly at baseline across any sleep continuity measures. At mid-treatment, only TST predicted pain response via t tests and logistic regression, whereas other measures of sleep continuity were nonsignificant. Recursive partitioning analyses identified a minimum cut-point of 382 min of TST achieved at mid-treatment in order to best predict pain improvements 6-month posttreatment. Actigraphy results followed the same pattern as daily diary-based results. Clinically significant pain reductions in response to both CBT-I and BD were optimally predicted by achieving approximately 6.5 hr sleep duration by mid-treatment. Thus, tailoring interventions to increase TST early in treatment may be an effective strategy to promote long-term pain reductions. More comprehensive research on components of behavioral sleep medicine treatments that contribute to pain response is warranted. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  5. Predicting Overall Survival After Stereotactic Ablative Radiation Therapy in Early-Stage Lung Cancer: Development and External Validation of the Amsterdam Prognostic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, Alexander V.

    Purpose: A prognostic model for 5-year overall survival (OS), consisting of recursive partitioning analysis (RPA) and a nomogram, was developed for patients with early-stage non-small cell lung cancer (ES-NSCLC) treated with stereotactic ablative radiation therapy (SABR). Methods and Materials: A primary dataset of 703 ES-NSCLC SABR patients was randomly divided into a training (67%) and an internal validation (33%) dataset. In the former group, 21 unique parameters consisting of patient, treatment, and tumor factors were entered into an RPA model to predict OS. Univariate and multivariate models were constructed for RPA-selected factors to evaluate their relationship with OS. A nomogram for OS was constructed based on factors significant in multivariate modeling and validated with calibration plots. Both the RPA and the nomogram were externally validated in independent surgical (n=193) and SABR (n=543) datasets. Results: RPA identified 2 distinct risk classes based on tumor diameter, age, World Health Organization performance status (PS) and Charlson comorbidity index. This RPA had moderate discrimination in SABR datasets (c-index range: 0.52-0.60) but was of limited value in the surgical validation cohort. The nomogram predicting OS included smoking history in addition to RPA-identified factors. In contrast to RPA, the nomogram performed well in internal validation (r² = 0.97) and in the external SABR (r² = 0.79) and surgical cohorts (r² = 0.91). Conclusions: The Amsterdam prognostic model is the first externally validated prognostication tool for OS in ES-NSCLC treated with SABR available to individualize patient decision making. The nomogram retained strong performance across surgical and SABR external validation datasets. RPA performance was poor in surgical patients, suggesting that 2 distinct patient populations are being treated with these 2 effective modalities.

  6. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix, leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body, forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  7. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
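
    A minimal sketch of the recursive least squares core, plus a simple running (recursive) estimate of the residual autocorrelation at a few lags, is shown below. It assumes synthetic regression data and illustrates only the general idea; it is not the flight-data formulation or the specific colored-residual uncertainty correction described in the paper.

    ```python
    import numpy as np

    def rls_with_autocorr(X, y, lam=0.99, n_lags=5, delta=1e3):
        """Recursive least squares (forgetting factor lam) plus a running estimate
        of the residual autocorrelation at lags 0..n_lags."""
        n, p = X.shape
        theta = np.zeros(p)
        P = delta * np.eye(p)                 # large initial parameter covariance
        hist = np.zeros(n_lags + 1)           # most recent residuals, newest first
        acorr = np.zeros(n_lags + 1)
        m = 0
        for t in range(n):
            x = X[t]
            e = y[t] - x @ theta              # a priori residual
            k = P @ x / (lam + x @ P @ x)     # gain vector
            theta = theta + k * e
            P = (P - np.outer(k, x @ P)) / lam
            hist = np.roll(hist, 1)
            hist[0] = e
            if t >= n_lags:                   # running mean of e_t * e_{t-lag}
                m += 1
                acorr += (hist[0] * hist - acorr) / m
        return theta, acorr / max(acorr[0], 1e-12)

    # Toy regression: white-noise residuals, so lag>0 autocorrelation should be small.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
    theta, rho = rls_with_autocorr(X, y)
    print("estimates:", theta.round(3), "lag-1..2 autocorr:", rho[1:3].round(3))
    ```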

  8. Phylogeny, paleontology, and primates: do incomplete fossils bias the tree of life?

    PubMed

    Pattinson, David J; Thompson, Richard S; Piotrowski, Aleks K; Asher, Robert J

    2015-03-01

    Paleontological systematics relies heavily on morphological data that have undergone decay and fossilization. Here, we apply a heuristic means to assess how a fossil's incompleteness detracts from inferring its phylogenetic relationships. We compiled a phylogenetic matrix for primates and simulated the extinction of living species by deleting an extant taxon's molecular data and keeping only those morphological characters present in actual fossils. The choice of characters present in a given living taxon (the subject) was defined by those present in a given fossil (the template). By measuring congruence between a well-corroborated phylogeny to those incorporating artificial fossils, and by comparing real vs. random character distributions and states, we tested the information content of paleontological datasets and determined if extinction of a living species leads to bias in phylogeny reconstruction. We found a positive correlation between fossil completeness and topological congruence. Real fossil templates sampled for 36 or more of the 360 available morphological characters (including dental) performed significantly better than similarly complete templates with random states. Templates dominated by only one partition performed worse than templates with randomly sampled characters across partitions. The template based on the Eocene primate Darwinius masillae performs better than most other templates with a similar number of sampled characters, likely due to preservation of data across multiple partitions. Our results support the interpretation that Darwinius is strepsirhine, not haplorhine, and suggest that paleontological datasets are reliable in primate phylogeny reconstruction. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Displacement Threshold Energy and Recovery in an Al-Ti Nanolayered System with Intrinsic Point Defect Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerboth, Matthew D.; Setyawan, Wahyu; Henager, Charles H.

    2014-01-07

    A method is established and validated using molecular dynamics (MD) to determine the displacement threshold energies as Ed in nanolayered, multilayered systems of dissimilar metals. The method is applied to specifically oriented nanolayered films of Al-Ti where the crystal structure and interface orientations are varied in atomic models and Ed is calculated. Methods for defect detection are developed and discussed based on prior research in the literature and based on specific crystallographic directions available in the nanolayered systems. These are compared and contrasted to similar calculations in corresponding bulk materials, including fcc Al, fcc Ti, hcp Al, and hcp Ti. In all cases, the calculated Ed in the multilayers are intermediate to the corresponding bulk values but exhibit some important directionality. In the nanolayer, defect detection demonstrated systematic differences in the behavior of Ed in each layer. Importantly, collision cascade damage exhibits significant defect partitioning within the Al and Ti layers that is hypothesized to be an intrinsic property of dissimilar nanolayered systems. This type of partitioning could be partly responsible for observed asymmetric radiation damage responses in many multilayered systems. In addition, a pseudo-random direction was introduced to approximate the average Ed without performing numerous simulations with random directions.

  10. Multigene interactions and the prediction of depression in the Wisconsin Longitudinal Study

    PubMed Central

    Roetker, Nicholas S; Yonker, James A; Lee, Chee; Chang, Vicky; Basson, Jacob J; Roan, Carol L; Hauser, Taissa S; Hauser, Robert M

    2012-01-01

    Objectives Single genetic loci offer little predictive power for the identification of depression. This study examined whether an analysis of gene–gene (G × G) interactions of 78 single nucleotide polymorphisms (SNPs) in genes associated with depression and age-related diseases would identify significant interactions with increased predictive power for depression. Design A retrospective cohort study. Setting A survey of participants in the Wisconsin Longitudinal Study. Participants A total of 4811 persons (2464 women and 2347 men) who provided saliva for genotyping; the group comes from a randomly selected sample of Wisconsin high school graduates from the class of 1957 as well as a randomly selected sibling, almost all of whom are non-Hispanic white. Primary outcome measure Depression as determined by the Composite International Diagnostic Interview–Short-Form. Results Using a classification tree approach (recursive partitioning (RP)), the authors identified a number of candidate G × G interactions associated with depression. The primary SNP splits revealed by RP (ANKK1 rs1800497 (also known as DRD2 Taq1A) in men and DRD2 rs224592 in women) were found to be significant as single factors by logistic regression (LR) after controlling for multiple testing (p=0.001 for both). Without considering interaction effects, only one of the five subsequent RP splits reached nominal significance in LR (FTO rs1421085 in women, p=0.008). However, after controlling for G × G interactions by running LR on RP-specific subsets, every split became significant and grew larger in magnitude (OR (before) → (after): men: GNRH1 novel SNP: (1.43 → 1.57); women: APOC3 rs2854116: (1.28 → 1.55), ACVR2B rs3749386: (1.11 → 2.17), FTO rs1421085: (1.32 → 1.65), IL6 rs1800795: (1.12 → 1.85)). Conclusions The results suggest that examining G × G interactions improves the identification of genetic associations predictive of depression. Four of the SNPs identified in these interactions were located in two pathways well known to impact depression: neurotransmitter (ANKK1 and DRD2) and neuroendocrine (GNRH1 and ACVR2B) signalling. This study demonstrates the utility of RP analysis as an efficient and powerful exploratory analysis technique for uncovering genetic and molecular pathway interactions associated with disease aetiology. PMID:22761283

  11. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
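
    The sketch below shows the basic (serial, dense) recursive spectral bisection step: compute the Fiedler vector of the graph Laplacian, split the vertices at its median, and recurse. The multilevel coarsening and the parallelization that make the described implementation fast enough for adaptive meshes are omitted; the toy graph is hypothetical.

    ```python
    import numpy as np

    def fiedler_vector(adj):
        """Eigenvector for the second-smallest eigenvalue of the graph Laplacian
        (dense eigensolver; fine for small graphs)."""
        lap = np.diag(adj.sum(axis=1)) - adj
        vals, vecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
        return vecs[:, 1]

    def recursive_bisection(adj, nodes, depth):
        """Recursively split `nodes` into 2**depth parts by spectral bisection."""
        if depth == 0 or len(nodes) <= 1:
            return [list(nodes)]
        f = fiedler_vector(adj[np.ix_(nodes, nodes)])
        order = np.argsort(f)
        half = len(nodes) // 2
        left = [nodes[i] for i in order[:half]]
        right = [nodes[i] for i in order[half:]]
        return (recursive_bisection(adj, left, depth - 1)
                + recursive_bisection(adj, right, depth - 1))

    # Toy graph: two 4-cliques joined by a single edge; one bisection should
    # recover the two cliques.
    A = np.zeros((8, 8))
    for block in (range(0, 4), range(4, 8)):
        for i in block:
            for j in block:
                if i != j:
                    A[i, j] = 1.0
    A[3, 4] = A[4, 3] = 1.0
    print(recursive_bisection(A, list(range(8)), depth=1))
    ```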

  12. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete time analogy of the KPP-equation—thus translating Bramson's work on the KPP-equation into a discrete time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  13. Finding and testing network communities by lumped Markov chains.

    PubMed

    Piccardi, Carlo

    2011-01-01

    Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called "persistence probability" is associated to a cluster, which is then defined as an "α-community" if such a probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability in individually assessing each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
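
    A small sketch of the quantity described here, assuming a row-stochastic transition matrix P for the random walker: for each candidate cluster, compute the stationary probability mass inside the cluster and the probability of staying inside it for one step; a cluster is then an α-community if this persistence probability is at least α. The example matrix and partition below are hypothetical.

    ```python
    import numpy as np

    def stationary_distribution(P):
        """Stationary distribution of a row-stochastic matrix P (left eigenvector)."""
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return pi / pi.sum()

    def persistence_probabilities(P, partition):
        """For each cluster, the probability that the walker, currently in the
        cluster (under the stationary law), stays in it at the next step."""
        pi = stationary_distribution(P)
        out = []
        for c in partition:
            c = np.asarray(c)
            stay = (pi[c][:, None] * P[np.ix_(c, c)]).sum()
            out.append(stay / pi[c].sum())
        return out

    # Hypothetical 4-state walk with two weakly coupled pairs of states.
    P = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.3, 0.6, 0.1, 0.0],
                  [0.0, 0.1, 0.6, 0.3],
                  [0.0, 0.0, 0.3, 0.7]])
    print(persistence_probabilities(P, [[0, 1], [2, 3]]))  # alpha-communities if >= alpha
    ```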

  14. Tensor Spectral Clustering for Partitioning Higher-order Network Structures.

    PubMed

    Benson, Austin R; Gleich, David F; Leskovec, Jure

    2015-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.

  15. Tensor Spectral Clustering for Partitioning Higher-order Network Structures

    PubMed Central

    Benson, Austin R.; Gleich, David F.; Leskovec, Jure

    2016-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399

  16. Spin dynamics of random Ising chain in coexisting transverse and longitudinal magnetic fields

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Qiang; Jiang, Su-Rong; Kong, Xiang-Mu; Xu, Yu-Liang

    2017-05-01

    The dynamics of the random Ising spin chain in coexisting transverse and longitudinal magnetic fields is studied by the recursion method. Both the spin autocorrelation function and its spectral density are investigated by numerical calculations. It is found that the system's dynamical behaviors depend on the deviation σJ of the random exchange coupling between nearest-neighbor spins and the ratio rlt of the longitudinal and the transverse fields: (i) For rlt = 0, the system undergoes two crossovers from N independent spins precessing about the transverse magnetic field to a collective-mode behavior, and then to a central-peak behavior as σJ increases. (ii) For rlt ≠ 0, the system may exhibit a coexistence behavior of a collective-mode one and a central-peak one. When σJ is small (or large enough), the system undergoes a crossover from a coexistence behavior (or a disordered behavior) to a central-peak behavior as rlt increases. (iii) Increasing σJ depresses the effects of both the transverse and the longitudinal magnetic fields. (iv) The quantum random Ising chain in coexisting magnetic fields may exhibit under-damping and critical-damping characteristics simultaneously. These results indicate that changing the external magnetic fields may control and manipulate the dynamics of the random Ising chain.

  17. On the Hosoya index of a family of deterministic recursive trees

    NASA Astrophysics Data System (ADS)

    Chen, Xufeng; Zhang, Jingyuan; Sun, Weigang

    2017-01-01

    In this paper, we calculate the Hosoya index in a family of deterministic recursive trees with a special feature that includes new nodes which are connected to existing nodes with a certain rule. We then obtain a recursive solution of the Hosoya index based on the operations of a determinant. The computational complexity of our proposed algorithm is O(log2 n) with n being the network size, which is lower than that of the existing numerical methods. Finally, we give a weighted tree shrinking method as a graphical interpretation of the recurrence formula for the Hosoya index.
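
    For a tree, the Hosoya index (the number of matchings, counting the empty matching) can be computed by a simple recursive dynamic program over the vertices, as sketched below; this is a generic illustration, not the determinant-based scheme with O(log² n) complexity proposed in the paper.

    ```python
    import sys
    sys.setrecursionlimit(10000)

    def hosoya_index(tree, root=0):
        """Hosoya index (number of matchings, including the empty one) of a tree
        given as an adjacency dict {vertex: [neighbors]}."""
        def dfs(v, parent):
            f, g = 1, 0          # f: v unmatched in its subtree; g: v matched to a child
            for u in tree[v]:
                if u == parent:
                    continue
                fu, gu = dfs(u, v)
                g = g * (fu + gu) + f * fu   # either v already matched, or match edge v-u
                f = f * (fu + gu)
            return f, g
        f, g = dfs(root, None)
        return f + g

    # The path 0-1-2-3 has 5 matchings: empty, {01}, {12}, {23}, {01,23}.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(hosoya_index(path))      # -> 5
    ```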

  18. Identifying CpG sites associated with eczema via random forest screening of epigenome-scale DNA methylation.

    PubMed

    Quraishi, B M; Zhang, H; Everson, T M; Ray, M; Lockett, G A; Holloway, J W; Tetali, S R; Arshad, S H; Kaushal, A; Rezwan, F I; Karmaus, W

    2015-01-01

    The prevalence of eczema is increasing in industrialized nations. Limited evidence has shown the association of DNA methylation (DNA-M) with eczema. We explored this association at the epigenome-scale to better understand the role of DNA-M. Data from the first generation (F1) of the Isle of Wight (IoW) birth cohort participants and the second generation (F2) were examined in our study. Epigenome-scale DNA methylation of F1 at age 18 years and F2 in cord blood was measured using the Illumina Infinium HumanMethylation450 Beadchip. A total of 307,357 cytosine-phosphate-guanine sites (CpGs) in the F1 generation were screened via recursive random forest (RF) for their potential association with eczema at age 18. Functional enrichment and pathway analysis of resulting genes were carried out using DAVID gene functional classification tool. Log-linear models were performed in F1 to corroborate the identified CpGs. Findings in F1 were further replicated in F2. The recursive RF yielded 140 CpGs, 88 of which showed statistically significant associations with eczema at age 18, corroborated by log-linear models after controlling for false discovery rate (FDR) of 0.05. These CpGs were enriched among many biological pathways, including pathways related to creating transcriptional variety and pathways mechanistically linked to eczema such as cadherins, cell adhesion, gap junctions, tight junctions, melanogenesis, and apoptosis. In the F2 generation, about half of the 83 CpGs identified in F1 showed the same direction of association with eczema risk as in F1, of which two CpGs were significantly associated with eczema risk, cg04850479 of the PROZ gene (risk ratio (RR) = 15.1 in F1, 95 % confidence interval (CI) 1.71, 79.5; RR = 6.82 in F2, 95 % CI 1.52, 30.62) and cg01427769 of the NEU1 gene (RR = 0.13 in F1, 95 % CI 0.03, 0.46; RR = 0.09 in F2, 95 % CI 0.03, 0.36). Via epigenome-scaled analyses using recursive RF followed by log-linear models, we identified 88 CpGs associated with eczema in F1, of which 41 were replicated in F2. Several identified CpGs are located within genes in biological pathways relating to skin barrier integrity, which is central to the pathogenesis of eczema. Novel genes associated with eczema risk were identified (e.g., the PROZ and NEU1 genes).
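
    The general shape of a recursive random forest screen is sketched below: fit a forest, keep the most important features, and repeat until the candidate set is small. The data, the keep fraction, and the stopping size are hypothetical; the actual CpG screening and the subsequent log-linear corroboration in the study are more involved.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def recursive_rf_screen(X, y, keep_frac=0.5, stop_at=50, seed=0):
        """Repeatedly fit a random forest and keep the most important features
        until at most `stop_at` candidates remain."""
        kept = np.arange(X.shape[1])
        while kept.size > stop_at:
            rf = RandomForestClassifier(n_estimators=200, random_state=seed)
            rf.fit(X[:, kept], y)
            order = np.argsort(rf.feature_importances_)[::-1]
            n_keep = max(stop_at, int(kept.size * keep_frac))
            kept = kept[order[:n_keep]]
        return np.sort(kept)

    # Synthetic stand-in: 500 noise "CpGs" plus 5 informative ones (columns 0-4).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 505))
    y = (X[:, :5].sum(axis=1) > 0).astype(int)
    print(recursive_rf_screen(X, y, stop_at=20)[:10])
    ```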

  19. Online damage detection using recursive principal component analysis and recursive condition indicators

    NASA Astrophysics Data System (ADS)

    Krishnan, M.; Bhowmik, B.; Tiwari, A. K.; Hazra, B.

    2017-08-01

    In this paper, a novel baseline free approach for continuous online damage detection of multi degree of freedom vibrating structures using recursive principal component analysis (RPCA) in conjunction with online damage indicators is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal modes in online using the rank-one perturbation method, and subsequently utilized to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/nonlinear-states that indicate damage. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for sample covariance matrices and new data point at each successive time instants, using the rank-one perturbation method. An online condition indicator (CI) based on the L2 norm of the error between actual response and the response projected using recursive eigenvector matrix updates over successive iterations is proposed. This eliminates the need for offline post processing and facilitates online damage detection especially when applied to streaming data. The proposed CI, named recursive residual error, is also adopted for simultaneous spatio-temporal damage detection. Numerical simulations performed on five-degree of freedom nonlinear system under white noise and El Centro excitations, with different levels of nonlinearity simulating the damage scenarios, demonstrate the robustness of the proposed algorithm. Successful results obtained from practical case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, for full sensors and underdetermined cases; and data from recorded responses of the UCLA Factor building (full data and its subset) demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference free structural health monitoring.
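
    A simplified sketch of the idea is given below: the covariance estimate is updated recursively with a forgetting factor, the leading proper orthogonal modes are extracted, and the condition indicator is the L2 norm of each new sample's residual outside that subspace. For clarity the covariance is re-diagonalized at every step rather than updated by the rank-one eigen-perturbation used in the paper; the five-channel data are synthetic.

    ```python
    import numpy as np

    def recursive_pca_indicator(data, n_modes=2, forget=0.98):
        """Online PCA with a forgetting factor; the condition indicator is the L2
        norm of each new sample's residual outside the current principal subspace."""
        n, p = data.shape
        mean = data[0].astype(float).copy()
        cov = np.eye(p) * 1e-3
        ci = np.zeros(n)
        for t in range(1, n):
            x = data[t]
            mean = forget * mean + (1 - forget) * x
            d = x - mean
            cov = forget * cov + (1 - forget) * np.outer(d, d)
            vals, vecs = np.linalg.eigh(cov)
            U = vecs[:, -n_modes:]                # leading proper orthogonal modes
            resid = d - U @ (U.T @ d)             # part of the sample outside the subspace
            ci[t] = np.linalg.norm(resid)
        return ci

    # Synthetic 5-channel response driven by 2 modes; channel 3 gains extra,
    # uncorrelated motion halfway through, mimicking a damage-induced change.
    rng = np.random.default_rng(2)
    data = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 5))
    data += 0.01 * rng.normal(size=data.shape)
    data[500:, 3] += 0.5 * rng.normal(size=500)
    ci = recursive_pca_indicator(data)
    print("pre-damage CI:", ci[300:500].mean(), "post-damage CI:", ci[600:800].mean())
    ```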

  20. Using Replicates in Information Retrieval Evaluation.

    PubMed

    Voorhees, Ellen M; Samarov, Daniel; Soboroff, Ian

    2017-09-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions-something not possible without replicates-yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness.

  1. Using Replicates in Information Retrieval Evaluation

    PubMed Central

    VOORHEES, ELLEN M.; SAMAROV, DANIEL; SOBOROFF, IAN

    2018-01-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions—something not possible without replicates—yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness. PMID:29905334

  2. Recursive Vocal Pattern Learning and Generalization in Starlings

    ERIC Educational Resources Information Center

    Bloomfield, Tiffany Corinna

    2012-01-01

    Among known communication systems, human language alone exhibits open-ended productivity of meaning. Interest in the psychological mechanisms supporting this ability, and their evolutionary origins, has resurged following the suggestion that the only uniquely human ability underlying language is a mechanism of recursion. This "Unique…

  3. Fibonacci Imposters

    ERIC Educational Resources Information Center

    Simons, C. S.; Wright, M.

    2007-01-01

    With Simson's 1753 paper as a starting point, the current paper reports investigations of Simson's identity (also known as Cassini's) for the Fibonacci sequence as a means to explore some fundamental ideas about recursion. Simple algebraic operations allow one to reduce the standard linear Fibonacci recursion to the nonlinear Simson recursion…
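
    Simson's (Cassini's) identity states that F(n-1)F(n+1) - F(n)² = (-1)ⁿ; the few lines below check it numerically for small n, as a minimal illustration.

    ```python
    def fib(n):
        """Iterative Fibonacci with F(0) = 0, F(1) = 1."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for n in range(1, 12):
        assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n
    print("Simson/Cassini identity holds for n = 1..11")
    ```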

  4. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of recursive least squares (RLS) method equations is given as special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates application of RLS to multicollinearity problem.
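
    The correspondence can be seen in a few lines: a Kalman filter with identity state transition, zero process noise, and observation row H_t = x_tᵀ reduces to the recursive least squares update, as in this sketch on synthetic data (the variable names and example are hypothetical).

    ```python
    import numpy as np

    def kalman_constant_state(X, y, r=1.0, p0=1e3):
        """Kalman filter with F = I, Q = 0 and H_t = x_t'; algebraically identical
        to recursive least squares for the regression coefficients."""
        p = X.shape[1]
        theta = np.zeros(p)                 # state estimate = regression coefficients
        P = p0 * np.eye(p)                  # state covariance
        for x, yt in zip(X, y):
            S = x @ P @ x + r               # innovation variance
            K = P @ x / S                   # Kalman gain
            theta = theta + K * (yt - x @ theta)
            P = P - np.outer(K, x @ P)
        return theta

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 2))
    y = X @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=300)
    print(kalman_constant_state(X, y).round(3))
    ```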

  5. A Note on Discrete Mathematics and Calculus.

    ERIC Educational Resources Information Center

    O'Reilly, Thomas J.

    1987-01-01

    Much of the current literature on the topic of discrete mathematics and calculus during the first two years of an undergraduate mathematics curriculum is cited. A relationship between the recursive integration formulas and recursively defined polynomials is described. A Pascal program is included. (Author/RH)
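
    As one concrete example of a recursive integration formula, integration by parts gives I(n) = ∫₀¹ xⁿ eˣ dx = e - n·I(n-1) with I(0) = e - 1. A short recursive implementation and a crude numerical check are sketched below (in Python rather than the article's Pascal); the forward recursion is fine for small n but loses accuracy as n grows.

    ```python
    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def I(n):
        """I(n) = integral_0^1 x^n e^x dx via the reduction I(n) = e - n*I(n-1)."""
        if n == 0:
            return math.e - 1.0
        return math.e - n * I(n - 1)

    def midpoint(n, steps=100000):
        """Crude midpoint-rule check of the same integral."""
        h = 1.0 / steps
        return h * sum(((k + 0.5) * h) ** n * math.exp((k + 0.5) * h) for k in range(steps))

    for n in (1, 2, 5):
        print(n, round(I(n), 6), round(midpoint(n), 6))
    ```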

  6. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.

  7. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.

  8. Fermionic Approach to Weighted Hurwitz Numbers and Topological Recursion

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Chapuy, G.; Eynard, B.; Harnad, J.

    2017-12-01

    A fermionic representation is given for all the quantities entering in the generating function approach to weighted Hurwitz numbers and topological recursion. This includes: KP and 2D Toda {τ} -functions of hypergeometric type, which serve as generating functions for weighted single and double Hurwitz numbers; the Baker function, which is expanded in an adapted basis obtained by applying the same dressing transformation to all vacuum basis elements; the multipair correlators and the multicurrent correlators. Multiplicative recursion relations and a linear differential system are deduced for the adapted bases and their duals, and a Christoffel-Darboux type formula is derived for the pair correlator. The quantum and classical spectral curves linking this theory with the topological recursion program are derived, as well as the generalized cut-and-join equations. The results are detailed for four special cases: the simple single and double Hurwitz numbers, the weakly monotone case, corresponding to signed enumeration of coverings, the strongly monotone case, corresponding to Belyi curves and the simplest version of quantum weighted Hurwitz numbers.

  9. Fermionic Approach to Weighted Hurwitz Numbers and Topological Recursion

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Chapuy, G.; Eynard, B.; Harnad, J.

    2018-06-01

    A fermionic representation is given for all the quantities entering in the generating function approach to weighted Hurwitz numbers and topological recursion. This includes: KP and 2 D Toda {τ} -functions of hypergeometric type, which serve as generating functions for weighted single and double Hurwitz numbers; the Baker function, which is expanded in an adapted basis obtained by applying the same dressing transformation to all vacuum basis elements; the multipair correlators and the multicurrent correlators. Multiplicative recursion relations and a linear differential system are deduced for the adapted bases and their duals, and a Christoffel-Darboux type formula is derived for the pair correlator. The quantum and classical spectral curves linking this theory with the topological recursion program are derived, as well as the generalized cut-and-join equations. The results are detailed for four special cases: the simple single and double Hurwitz numbers, the weakly monotone case, corresponding to signed enumeration of coverings, the strongly monotone case, corresponding to Belyi curves and the simplest version of quantum weighted Hurwitz numbers.

  10. Discovery of a Recursive Principle: An Artificial Grammar Investigation of Human Learning of a Counting Recursion Language

    PubMed Central

    Cho, Pyeong Whan; Szkudlarek, Emily; Tabor, Whitney

    2016-01-01

    Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to comprehend how a motor skill or a vocabulary can be progressively learned—in each case, one can conceptualize a series of intermediate steps which lead to the formation of a proficient behavior. With grammar, it is more difficult to think in these terms. For example, center embedding recursive structures seem to involve a complex interplay between multiple symbolic rules which have to be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or “artificial grammar”) learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, anbn, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence; nevertheless, they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when we present the levels in succession, starting with the lower levels and including more and more of the higher levels, participants show evidence of transitioning between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; (iv) the intermediate patterns of behavior are associated with perturbations of an attractor in the sense of dynamical systems theory. We argue that all of these behaviors indicate a theory of mental representation in which recursive systems lie on a continuum of grammar systems which are organized so that grammars that produce similar behaviors are near one another, and that people learning a recursive system are navigating progressively through the space of these grammars. PMID:27375543

  11. Discovery of a Recursive Principle: An Artificial Grammar Investigation of Human Learning of a Counting Recursion Language.

    PubMed

    Cho, Pyeong Whan; Szkudlarek, Emily; Tabor, Whitney

    2016-01-01

    Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to comprehend how a motor skill or a vocabulary can be progressively learned; in each case, one can conceptualize a series of intermediate steps which lead to the formation of a proficient behavior. With grammar, it is more difficult to think in these terms. For example, center embedding recursive structures seem to involve a complex interplay between multiple symbolic rules which have to be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or "artificial grammar") learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, aⁿbⁿ, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence; nevertheless, they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when we present the levels in succession, starting with the lower levels and including more and more of the higher levels, participants show evidence of transitioning between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; (iv) the intermediate patterns of behavior are associated with perturbations of an attractor in the sense of dynamical systems theory. We argue that all of these behaviors indicate a theory of mental representation in which recursive systems lie on a continuum of grammar systems which are organized so that grammars that produce similar behaviors are near one another, and that people learning a recursive system are navigating progressively through the space of these grammars.

  12. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the go, a similar technique can be used for the IKF as well.

  13. Tree-manipulating systems and Church-Rosser theorems.

    NASA Technical Reports Server (NTRS)

    Rosen, B. K.

    1973-01-01

    Study of a broad class of tree-manipulating systems called subtree replacement systems. The use of this framework is illustrated by general theorems analogous to the Church-Rosser theorem and by applications of these theorems. Sufficient conditions are derived for the Church-Rosser property, and their applications to recursive definitions, the lambda calculus, and parallel programming are discussed. McCarthy's (1963) recursive calculus is extended by allowing a choice between call-by-value and call-by-name. It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. It is also shown that these functions solve their defining equations in a 'canonical' manner.

  14. Teaching Non-Recursive Binary Searching: Establishing a Conceptual Framework.

    ERIC Educational Resources Information Center

    Magel, E. Terry

    1989-01-01

    Discusses problems associated with teaching non-recursive binary searching in computer language classes, and describes a teacher-directed dialog based on dictionary use that helps students use their previous searching experiences to conceptualize the binary search process. Algorithmic development is discussed and appropriate classroom discussion…
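
    For reference, the non-recursive (iterative) binary search being taught can be written with two index bounds and a loop, as in this minimal sketch.

    ```python
    def binary_search(items, target):
        """Iterative binary search over a sorted list; returns an index or -1."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
    print(binary_search([2, 3, 5, 7, 11, 13], 8))   # -> -1
    ```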

  15. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
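
    The same recursion is easy to express in code: each segment is replaced by four shorter segments, with the apex obtained by rotating the middle third by 60 degrees. The sketch below (in Python rather than the article's PASCAL) generates the 2-D Koch curve points; the three-dimensional tetrahedron version follows the same recursive pattern.

    ```python
    import cmath
    import math

    def koch(p1, p2, depth):
        """Return the points of the Koch curve between complex endpoints p1 and p2."""
        if depth == 0:
            return [p1, p2]
        d = (p2 - p1) / 3
        a, b = p1 + d, p1 + 2 * d                        # trisection points
        apex = a + d * cmath.exp(1j * math.pi / 3)       # middle third rotated by 60 deg
        pts = []
        for q1, q2 in ((p1, a), (a, apex), (apex, b), (b, p2)):
            pts.extend(koch(q1, q2, depth - 1)[:-1])     # drop shared endpoint
        pts.append(p2)
        return pts

    curve = koch(0 + 0j, 1 + 0j, 3)
    print(len(curve))       # 4**3 + 1 = 65 points
    ```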

  16. The Free Energy in the Derrida-Retaux Recursive Model

    NASA Astrophysics Data System (ADS)

    Hu, Yueyun; Shi, Zhan

    2018-05-01

    We are interested in a simple max-type recursive model studied by Derrida and Retaux (J Stat Phys 156:268-290, 2014) in the context of a physics problem, and find a wide range for the exponent in the free energy in the nearly supercritical regime.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kishi, Takahiro; Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Ueki, Nami

    Purpose: This study aimed to evaluate the prognostic significance of the modified Glasgow Prognostic Score (mGPS) in patients with non-small cell lung cancer (NSCLC) who received stereotactic body radiation therapy (SBRT). Methods and Materials: Data from 165 patients who underwent SBRT for stage I NSCLC with histologic confirmation from January 1999 to September 2010 were collected retrospectively. Factors, including age, performance status, histology, Charlson comorbidity index, mGPS, and recursive partitioning analysis (RPA) class based on sex and T stage, were evaluated with regard to overall survival (OS) using the Cox proportional hazards model. The impact of the mGPS on cause of death and failure patterns was also analyzed. Results: The 3-year OS was 57.9%, with a median follow-up time of 3.5 years. A higher mGPS correlated significantly with poor OS (P<.001). The 3-year OS of lower mGPS patients was 66.4%, whereas that of higher mGPS patients was 44.5%. On multivariate analysis, mGPS and RPA class were significant factors for OS. A higher mGPS correlated significantly with lung cancer death (P=.019) and distant metastasis (P=.013). Conclusions: The mGPS was a significant predictor of clinical outcomes for SBRT in NSCLC patients.

  18. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with a few labeled samples is a major challenge which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploited spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band which utilizes the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using the SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise in the classification result. To preserve the spatial structure of hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images prove that the proposed spectral-spatial classification approach is robust and offers more classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  19. A prediction model of drug-induced ototoxicity developed by an optimal support vector machine (SVM) method.

    PubMed

    Zhou, Shu; Li, Guo-Bo; Huang, Lu-Yi; Xie, Huan-Zhang; Zhao, Ying-Lan; Chen, Yu-Zong; Li, Lin-Li; Yang, Sheng-Yong

    2014-08-01

    Drug-induced ototoxicity, as a toxic side effect, is an important issue needed to be considered in drug discovery. Nevertheless, current experimental methods used to evaluate drug-induced ototoxicity are often time-consuming and expensive, indicating that they are not suitable for a large-scale evaluation of drug-induced ototoxicity in the early stage of drug discovery. We thus, in this investigation, established an effective computational prediction model of drug-induced ototoxicity using an optimal support vector machine (SVM) method, GA-CG-SVM. Three GA-CG-SVM models were developed based on three training sets containing agents bearing different risk levels of drug-induced ototoxicity. For comparison, models based on naïve Bayesian (NB) and recursive partitioning (RP) methods were also used on the same training sets. Among all the prediction models, the GA-CG-SVM model II showed the best performance, which offered prediction accuracies of 85.33% and 83.05% for two independent test sets, respectively. Overall, the good performance of the GA-CG-SVM model II indicates that it could be used for the prediction of drug-induced ototoxicity in the early stage of drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
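
    The kind of comparison reported here can be set up as in the sketch below, which scores an RBF-kernel SVM, a naive Bayes classifier, and a single decision tree (recursive partitioning) by cross-validation on synthetic binary fingerprint-like data. Hyperparameters are fixed rather than optimized by the paper's GA-CG search, and the data are not the ototoxicity sets.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for binary fingerprint descriptors with a toxicity label.
    X, y = make_classification(n_samples=400, n_features=200, n_informative=20,
                               random_state=0)
    X = (X > 0).astype(int)          # binarize to mimic fingerprint bits

    models = {
        "SVM (RBF)": SVC(C=1.0, gamma="scale"),
        "naive Bayes": BernoulliNB(),
        "recursive partitioning": DecisionTreeClassifier(max_depth=5, random_state=0),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name:>22s}: {acc:.3f}")
    ```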

  20. An introduction to tree-structured modeling with application to quality of life data.

    PubMed

    Su, Xiaogang; Azuero, Andres; Cho, June; Kvale, Elizabeth; Meneses, Karen M; McNees, M Patrick

    2011-01-01

    Investigators addressing nursing research are faced increasingly with the need to analyze data that involve variables of mixed types and are characterized by complex nonlinearity and interactions. Tree-based methods, also called recursive partitioning, are gaining popularity in various fields. In addition to efficiency and flexibility in handling multifaceted data, tree-based methods offer ease of interpretation. The aims of this study were to introduce tree-based methods, discuss their advantages and pitfalls in application, and describe their potential use in nursing research. In this article, (a) an introduction to tree-structured methods is presented, (b) the technique is illustrated via quality of life (QOL) data collected in the Breast Cancer Education Intervention study, and (c) implications for their potential use in nursing research are discussed. As illustrated by the QOL analysis example, tree methods generate interesting and easily understood findings that cannot be uncovered via traditional linear regression analysis. The expanding breadth and complexity of nursing research may entail the use of new tools to improve efficiency and gain new insights. In certain situations, tree-based methods offer an attractive approach that help address such needs.

  1. The simultaneous evolution of author and paper networks

    PubMed Central

    Börner, Katy; Maru, Jeegar T.; Goldstone, Robert L.

    2004-01-01

    There has been a long history of research into the structure and evolution of mankind's scientific endeavor. However, recent progress in applying the tools of science to understand science itself has been unprecedented because only recently has there been access to high-volume and high-quality data sets of scientific output (e.g., publications, patents, grants) and computers and algorithms capable of handling this enormous stream of data. This article reviews major work on models that aim to capture and recreate the structure and dynamics of scientific evolution. We then introduce a general process model that simultaneously grows coauthor and paper citation networks. The statistical and dynamic properties of the networks generated by this model are validated against a 20-year data set of articles published in PNAS. Systematic deviations from a power law distribution of citations to papers are well fit by a model that incorporates a partitioning of authors and papers into topics, a bias for authors to cite recent papers, and a tendency for authors to cite papers cited by papers that they have read. In this TARL model (for topics, aging, and recursive linking), the number of topics is linearly related to the clustering coefficient of the simulated paper citation network. PMID:14976254

  2. Predicting appointment misses in hospitals using data analytics

    PubMed Central

    Karpagam, Sylvia; Ma, Nang Laik

    2017-01-01

    Background Non-attendance in hospitals and its clinical and economic consequences have attracted growing attention over the last few years, and several studies have documented its various aspects. The Predicting Appointment Misses (PAM) project was started with the intention of predicting which types of patients would not come for appointments after making bookings. Methods Historic hospital appointment data, merged with a "distance from hospital" variable, were used to run Logistic Regression, Support Vector Machine, and Recursive Partitioning models to identify the variables contributing to missed appointments. Results Variables related to "class", "time", and "demographics" have an effect on the target variable; however, prediction models may not perform effectively because this influence on the target variable is very subtle. Previously assumed major contributors such as "age" and "distance" did not have a major effect on the target variable. Conclusions With the given data it will be very difficult to make a moderate or strong prediction of appointment misses. That said, with the help of the chosen cut-off we are able to capture all of the "appointment misses" in addition to capturing the actualized appointments. PMID:28567409

  3. Preliminary clinical outcomes of image-guided 3-dimensional conformal radiotherapy for limited brain metastases instead of stereotactic irradiation referral.

    PubMed

    Ohtakara, Kazuhiro; Hoshi, Hiroaki

    2014-06-01

    To determine the preliminary clinical outcomes of image-guided 3-dimensional conformal radiotherapy (IG-3DCRT) for limited but variably-sized brain metastases (BM). Sixty-two lesions in 24 patients were retrospectively evaluated; out of these patients 75% were ≥ 65 years of age, and 37.5% were categorized into recursive partitioning analysis (RPA) class 3. The median value for the maximum diameter of the lesions was 19 mm (range=4-72 mm). The median sole treatment dose was 36 Gy in 10 fractions. The median survival durations after IG-3DCRT were 12.0 months and 3.2 months for patients categorized into RPA classes ≤ 2 and 3, respectively. Local recurrences occurred in two lesions with a 6-month local control probability of 93.0%. Major toxicities included radiation necrosis in two patients. IG-3DCRT is feasible even for patients with limited BM who are categorized into RPA class 3, and confers clinical outcomes comparable to those of stereotactic radiosurgery, including excellent local control and minimal toxicity even for large tumors. Copyright© 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  4. The association between neighborhood characteristics and body size and physical activity in the California teachers study cohort.

    PubMed

    Keegan, Theresa H M; Hurley, Susan; Goldberg, Debbie; Nelson, David O; Reynolds, Peggy; Bernstein, Leslie; Horn-Ross, Pam L; Gomez, Scarlett L

    2012-04-01

    We considered interactions between physical activity and body mass index (BMI) and neighborhood factors. We used recursive partitioning to identify predictors of low recreational physical activity (< 2.5 hours/week) and overweight and obesity (BMI ≥ 25.0 kg/m(2)) among 118,315 women in the California Teachers Study. Neighborhood characteristics were based on 2000 US Census data and Reference US business listings. Low physical activity and being overweight or obese were associated with individual sociodemographic characteristics, including race/ethnicity and age. Among White women aged 36 to 75 years, living in neighborhoods with more household crowding was associated with a higher probability of low physical activity (54% vs 45% to 51%). In less crowded neighborhoods where more people worked outside the home, the existence of fewer neighborhood amenities was associated with a higher probability of low physical activity (51% vs 46%). Among non-African American middle-aged women, living in neighborhoods with a lower socioeconomic status was associated with a higher probability of being overweight or obese (46% to 59% vs 38% in high-socioeconomic status neighborhoods). Associations between physical activity, overweight and obesity, and the built environment varied by sociodemographic characteristics in this educated population.

  5. Development of novel prediction model for drug-induced mitochondrial toxicity by using naïve Bayes classifier method.

    PubMed

    Zhang, Hui; Yu, Peng; Ren, Ji-Xia; Li, Xi-Bo; Wang, He-Li; Ding, Lan; Kong, Wei-Bao

    2017-12-01

    Mitochondrial dysfunction has been considered as an important contributing factor in the etiology of drug-induced organ toxicity, and even plays an important role in the pathogenesis of some diseases. The objective of this investigation was to develop a novel prediction model of drug-induced mitochondrial toxicity by using a naïve Bayes classifier. For comparison, the recursive partitioning classifier prediction model was also constructed. Among these methods, the prediction performance of naïve Bayes classifier established here showed best, which yielded average overall prediction accuracies for the internal 5-fold cross validation of the training set and external test set were 95 ± 0.6% and 81 ± 1.1%, respectively. In addition, four important molecular descriptors and some representative substructures of toxicants produced by ECFP_6 fingerprints were identified. We hope the established naïve Bayes prediction model can be employed for the mitochondrial toxicity assessment, and these obtained important information of mitochondrial toxicants can provide guidance for medicinal chemists working in drug discovery and lead optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Superimposed Code Theorectic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2010-03-01

    because only certain collections (partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic ... rigidity of short oligos and the shape of the polar charge. Oligo movement was modeled by a Brownian-motion three-dimensional random walk. The one ... temperature, kB is Boltzmann's constant ... the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three-dimensional lattice and may ...

  7. Check-Standard Testing Across Multiple Transonic Wind Tunnels with the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2012-01-01

    This paper reports the results of an analysis of wind tunnel data acquired in support of the Facility Analysis Verification & Operational Reliability (FAVOR) project. The analysis uses methods referred to collectively at Langley Research Center as the Modern Design of Experiments (MDOE). These methods quantify the total variance in a sample of wind tunnel data and partition it into explained and unexplained components. The unexplained component is further partitioned into random and systematic components. This analysis was performed on data acquired in similar wind tunnel tests executed in four different U.S. transonic facilities. The measurement environment of each facility was quantified and compared.

  8. Scanning gate microscopy of quantum rings: effects of an external magnetic field and of charged defects.

    PubMed

    Pala, M G; Baltazar, S; Martins, F; Hackens, B; Sellier, H; Ouisse, T; Bayot, V; Huant, S

    2009-07-01

    We study scanning gate microscopy (SGM) in open quantum rings obtained from buried semiconductor InGaAs/InAlAs heterostructures. By performing a theoretical analysis based on the Keldysh-Green function approach we interpret the radial fringes observed in experiments as the effect of randomly distributed charged defects. We associate SGM conductance images with the local density of states (LDOS) of the system. We show that such an association cannot be made with the current density distribution. By varying an external magnetic field we are able to reproduce recursive quasi-classical orbits in LDOS and conductance images, which bear the same periodicity as the Aharonov-Bohm effect.

  9. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of the maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when the classification accuracy is important.

  10. Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance

    NASA Astrophysics Data System (ADS)

    Kornfeld, Gertrude H.

    1987-09-01

    Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift variant recursive convolution algorithm that well describes thermal properties was invented and because each picture element (pixel) has radiative temperature, a materials parameter and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.

  11. Estimation in Linear Systems Featuring Correlated Uncertain Observations Coming from Multiple Sensors

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2009-08-01

    In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.

  12. International Conference on Random Mappings, Partitions and Permutations Held in Los Angeles, California on 3-6 January 1992

    DTIC Science & Technology

    1992-08-01

    cryptography to the simulation of epidemic processes and tests of the intrinsic randomness of quantum mechanics. Discussed in this paper are ... theorem concerning the height of a random labelled rooted tree [4]; letting f(s) = ½ + ..., G(t) = 1 - e^(-t) if t > 0, G(t) = 0 otherwise, and ...

  13. Asymptotic Expansion of β Matrix Models in the One-cut Regime

    NASA Astrophysics Data System (ADS)

    Borot, G.; Guionnet, A.

    2013-01-01

    We prove the existence of a 1/N expansion to all orders in β matrix models with a confining, off-critical potential corresponding to an equilibrium measure with a connected support. Thus, the coefficients of the expansion can be obtained recursively by the "topological recursion" derived in Chekhov and Eynard (JHEP 0612:026, 2006). Our method relies on the combination of a priori bounds on the correlators and the study of Schwinger-Dyson equations, thanks to the use of classical complex analysis techniques. These a priori bounds can be derived following (Boutet de Monvel et al. in J Stat Phys 79(3-4):585-611, 1995; Johansson in Duke Math J 91(1):151-204, 1998; Kriecherbauer and Shcherbina in Fluctuations of eigenvalues of matrix models and their applications, 2010) or, for strictly convex potentials, by using concentration of measure (Anderson et al. in An introduction to random matrices, Sect. 2.3, Cambridge University Press, Cambridge, 2010). Doing so, we extend the strategy of Guionnet and Maurel-Segala (Ann Probab 35:2160-2212, 2007), from the Hermitian models (β = 2) and perturbative potentials, to general β models. The existence of the first correction in 1/N was considered in Johansson (1998) and more recently in Kriecherbauer and Shcherbina (2010). Here, by taking similar hypotheses, we extend the result to all orders in 1/N.

  14. Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.

    PubMed

    Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran

    2018-06-15

    Feature selection, which identifies a set of most informative features from the original feature space, has been widely used to simplify the predictor. Recursive feature elimination (RFE), as one of the most popular feature selection approaches, is effective in reducing data dimensionality and increasing efficiency. A ranking of features, as well as candidate subsets with the corresponding accuracy, is produced through RFE. The subset with the highest accuracy (HA) or with a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, or, if there is no prior knowledge about this preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants to automatically select the optimal feature subset. The random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two very different molecular biology datasets, one for a toxicogenomic study and the other for protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
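
    A minimal sketch of random-forest-based recursive feature elimination using scikit-learn's generic RFECV routine is shown below; the synthetic dataset, parameter values and cross-validation settings are illustrative assumptions, not the pipeline or decision variants used in the study.

```python
# Minimal sketch of random-forest-based recursive feature elimination (RF-RFE),
# using scikit-learn; the dataset here is synthetic, not the datasets from the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

# RFECV ranks features by repeatedly dropping the least important ones and
# uses cross-validated accuracy to choose the subset size automatically.
selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=1,
    cv=StratifiedKFold(5),
    scoring="accuracy",
)
selector.fit(X, y)
print("optimal number of features:", selector.n_features_)
print("selected feature mask:", selector.support_)
```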

  15. User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.

    1982-01-01

    PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.

  16. Split-remerge method for eliminating processing window artifacts in recursive hierarchical segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2010-01-01

    A method, computer readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions, including splitting-remerging of pixels with contiguous region designations and a user-controlled parameter for providing a preference for merging adjacent regions to eliminate window artifacts.

  17. A Recursive Theory for the Mathematical Understanding--Some Elements and Implications.

    ERIC Educational Resources Information Center

    Pirie, Susan; Kieren, Thomas

    There has been considerable interest in mathematical understanding. Both those attempting to build, and those questioning the possibility of building intelligent artificial tutoring systems, struggle with the notions of mathematical understanding. The purpose of this essay is to show a transcendently recursive theory of mathematical understanding…

  18. Recursion, Computers and Art

    ERIC Educational Resources Information Center

    Kemp, Andy

    2007-01-01

    "Geomlab" is a functional programming language used to describe pictures that are made up of tiles. The beauty of "Geomlab" is that it introduces students to recursion, a very powerful mathematical concept, through a very simple and enticing graphical environment. Alongside the software is a series of eight worksheets which lead into producing…

  19. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
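
    The constrained-reconstruction idea described above can be sketched as follows; this is a simplified illustration under stated assumptions (a single 8x8 block, uniform scalar quantization, and a plain smoothing step standing in for the non-Gaussian MRF gradient-projection update), not the authors' algorithm.

```python
# Sketch of the constrained reconstruction idea: alternate a smoothing update
# (standing in for the MRF-model gradient-projection step) with a projection of
# DCT coefficients back into their quantization cells.
# Assumptions: one 8x8 block, uniform scalar quantization with step `delta`.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
delta = 16.0
q_index = np.round(dctn(block, norm="ortho") / delta)        # quantized coefficients
lo, hi = (q_index - 0.5) * delta, (q_index + 0.5) * delta    # quantization cells

x = idctn(q_index * delta, norm="ortho")                     # centroid reconstruction
for _ in range(20):
    # crude smoothness update in place of the Markov-random-field gradient step
    smooth = 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                     + np.roll(x, 1, 1) + np.roll(x, -1, 1))
    x = 0.5 * x + 0.5 * smooth
    # project back into the cells defined by the compressed image
    coeffs = np.clip(dctn(x, norm="ortho"), lo, hi)
    x = idctn(coeffs, norm="ortho")
```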

  20. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  1. Recursive analytical solution describing artificial satellite motion perturbed by an arbitrary number of zonal terms

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical first order solution has been developed which describes the motion of an artificial satellite perturbed by an arbitrary number of zonal harmonics of the geopotential. A set of recursive relations for the solution, which was deduced from recursive relations of the geopotential, was derived. The method of solution is based on von Zeipel's technique applied to a canonical set of two-body elements in the extended phase space which incorporates the true anomaly as a canonical element. The elements are of Poincaré type, that is, they are regular for vanishing eccentricities and inclinations. Numerical results show that this solution is accurate to within a few meters after 500 revolutions.
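
    The recursive relations of the geopotential referred to above ultimately rest on recursions for Legendre polynomials; the sketch below shows only the standard three-term Legendre recursion as an illustration, not the paper's full set of recursive relations for the perturbation solution.

```python
# Sketch of the standard three-term recursion for Legendre polynomials,
# the building block behind recursive evaluation of zonal harmonics:
# (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
def legendre_sequence(x: float, n_max: int) -> list[float]:
    p = [1.0, x]  # P_0, P_1
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: n_max + 1]

# the zonal geopotential term of degree n is proportional to J_n * P_n(sin(latitude))
print(legendre_sequence(0.5, 4))  # [1.0, 0.5, -0.125, -0.4375, -0.2890625]
```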

  2. The Massachusetts abscess rule: a clinical decision rule using ultrasound to identify methicillin-resistant Staphylococcus aureus in skin abscesses.

    PubMed

    Gaspari, Romolo J; Blehar, David; Polan, David; Montoya, Anthony; Alsulaibikh, Amal; Liteplo, Andrew

    2014-05-01

    Treatment failure rates for incision and drainage (I&D) of skin abscesses have increased in recent years and may be attributable to an increased prevalence of community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA). Previous authors have described sonographic features of abscesses, such as the presence of interstitial fluid, characteristics of abscess debris, and depth of abscess cavity. It is possible that the sonographic features are associated with MRSA and can be used to predict the presence of MRSA. The authors describe a potential clinical decision rule (CDR) using sonographic images to predict the presence of CA-MRSA. This was a pilot CDR derivation study using databases from two emergency departments (EDs) of patients presenting to the ED with uncomplicated skin abscesses who underwent I&D and culture of the abscess contents. Patients underwent ultrasound (US) imaging of the abscesses prior to I&D. Abscess contents were sent for culture and sensitivity. Two independent physicians experienced in soft tissue US blinded to the culture results and clinical data reviewed the images in a standardized fashion for the presence or absence of the predetermined image characteristics. In the instance of a disagreement between the initial two investigators, a third reviewer adjudicated the findings prior to analysis. The association between the primary outcome (presence of MRSA) and each sonographic feature was assessed using univariate and multivariate analysis. The reliability of each sonographic feature was measured by calculating the kappa (κ) coefficient of interobserver agreement. The decision tree model for the CDR was created with recursive partitioning using variables that were both reliable and strongly associated with MRSA. Of the total of 2,167 patients who presented with skin and soft tissue infections during the study period, 605 patients met inclusion criteria with US imaging and culture and sensitivity of purulence. Among the pathogenic organisms, MRSA was the most frequently isolated, representing 50.1% of all patients. Six of the sonographic features were associated with the presence of MRSA, but only four of these features were reliable using the kappa analysis. Recursive partitioning identified three independent variables that were both associated with MRSA and reliable: 1) the lack of a well-defined edge, 2) small volume, and 3) irregular or indistinct shape. This decision rule demonstrates a sensitivity of 89.2% (95% confidence interval [CI] = 84.7% to 92.7%), a specificity of 44.7% (95% CI = 40.9% to 47.8%), a positive predictive value of 57.9 (95% CI = 55.0 to 60.2), a negative predictive value of 82.9 (95% CI = 75.9 to 88.5), and an odds ratio (OR) of 7.0 (95% CI = 4.0 to 12.2). According to our putative CDR, patients with skin abscesses that are small, irregularly shaped, or indistinct, with ill-defined edges, are seven times more likely to demonstrate MRSA on culture. © 2014 by the Society for Academic Emergency Medicine.
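
    A generic sketch of how a decision rule can be derived by recursive partitioning is given below; the binary sonographic features and the simulated outcomes are hypothetical stand-ins, not the study's ultrasound data or its fitted rule.

```python
# Sketch of deriving a decision rule by recursive partitioning, with hypothetical
# binary sonographic features (edge definition, volume, shape regularity);
# the data below are simulated, not the study's ED ultrasound data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 400
X = rng.integers(0, 2, size=(n, 3))           # 1 = ill-defined edge, small volume, irregular shape
logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.9 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated MRSA-positive cultures

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["ill_defined_edge", "small_volume", "irregular_shape"]))
```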

  3. Reoperation and readmission after clipping of an unruptured intracranial aneurysm: a National Surgical Quality Improvement Program analysis.

    PubMed

    Dasenbrock, Hormuzdiyar H; Smith, Timothy R; Rudy, Robert F; Gormley, William B; Aziz-Sultan, M Ali; Du, Rose

    2018-03-01

    OBJECTIVE Although reoperation and readmission have been used as quality metrics, there are limited data evaluating the rate of, reasons for, and predictors of reoperation and readmission after microsurgical clipping of unruptured aneurysms. METHODS Adult patients who underwent craniotomy for clipping of an unruptured aneurysm electively were extracted from the prospective National Surgical Quality Improvement Program registry (2011-2014). Multivariable logistic regression and recursive partitioning analysis evaluated the independent predictors of nonroutine hospital discharge, unplanned 30-day reoperation, and readmission. Predictors screened included patient age, sex, comorbidities, American Society of Anesthesiologists (ASA) classification, functional status, aneurysm location, preoperative laboratory values, operative time, and postoperative complications. RESULTS Among the 460 patients evaluated, 4.2% underwent any reoperation at a median of 7 days (interquartile range [IQR] 2-17 days) postoperatively, and 1.1% required a cranial reoperation. The most common reoperation was ventricular shunt placement (23.5%); other reoperations were tracheostomy, craniotomy for hematoma evacuation, and decompressive hemicraniectomy. Independent predictors of any unplanned reoperation were age greater than 51 years and longer operative time (p ≤ 0.04). Readmission occurred in 6.3% of patients at a median of 6 days (IQR 5-13 days) after discharge from the surgical hospitalization; 59.1% of patients were readmitted within 1 week and 86.4% within 2 weeks of discharge. The most common reason for readmission was seizure (26.7%); other causes of readmission included hydrocephalus, cerebrovascular accidents, and headache. Unplanned readmission was independently associated with age greater than 65 years, Class II or III obesity (body mass index > 35 kg/m²), preoperative hyponatremia, and preoperative anemia (p ≤ 0.04). Readmission was not associated with operative time, complications during the surgical hospitalization, length of stay, or discharge disposition. Recursive partitioning analysis identified the same 4 variables, as well as ASA classification, as associated with unplanned readmission. The most potent predictors of nonroutine hospital discharge (16.7%) were postoperative neurological and cardiopulmonary complications; other predictors were age greater than 51 years, preoperative hyponatremia, African American and Asian race, and a complex vertebrobasilar circulation aneurysm. CONCLUSIONS In this national analysis, patient age greater than 65 years, Class II or III obesity, preoperative hyponatremia, and anemia were associated with adverse events, highlighting patients who may be at risk for complications after clipping of unruptured cerebral aneurysms. The preponderance of early readmissions highlights the importance of early surveillance and follow-up after discharge; the frequency of readmission for seizure emphasizes the need for additional data evaluating the utility and duration of postcraniotomy seizure prophylaxis. Moreover, readmission was primarily associated with preoperative characteristics rather than metrics of perioperative care, suggesting that readmission may be a suboptimal indicator of the quality of care received during the surgical hospitalization in this patient population.

  4. Recursion and the Competence/Performance Distinction in AGL Tasks

    ERIC Educational Resources Information Center

    Lobina, David J.

    2011-01-01

    The term "recursion" is used in at least four distinct theoretical senses within cognitive science. Some of these senses in turn relate to the different levels of analysis described by David Marr some 20 years ago; namely, the underlying competence capacity (the "computational" level), the performance operations used in real-time processing (the…

  5. Recursivity: A Working Paper on Rhetoric and "Mnesis"

    ERIC Educational Resources Information Center

    Stormer, Nathan

    2013-01-01

    This essay proposes the genealogical study of remembering and forgetting as recursive rhetorical capacities that enable discourse to place itself in an ever-changing present. "Mnesis" is a meta-concept for the arrangements of remembering and forgetting that enable rhetoric to function. Most of the essay defines the materiality of "mnesis", first…

  6. Recursive Optimization of Digital Circuits

    DTIC Science & Technology

    1990-12-14

    Excerpt from the report's table of contents: Obverse Specification; A.14 Non-MDS Optimization of SAMPLE; Appendix B, BORIS Recursive Optimization System Software: B.1 DESIGN.S File, B.2 PARSE.S File, B.3 TABULAR.S File, B.4 MDS.S File, B.5 COST.S File.

  7. Testing the Relationship between Three-Component Organizational/Occupational Commitment and Organizational/Occupational Turnover Intention Using a Non-Recursive Model

    ERIC Educational Resources Information Center

    Chang, Huo-Tsan; Chi, Nai-Wen; Miao, Min-Chih

    2007-01-01

    This study explored the relationship between three-component organizational/occupational commitment and organizational/occupational turnover intention, and the reciprocal relationship between organizational and occupational turnover intention with a non-recursive model in collectivist cultural settings. We selected 177 nursing staffs out of 30…

  8. TORTIS (Toddler's Own Recursive Turtle Interpreter System).

    ERIC Educational Resources Information Center

    Perlman, Radia

    TORTIS (Toddler's Own Recursive Turtle Interpreter System) is a device which can be used to study or nurture the cognitive development of preschool children. The device consists of a "turtle" which the child can control by use of buttons on a control panel. The "turtle" can be made to move in prescribed directions, to take a…

  9. Recursive Inversion By Finite-Impulse-Response Filters

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1991-01-01

    Recursive approximation gives least-squares best fit to exact response. Algorithm yields finite-impulse-response approximation of unknown single-input/single-output, causal, time-invariant, linear, real system, response of which is sequence of impulses. Applicable to such system-inversion problems as suppression of echoes and identification of target from its scatter response to incident impulse.

  10. An Accelerated Recursive Doubling Algorithm for Block Tridiagonal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K

    2014-01-01

    Block tridiagonal systems of linear equations arise in a wide variety of scientific and engineering applications. The recursive doubling algorithm is a well-known prefix computation-based numerical algorithm that requires O(M^3(N/P + log P)) work to compute the solution of a block tridiagonal system with N block rows and block size M on P processors. In real-world applications, solutions of tridiagonal systems are most often sought with multiple, often hundreds or thousands, of different right hand sides but with the same tridiagonal matrix. Here, we show that the recursive doubling algorithm is sub-optimal when computing solutions of block tridiagonal systems with multiple right hand sides and present a novel algorithm, called the accelerated recursive doubling algorithm, that delivers O(R) improvement when solving block tridiagonal systems with R distinct right hand sides. Since R is typically about 100-1000, this improvement translates to very significant speedups in practice. Detailed complexity analyses of the new algorithm with empirical confirmation of runtime improvements are presented. To the best of our knowledge, this algorithm has not been reported before in the literature.
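
    The recursive doubling pattern underlying the algorithm can be illustrated on the simplest prefix computation, a running sum; the sketch below shows only this generic doubling step, whereas the block tridiagonal solver applies the same idea to products of small block matrices (not shown).

```python
# Sketch of the recursive-doubling (Hillis-Steele) prefix pattern: after about
# log2(N) doubling steps, every position holds the prefix combination of all
# earlier elements. Shown here for prefix sums only.
import numpy as np

def prefix_sum_recursive_doubling(a: np.ndarray) -> np.ndarray:
    x = a.astype(float).copy()
    step = 1
    while step < len(x):
        shifted = np.concatenate([np.zeros(step), x[:-step]])
        x = x + shifted          # each element absorbs the value `step` places back
        step *= 2
    return x

a = np.arange(1, 9)
print(prefix_sum_recursive_doubling(a))   # matches np.cumsum(a)
print(np.cumsum(a))
```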

  11. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
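
    Each mode-conditioned smoother mentioned above builds on the standard fixed-interval Kalman (Rauch-Tung-Striebel) smoothing recursion; a minimal sketch of that single-model recursion is given below, with the interacting (mode-mixing) steps of the proposed method omitted.

```python
# Sketch of the single-model Rauch-Tung-Striebel smoothing recursion that each
# mode-conditioned smoother builds on; the mode-mixing of the actual algorithm
# is not shown.
import numpy as np

def rts_smoother(x_filt, P_filt, x_pred, P_pred, F):
    """x_filt[k], P_filt[k]: filtered mean/covariance at step k.
    x_pred[k], P_pred[k]: one-step predictions of step k made from step k-1.
    F: state transition matrix."""
    n = len(x_filt)
    x_s, P_s = x_filt.copy(), P_filt.copy()
    for k in range(n - 2, -1, -1):
        G = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])    # smoother gain
        x_s[k] = x_filt[k] + G @ (x_s[k + 1] - x_pred[k + 1])
        P_s[k] = P_filt[k] + G @ (P_s[k + 1] - P_pred[k + 1]) @ G.T
    return x_s, P_s

# toy check with a one-dimensional state (F = 1)
F = np.array([[1.0]])
x_f = np.array([[0.9], [1.1], [1.0]]); P_f = np.array([[[0.5]], [[0.4]], [[0.3]]])
x_p = np.array([[0.0], [0.9], [1.1]]); P_p = np.array([[[1.0]], [[0.6]], [[0.5]]])
xs, Ps = rts_smoother(x_f, P_f, x_p, P_p, F)
print(xs.ravel())
```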

  12. Do we represent intentional action as recursively embedded? The answer must be empirical. A comment on Vicari and Adenzato (2014).

    PubMed

    Martins, Mauricio D; Fitch, W Tecumseh

    2015-12-15

    The relationship between linguistic syntax and action planning is of considerable interest in cognitive science because many researchers suggest that "motor syntax" shares certain key traits with language. In a recent manuscript in this journal, Vicari and Adenzato (henceforth VA) critiqued Hauser, Chomsky and Fitch's 2002 (henceforth HCF's) hypothesis that recursion is language-specific, and that its usage in other domains is parasitic on language resources. VA's main argument is that HCF's hypothesis is falsified by the fact that recursion typifies the structure of intentional action, and recursion in the domain of action is independent of language. Here, we argue that VA's argument is incomplete, and that their formalism can be contrasted with alternative frameworks that are equally consistent with existing data. Therefore their conclusions are premature without further empirical testing and support. In particular, to accept VA's argument it would be necessary to demonstrate both that humans in fact represent self-embedding in the structure of intentional action, and that language is not used to construct these representations. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing

    DTIC Science & Technology

    2010-01-01

    partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic = position 1, etc.) and within each collection ... movement was modeled by a Brownian-motion three-dimensional random walk. The one-dimensional diffusion coefficient D for the ellipsoid shape with 3 ... temperature, k_B is Boltzmann's constant, and η is the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three

  14. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    PubMed

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, applied recursively and with a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We tested this novel recursive combinational method, with median replacement, to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and over removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and an SD2 value of 36 ms to be clinical indicators discriminating the normal cases from the CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, since Lyapunov exponents calculated after the proposed pre-processing are modified in a way that starts to follow the notion of less complex behaviour in diseased states.
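
    A simplified sketch of the recursive replacement idea is shown below: beats deviating by more than 20% from a local median are replaced by that median and the pass is repeated until nothing changes. The kernel size, threshold handling and example RR series are illustrative assumptions, not the authors' exact combination filter.

```python
# Simplified sketch of recursive artefact replacement in an RR-interval series;
# this illustrates the "replace rather than remove, and repeat" idea only.
import numpy as np
from scipy.signal import medfilt

def recursive_20pct_filter(rr: np.ndarray, max_passes: int = 10) -> np.ndarray:
    rr = rr.astype(float).copy()
    for _ in range(max_passes):
        local_median = medfilt(rr, kernel_size=5)
        bad = np.abs(rr - local_median) > 0.2 * local_median
        if not bad.any():
            break
        rr[bad] = local_median[bad]        # replace rather than remove
    return rr

rr = np.array([800, 810, 795, 1600, 805, 790, 400, 800, 810], dtype=float)
clean = recursive_20pct_filter(rr)
print(clean, clean.std(ddof=1))            # SDNN-style estimate after pre-processing
```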

  15. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.

  16. Optimizing classification performance in an object-based very-high-resolution land use-land cover urban application

    NASA Astrophysics Data System (ADS)

    Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore

    2017-10-01

    This study evaluates the impact of three Feature Selection (FS) algorithms in an Object Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracies of the SVM and KNN classifiers are the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed the best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and easing interpretation.

  17. Using GIS to generate spatially balanced random survey designs for natural resource applications.

    PubMed

    Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B

    2007-07-01

    Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demands survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers with a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
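
    A simplified illustration of the underlying idea, hierarchical randomized quadrant addressing followed by a systematic sample along the resulting order, is sketched below; the published RRQRR/GRTS designs involve additional steps (per-node randomization, unequal inclusion probabilities, reverse hierarchical ordering) that are omitted here.

```python
# Simplified illustration of spatially balanced sampling: each grid cell gets a
# base-4 hierarchical quadrant address whose digit labels are randomly permuted
# at every level, cells are sorted by that address, and a systematic sample is
# taken along the resulting order. A sketch of the idea only.
import numpy as np

rng = np.random.default_rng(42)
levels, n_sample = 5, 16                             # 32 x 32 grid, 16 sample sites
perms = [rng.permutation(4) for _ in range(levels)]  # random quadrant order per level

def address(ix, iy):
    # most significant quadrant first, with labels permuted at each level
    return tuple(
        perms[d][2 * ((ix >> (levels - 1 - d)) & 1) + ((iy >> (levels - 1 - d)) & 1)]
        for d in range(levels)
    )

cells = [(ix, iy) for ix in range(2**levels) for iy in range(2**levels)]
order = sorted(cells, key=lambda c: address(*c))
step = len(cells) // n_sample
sample = [order[i * step + int(rng.integers(step))] for i in range(n_sample)]
print(sample)                                        # 16 spatially spread grid cells
```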

  18. Recursive mass matrix factorization and inversion: An operator approach to open- and closed-chain multibody dynamics

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.

    1988-01-01

    This report advances a linear operator approach for analyzing the dynamics of systems of joint-connected rigid bodies. It is established that the mass matrix M for such a system can be factored as M = (I + HφL)D(I + HφL)^T. This yields an immediate inversion M^(-1) = (I - HψL)^T D^(-1)(I - HψL), where H and φ are given by known link geometric parameters, and L, ψ and D are obtained recursively by a spatial discrete-step Kalman filter and by the corresponding Riccati equation associated with this filter. The factors (I + HφL) and (I - HψL) are lower triangular matrices which are inverses of each other, and D is a diagonal matrix. This factorization and inversion of the mass matrix leads to recursive algorithms for forward dynamics based on spatially recursive filtering and smoothing. The primary motivation for advancing the operator approach is to provide a better means to formulate, analyze and understand spatial recursions in multibody dynamics. This is achieved because the linear operator notation allows manipulation of the equations of motion using a very high-level analytical framework (a spatial operator algebra) that is easy to understand and use. Detailed lower-level recursive algorithms can readily be obtained for inspection from the expressions involving spatial operators. The report consists of two main sections. In Part 1, the problem of serial chain manipulators is analyzed and solved. Extensions to a closed-chain system formed by multiple manipulators moving a common task object are contained in Part 2. To retain ease of exposition in the report, only these two types of multibody systems are considered. However, the same methods can be easily applied to arbitrary multibody systems formed by a collection of joint-connected rigid bodies.

  19. COMMUNITY-RANDOMIZED INTERVENTION TRIAL WITH UV DISINFECTION FOR ESTIMATING THE RISK OF PEDIATRIC ILLNESS FROM MUNICIPAL GROUNDWATER CONSUMPTION

    EPA Science Inventory

    The goal of this study is to estimate the risk of childhood febrile and gastrointestinal illnesses associated with drinking municipal water from a groundwater source. The risk estimate will be partitioned into two separate components— illness attributable to contaminated...

  20. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954

  1. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
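
    The second family described above, the piecewise exponential model fit as a Poisson GLM with a log-exposure offset, can be sketched as follows on simulated single-level data; the interval cut points, covariate and data are illustrative assumptions, and the multilevel version discussed in the tutorial would add cluster-specific random effects via a mixed model.

```python
# Minimal sketch of the piecewise exponential survival model fit as a Poisson
# GLM with a log-exposure offset, on simulated single-level data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
age = rng.normal(65, 10, n)
t = rng.exponential(scale=np.exp(5 - 0.02 * (age - 65)), size=n)  # survival times
c = rng.exponential(scale=150, size=n)                            # censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)

# split follow-up into intervals; one row per person-interval with its exposure
cuts = [0, 30, 90, 180, np.inf]
rows = []
for ti, ev, a in zip(time, event, age):
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if ti <= lo:
            break
        rows.append({"interval": f"[{lo},{hi})", "exposure": min(ti, hi) - lo,
                     "death": int(ev and ti <= hi), "age": a})
df = pd.DataFrame(rows)

# constant hazard within each interval <=> Poisson regression with log exposure offset
fit = smf.glm("death ~ C(interval) + age", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["exposure"])).fit()
print(fit.summary())
```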

  2. BPS/CFT Correspondence III: Gauge Origami Partition Function and qq-Characters

    NASA Astrophysics Data System (ADS)

    Nekrasov, Nikita

    2018-03-01

    We study generalized gauge theories engineered by taking the low energy limit of the Dp branes wrapping X × T^(p-3), with X a possibly singular surface in a Calabi-Yau fourfold Z. For toric Z and X the partition function can be computed by localization, making it a statistical mechanical model, called the gauge origami. The random variables are the ensembles of Young diagrams. The building block of the gauge origami is associated with a tetrahedron, whose edges are colored by vector spaces. We show the properly normalized partition function is an entire function of the Coulomb moduli, for generic values of the Ω-background parameters. The orbifold version of the theory defines the qq-character operators, with and without the surface defects. The analytic properties are the consequence of a relative compactness of the moduli spaces M(\vec{n}, k) of crossed and spiked instantons, demonstrated in "BPS/CFT correspondence II: instantons at crossroads, moduli and compactness theorem".

  3. Generalization of multifractal theory within quantum calculus

    NASA Astrophysics Data System (ADS)

    Olemskoi, A.; Shuda, I.; Borisyuk, V.

    2010-03-01

    On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τ_q = D_q(q-1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
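
    The classical relation τ_q = D_q(q-1) that the paper generalizes can be checked numerically for the binomial (Cantor-type) measure mentioned as an example; the sketch below computes the ordinary partition function for that measure and is not the q-deformed generalization developed in the paper.

```python
# Numerical sketch of the standard multifractal partition function for a binomial
# measure on dyadic intervals, checking the classical relation tau_q = D_q (q - 1).
import numpy as np
from math import comb

p, n = 0.3, 14                      # binomial weight and refinement level (eps = 2**-n)
q_values = [-2.0, 0.0, 2.0, 3.0]    # q = 1 excluded to avoid dividing by q - 1 = 0

# masses of the 2**n dyadic cells, grouped by how many times the weight p appears
k = np.arange(n + 1)
counts = np.array([comb(n, int(i)) for i in k])
masses = p**k * (1 - p)**(n - k)

for q in q_values:
    Z = np.sum(counts * masses**q)             # partition function sum_i mu_i^q
    tau = np.log2(Z) / (-n)                    # Z ~ eps^tau with eps = 2**-n
    D = tau / (q - 1)                          # generalized dimension D_q
    tau_exact = -np.log2(p**q + (1 - p)**q)
    print(f"q={q:+.1f}  tau={tau:+.4f}  D_q={D:+.4f}  tau_exact={tau_exact:+.4f}")
```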

  4. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  5. Using Recursive Regression to Explore Nonlinear Relationships and Interactions: A Tutorial Applied to a Multicultural Education Study

    ERIC Educational Resources Information Center

    Strang, Kenneth David

    2009-01-01

    This paper discusses how a seldom-used statistical procedure, recursive regression (RR), can numerically and graphically illustrate data-driven nonlinear relationships and interaction of variables. This routine falls into the family of exploratory techniques, yet a few interesting features make it a valuable compliment to factor analysis and…

  6. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999) ... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung ... mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended

  7. N = 4 supergravity next-to-maximally-helicity-violating six-point one-loop amplitude

    NASA Astrophysics Data System (ADS)

    Dunbar, David C.; Perkins, Warren B.

    2016-12-01

    We construct the six-point, next-to-maximally-helicity-violating one-loop amplitude in N = 4 supergravity using unitarity and recursion. The use of recursion requires the introduction of rational descendants of the cut-constructible pieces of the amplitude and the computation of the nonstandard factorization terms arising from the loop integrals.

  8. On the design of recursive digital filters

    NASA Technical Reports Server (NTRS)

    Shenoi, K.; Narasimha, M. J.; Peterson, A. M.

    1976-01-01

    A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.
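
    For illustration only, the sketch below designs a low-pass recursive (IIR) filter with SciPy's standard rational-approximation routines and applies its difference-equation recursion; it does not reproduce the paper's change-of-variables design technique, and the sampling rate, order and ripple values are arbitrary assumptions.

```python
# Hedged sketch: a low-pass recursive (IIR) filter designed with SciPy's standard
# routines, only to show the kind of filter being designed.
import numpy as np
from scipy import signal

fs = 1000.0                       # sampling rate in Hz (assumed for illustration)
b, a = signal.iirfilter(N=4, Wn=100.0, btype="low", ftype="cheby1",
                        rp=0.5, fs=fs)          # 4th-order Chebyshev-I low-pass

# lfilter runs the recursive difference equation defined by (b, a)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
y = signal.lfilter(b, a, x)       # the 300 Hz component is strongly attenuated
```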

  9. Refinement Types ML

    DTIC Science & Technology

    1994-03-16

    2.10 Decidability ... 3 Declaring Refinements of Recursive Data Types ... However, when we introduce polymorphic constructors in Chapter 5, tuples will become a polymorphic data type very similar to other polymorphic data types ... Chapter 3, Declaring Refinements of Recursive Data Types, 3.1 Introduction: The previous chapter defined refinement type inference in terms of

  10. Becoming with Data: Developing Self-Assessing Recursive Pedagogies in Schools and Using Second-Order Cybernetics as a Thinking Tool

    ERIC Educational Resources Information Center

    Reinertsen, Anne Beate

    2014-01-01

    This article is about developing school-based self-assessing recursive pedagogies and case/action research practices and/or approaches in schools, and teachers, teacher researchers and researchers simultaneously producing and theorising their own practices using second-order cybernetics as a thinking tool. It is a move towards pragmatic…

  11. Recursive restriction estimation: an alternative to post-stratification in surveys of land and forest cover

    Treesearch

    Raymond L. Czaplewski

    2010-01-01

    Numerous government surveys of natural resources use Post-Stratification to improve statistical efficiency, where strata are defined by full-coverage, remotely sensed data and geopolitical boundaries. Recursive Restriction Estimation, which may be considered a special case of the static Kalman filter, is an attractive alternative. It decomposes a complex estimation...

  12. "There and Back Again" in the Writing Classroom: A Graduate Student's Recursive Journey through Pedagogical Research and Theory Development

    ERIC Educational Resources Information Center

    Mori, Miki

    2013-01-01

    This article discusses my (recursive) process of theory building and the relationship between research, teaching, and theory development for graduate students. It shows how graduate students can reshape their conceptual frameworks not only through course work, but also through researching classes they teach. Specifically, while analyzing the…

  13. Semantics Boosts Syntax in Artificial Grammar Learning Tasks with Recursion

    ERIC Educational Resources Information Center

    Fedor, Anna; Varga, Mate; Szathmary, Eors

    2012-01-01

    Center-embedded recursion (CER) in natural language is exemplified by sentences such as "The malt that the rat ate lay in the house." Parsing center-embedded structures is in the focus of attention because this could be one of the cognitive capacities that make humans distinct from all other animals. The ability to parse CER is usually…

  14. A Cognitive Processing Account of Individual Differences in Novice Logo Programmers' Conceptualisation and Use of Recursion.

    ERIC Educational Resources Information Center

    Gibbons, Pamela

    1995-01-01

    Describes a study that investigated individual differences in the construction of mental models of recursion in LOGO programming. The learning process was investigated from the perspective of Norman's mental models theory and employed diSessa's ontology regarding distributed, functional, and surrogate mental models, and the Luria model of brain…

  15. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  16. Event-based recursive filtering for a class of nonlinear stochastic parameter systems over fading channels

    NASA Astrophysics Data System (ADS)

    Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2018-07-01

    In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound and such an upper bound is then minimized by appropriately choosing filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.

  17. Deciding Termination for Ancestor Match- Bounded String Rewriting Systems

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2005-01-01

    Termination of a string rewriting system can be characterized by termination on suitable recursively defined languages. This kind of termination criteria has been criticized for its lack of automation. In an earlier paper we have shown how to construct an automated termination criterion if the recursion is aligned with the rewrite relation. We have demonstrated the technique with Dershowitz's forward closure criterion. In this paper we show that a different approach is suitable when the recursion is aligned with the inverse of the rewrite relation. We apply this idea to Kurth's ancestor graphs and obtain ancestor match-bounded string rewriting systems. Termination is shown to be decidable for this class. The resulting method improves upon those based on match-boundedness or inverse match-boundedness.

  18. A Note on Local Stability Conditions for Two Types of Monetary Models with Recursive Utility

    NASA Astrophysics Data System (ADS)

    Miyazaki, Kenji; Utsunomiya, Hitoshi

    2009-09-01

    This note explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. Although Chen et al. [Chen, B.-L., M. Hsu, and C.-H. Lin, 2008, Inflation and growth: impatience and a qualitative equivalent, Journal of Money, Credit, and Banking, Vol. 40, No. 6, 1310-1323] investigated the relationship between inflation and growth in MIUF and TC models with recursive utility, they conducted only a comparative static analysis in a steady state. By establishing sufficient conditions for local stability, this note proves that impatience should be increasing in consumption and real balances. Increasing impatience, although less plausible from an empirical point of view, receives more support from a theoretical viewpoint.

  19. Data Randomization and Cluster-Based Partitioning for Botnet Intrusion Detection.

    PubMed

    Al-Jarrah, Omar Y; Alhussein, Omar; Yoo, Paul D; Muhaidat, Sami; Taha, Kamal; Kim, Kwangjo

    2016-08-01

    Botnets, which consist of remotely controlled compromised machines called bots, provide a distributed platform for several threats against cyber world entities and enterprises. Intrusion detection system (IDS) provides an efficient countermeasure against botnets. It continually monitors and analyzes network traffic for potential vulnerabilities and possible existence of active attacks. A payload-inspection-based IDS (PI-IDS) identifies active intrusion attempts by inspecting transmission control protocol and user datagram protocol packet's payload and comparing it with previously seen attacks signatures. However, the PI-IDS abilities to detect intrusions might be incapacitated by packet encryption. Traffic-based IDS (T-IDS) alleviates the shortcomings of PI-IDS, as it does not inspect packet payload; however, it analyzes packet header to identify intrusions. As the network's traffic grows rapidly, not only the detection-rate is critical, but also the efficiency and the scalability of IDS become more significant. In this paper, we propose a state-of-the-art T-IDS built on a novel randomized data partitioned learning model (RDPLM), relying on a compact network feature set and feature selection techniques, simplified subspacing and a multiple randomized meta-learning technique. The proposed model has achieved 99.984% accuracy and 21.38 s training time on a well-known benchmark botnet dataset. Experiment results demonstrate that the proposed methodology outperforms other well-known machine-learning models used in the same detection task, namely, sequential minimal optimization, deep neural network, C4.5, reduced error pruning tree, and randomTree.
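
    A generic stand-in for a randomized data-partitioned ensemble is sketched below using scikit-learn bagging with random sample partitions and random feature subspaces combined by voting; it is not the authors' RDPLM, and the synthetic data replace the benchmark botnet traffic.

```python
# Generic sketch of a randomized data-partitioned ensemble: base learners are
# trained on random sample partitions and random feature subspaces, then combined
# by voting. This is a simple stand-in, not the RDPLM described in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12, random_state=0)
model = BaggingClassifier(
    n_estimators=50,       # default base learner is a decision tree
    max_samples=0.2,       # each learner sees a random 20% partition of the records
    max_features=0.5,      # ... and a random half of the (header-derived) features
    random_state=0,
)
print(cross_val_score(model, X, y, cv=5).mean())
```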

  20. A Lego Mindstorms NXT based test bench for multiagent exploratory systems and distributed network partitioning

    NASA Astrophysics Data System (ADS)

    Patil, Riya Raghuvir

    Networks of communicating agents require distributed algorithms for a variety of tasks in the field of network analysis and control. For applications such as swarms of autonomous vehicles, ad hoc and wireless sensor networks, and military and civilian missions such as exploration and patrolling, a robust autonomous system that uses a distributed algorithm for self-partitioning can be significantly helpful. A single team of autonomous vehicles in a field may need to self-dissemble into multiple teams in order to complete multiple control tasks. Moreover, because communicating agents are subject to changes, namely the addition or failure of an agent or link, a distributed or decentralized algorithm is preferable to relying on a central agent. A framework for studying the self-partitioning of such multi-agent systems with the most basic mobility model not only saves time in conception but also yields a cost-effective prototype without compromising the physical realization of the proposed idea. In this thesis I present my work on the implementation of a flexible, distributed stochastic partitioning algorithm on the Lego Mindstorms NXT, using National Instruments' LabVIEW graphical programming platform to form a team of communicating agents via the NXT-Bee radio module. We single out mobility, communication and self-partitioning as the core elements of the work. The goal is to randomly explore a precinct for reference sites. Agents that have discovered reference sites announce their target acquisition and form a network based upon the distances between agents, within which the self-partitioning proceeds to find an optimal partition. Further, to illustrate the work, an experimental test bench of five Lego NXT robots is presented.

  1. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1 and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation for covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in a t-test for each parameter. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.

  2. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate science and geoscience using massive observed and modeled datasets. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue that arises, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two deep neural network models: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model enhances the resolution of the input to the localization CNNs. We present a network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase the resolution of the data.

  3. A Model of Self-Explanation Strategies of Instructional Text and Examples in the Acquisition of Programming Skills.

    ERIC Educational Resources Information Center

    Recker, Margaret M.; Pirolli, Peter

    Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…

  4. Closed-form recursive formula for an optimal tracker with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Turner, J. D.; Chun, H. M.

    1984-01-01

    Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. Two examples are given to illustrate the validity and usefulness of the formulations.
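    As a hedged sketch of the recursive structure such feedback laws share, the code below runs a backward Riccati sweep that produces time-varying gains for a finite-horizon regulator; the tracking terms and terminal constraints treated in the paper are omitted.

```python
# Simplified sketch: backward Riccati recursion producing time-varying
# feedback gains for a finite-horizon LQ problem (tracking terms and the
# paper's terminal constraints are omitted).
import numpy as np

def finite_horizon_gains(A, B, Q, R, Qf, N):
    P = Qf.copy()
    gains = []
    for _ in range(N):                      # sweep backward from k = N-1 to 0
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                      # reorder as K_0, ..., K_{N-1}

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); Qf = 10 * np.eye(2)
Ks = finite_horizon_gains(A, B, Q, R, Qf, N=50)

x = np.array([1.0, 0.0])                    # closed-loop rollout, u_k = -K_k x_k
for K in Ks:
    x = (A - B @ K) @ x
print("terminal state:", x)
```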

  5. Relatively Recursive Rational Choice.

    DTIC Science & Technology

    1981-11-01

    for the decision procedure of recursively representable rational choice. Alternatively phrased, we wish to inquire into its degrees of unsolvability. We...may first make the observation that there are three classic notions of reducibility of decision procedures for subsets of the natural numbers... rational choice function defined as an effectively computable representation of Richter's [1971] concept of rational choice, attains by means of an

  6. Recursive inversion of externally defined linear systems

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1988-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
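    A hedged sketch of the underlying idea: build the triangular Toeplitz convolution matrix from the impulse response and solve a least-squares problem for an FIR approximate inverse. A direct solver is used here in place of the exact recursive initialization developed in the paper.

```python
# Hedged sketch: FIR approximate inverse of a system known only through its
# impulse response, via the triangular Toeplitz convolution matrix. A direct
# least-squares solve is used here instead of the paper's recursive scheme.
import numpy as np
from scipy.linalg import toeplitz

def fir_inverse(h, n_taps, delay=0):
    """Least-squares FIR g such that (h * g)[k] ~ delta[k - delay]."""
    n = len(h) + n_taps - 1
    col = np.r_[h, np.zeros(n_taps - 1)]
    H = toeplitz(col, np.zeros(n_taps))      # lower-triangular Toeplitz
    d = np.zeros(n); d[delay] = 1.0          # desired impulse (with delay)
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g

h = np.array([1.0, 0.6, 0.3, 0.1])           # example impulse response
g = fir_inverse(h, n_taps=16, delay=2)
print("combined response:", np.round(np.convolve(h, g)[:6], 3))
```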

  7. The Recursive Process in and of Critical Literacy: Action Research in an Urban Elementary School

    ERIC Educational Resources Information Center

    Cooper, Karyn; White, Robert E.

    2012-01-01

    This paper provides an overview of the recursive process of initiating an action research project on literacy for students-at-risk in a Canadian urban elementary school. As this paper demonstrates, this requires development of a school-wide framework, which frames the action research project and desired outcomes, and a shared ownership of this…

  8. Centre-Embedded Structures Are a By-Product of Associative Learning and Working Memory Constraints: Evidence from Baboons ("Papio Papio")

    ERIC Educational Resources Information Center

    Rey, Arnaud; Perruchet, Pierre; Fagot, Joel

    2012-01-01

    Influential theories have claimed that the ability for recursion forms the computational core of human language faculty distinguishing our communication system from that of other animals (Hauser, Chomsky, & Fitch, 2002). In the present study, we consider an alternative view on recursion by studying the contribution of associative and working…

  9. Practical recursive solution of degenerate Rayleigh-Schroedinger perturbation theory and application to high-order calculations of the Zeeman effect in hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstone, H.J.; Moats, R.K.

    1981-04-01

    With the aim of high-order calculations, a new recursive solution for the degenerate Rayleigh-Schroedinger perturbation-theory wave function and energy has been derived. The final formulas, $\chi^{(N)}_{\sigma} = R^{(-\sigma)}\sum_{k=0}^{N-1} H^{(\sigma+1+k)}_{\sigma+1}\,\chi^{(N-1-k)}$ and $E^{(N+\sigma)} = \langle 0|H^{(N+\sigma)}_{\sigma+1}|0\rangle + \langle 0|\sum_{k=0}^{N-2} H^{(\sigma+1+k)}_{\sigma+1}|\chi^{(N-1-k)}\rangle$, which involve new Hamiltonian-related operators of the form $H^{(\sigma+k)}_{\sigma}$, strongly resemble the standard nondegenerate recursive formulas. As an illustration, the perturbed energy coefficients for the 3s-3d$_0$ states of hydrogen in the Zeeman effect have been calculated recursively through 87th order in the square of the magnetic field. Our treatment is compared with that of Hirschfelder and Certain (J. Chem. Phys. 60, 1118 (1974)), and some relative advantages of each are pointed out.

  10. Niche partitioning in the cestode communities of two elasmobranchs

    Treesearch

    M. M. Friggens; J. H. Brown

    2005-01-01

    Several randomization methods have been used to investigate the influence of competitive interactions in shaping parasite community structure. Marine fish parasite communities have often been regarded as unstructured assemblages with little or no resource limitation and, therefore, not prone to competitive influences. In this study, null models were used to assess the...

  11. Experimental evaluation of a recursive model identification technique for type 1 diabetes.

    PubMed

    Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E

    2009-09-01

    A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive, or batch methods, and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% more accurate than the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively. 
In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
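    As a generic, hedged illustration of the recursive identification idea (not the study's models or data), the sketch below updates a low-order ARX model with exponentially weighted recursive least squares and compares its one-step prediction error with a zero-order-hold baseline on synthetic data.

```python
# Generic sketch (not the study's models or data): recursive ARX identification
# with exponentially weighted RLS, compared against a zero-order-hold baseline.
import numpy as np

rng = np.random.default_rng(1)
T = 500
u = rng.normal(size=T)                       # exogenous input (synthetic)
y = np.zeros(T)
for k in range(2, T):                        # true ARX(2,1) system + noise
    y[k] = 1.5 * y[k-1] - 0.6 * y[k-2] + 0.8 * u[k-1] + 0.1 * rng.normal()

n_params = 3                                 # regressors: [y[k-1], y[k-2], u[k-1]]
theta = np.zeros(n_params)
P = 1e3 * np.eye(n_params)
lam = 0.99                                   # forgetting factor

arx_err, zoh_err = [], []
for k in range(2, T):
    phi = np.array([y[k-1], y[k-2], u[k-1]])
    y_hat = phi @ theta                      # one-step-ahead ARX prediction
    arx_err.append(y[k] - y_hat)
    zoh_err.append(y[k] - y[k-1])            # ZOH: hold the last value
    # RLS update of the parameter vector and covariance
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y[k] - y_hat)
    P = (P - np.outer(gain, phi) @ P) / lam

rmse = lambda e: float(np.sqrt(np.mean(np.square(e[50:]))))  # skip transient
print("ARX RMSE:", rmse(np.array(arx_err)), " ZOH RMSE:", rmse(np.array(zoh_err)))
```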

  12. Recursive Bayesian recurrent neural networks for time-series modeling.

    PubMed

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  13. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
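    A hedged numerical sketch of the kind of forward-looking recursion these preferences involve: an Epstein-Zin-style value recursion on a finite Markov chain, solved by fixed-point iteration. The functional form, parameters, and chain are illustrative assumptions, not the continuation-value specification analyzed in the paper.

```python
# Illustrative sketch: an Epstein-Zin-style recursive utility fixed point on a
# finite Markov chain, solved by iteration. The functional form and parameters
# are assumptions for illustration, not the paper's specification.
import numpy as np

P = np.array([[0.9, 0.1],                  # Markov transition matrix
              [0.2, 0.8]])
c = np.array([1.0, 1.3])                   # consumption in each state
beta, rho, gamma = 0.96, 0.5, 5.0          # discounting, EIS parameter, risk aversion

def T(V):
    """One application of the recursive-utility operator."""
    certainty_equiv = (P @ V**(1.0 - gamma))**((1.0 - rho) / (1.0 - gamma))
    return ((1.0 - beta) * c**(1.0 - rho) + beta * certainty_equiv)**(1.0 / (1.0 - rho))

V = np.ones_like(c)
for _ in range(2000):                      # fixed-point iteration
    V_new = T(V)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
print("state utilities:", V)
```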

  14. On the Dynamics of an Incursion Describing the Interactions between Functionally Differentiated Subsystems of a Discrete-time Anticipatory System

    NASA Astrophysics Data System (ADS)

    Burke, Mark E.

    2010-11-01

    Dubois coined the term incursion, for an inclusive or implicit recursion, to describe a discrete-time anticipatory system which computes its future states by reference to its future states as well as its current and past states. In this paper, we look at a model which has been proposed in the context of a social system which has functionally differentiated subsystems. The model is derived from a discrete-time compartmental SIS epidemic model. We analyse a low order instance of the model both in its form as a recursion with no anticipatory capacity, and also as an incursion with associated anticipatory capacity. The properties of the incursion are compared and contrasted with those of the underlying recursion.
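    To make the distinction concrete, here is a hedged toy sketch (not the paper's compartmental model): a discrete-time SIS-type update in its ordinary recursive form, and an incursive variant in which the next state appears on both sides of the update and is obtained by solving the implicit equation at every step.

```python
# Toy sketch (not the paper's model): a recursive SIS-type update versus an
# incursive variant, where the next state appears on both sides of the update
# and is obtained by fixed-point iteration at every time step.
import numpy as np

beta, gamma = 0.8, 0.3                      # infection / recovery rates

def recursion_step(x):
    """Ordinary recursion: x[t+1] computed from x[t] only."""
    return x + beta * x * (1 - x) - gamma * x

def incursion_step(x, iters=100):
    """Incursion: x[t+1] = x[t] + beta*x[t+1]*(1-x[t+1]) - gamma*x[t+1]."""
    z = x                                   # initial guess for x[t+1]
    for _ in range(iters):
        z = x + beta * z * (1 - z) - gamma * z
    return z

traj_rec, traj_inc = [], []
x_rec = x_inc = 0.05
for t in range(60):
    x_rec = recursion_step(x_rec)
    x_inc = incursion_step(x_inc)
    traj_rec.append(x_rec)
    traj_inc.append(x_inc)
print("recursive, first 5 steps:", np.round(traj_rec[:5], 4))
print("incursive, first 5 steps:", np.round(traj_inc[:5], 4))
```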

  15. An iterative approach to region growing using associative memories

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Cowart, A.

    1983-01-01

    Region growing is often given as a classical example of a recursive control structure used in image processing that is awkward to implement in hardware when the intent is to segment an image at raster scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier which signifies the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware, transparent to the computer that reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
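    A hedged software analogue of this idea (not the paper's hardware design) is sketched below: a single raster-scan pass assigns provisional labels and records label equivalences in a table, and a second pass resolves them, so no recursion is needed.

```python
# Software analogue of iterative region labeling (a sketch, not the paper's
# hardware design): one raster-scan pass assigns provisional labels and records
# equivalences in a table; a second pass resolves them. No recursion is used.
import numpy as np

def label_regions(img):
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                               # equivalence table (union-find)

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    h, w = img.shape
    for r in range(h):                         # raster-scan pass
        for c in range(w):
            if not img[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)      # new provisional label
                labels[r, c] = next_label
                next_label += 1
            elif up and left and up != left:   # record equivalence
                a, b = find(up), find(left)
                parent[max(a, b)] = min(a, b)
                labels[r, c] = min(a, b)
            else:
                labels[r, c] = up or left
    for r in range(h):                         # resolution pass
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]])
print(label_regions(img))
```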

  16. [Formula: see text]-regularized recursive total least squares based sparse system identification for the error-in-variables.

    PubMed

    Lim, Jun-Seok; Pang, Hee-Suk

    2016-01-01

    In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms degrades when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS-like iteration using [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the inversion matrix. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS in the sparse system identification setting.

  17. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  18. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Jain, A.

    1989-01-01

    A spatial operator algebra for modeling the control and trajectory design of manipulators is discussed, with emphasis on its analytical formulation and implementation in the Ada programming language. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of the manipulator. Inversion is obtained using techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator as well as control and trajectory design algorithms. Implementable recursive algorithms can be derived immediately from the abstract operator expressions by inspection, thus greatly simplifying the transition from an abstract problem formulation and solution to the detailed mechanization of a specific algorithm.

  19. a Recursive Approach to Compute Normal Forms

    NASA Astrophysics Data System (ADS)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  20. Caustics, counting maps and semi-classical asymptotics

    NASA Astrophysics Data System (ADS)

    Ercolani, N. M.

    2011-02-01

    This paper develops a deeper understanding of the structure and combinatorial significance of the partition function for Hermitian random matrices. The coefficients of the large N expansion of the logarithm of this partition function, also known as the genus expansion (and its derivatives), are generating functions for a variety of graphical enumeration problems. The main results are to prove that these generating functions are, in fact, specific rational functions of a distinguished irrational (algebraic) function, z0(t). This distinguished function is itself the generating function for the Catalan numbers (or generalized Catalan numbers, depending on the choice of weight of the parameter t). It is also a solution of the inviscid Burgers equation for certain initial data. The shock formation, or caustic, of the Burgers characteristic solution is directly related to the poles of the rational forms of the generating functions. As an intriguing application, one gains new insights into the relation between certain derivatives of the genus expansion, in a double-scaling limit, and the asymptotic expansion of the first Painlevé transcendent. This provides a precise expression of the Painlevé asymptotic coefficients directly in terms of the coefficients of the partial fractions expansion of the rational form of the generating functions established in this paper. Moreover, these insights point towards a more general program relating the first Painlevé hierarchy to the higher order structure of the double-scaling limit through the specific rational structure of generating functions in the genus expansion. The paper closes with a discussion of the relation of this work to recent developments in understanding the asymptotics of graphical enumeration. As a by-product, these results also yield new information about the asymptotics of recurrence coefficients for orthogonal polynomials with respect to exponential weights, the calculation of correlation functions for certain tied random walks on a 1D lattice, and the large time asymptotics of random matrix partition functions.
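    As a small concrete illustration of the combinatorial object mentioned above, the sketch below expands the Catalan-number generating function, which satisfies z(t) = 1 + t z(t)^2, by iterating that functional equation on truncated power series and checks the coefficients against the direct Catalan recursion. The normalization is the textbook one, chosen for illustration rather than to match the paper's choice of weight.

```python
# Illustration (textbook normalization, not necessarily the paper's weights):
# the Catalan generating function z(t) satisfies z = 1 + t*z(t)^2. We iterate
# this functional equation on truncated power series and compare coefficients
# with the direct Catalan recursion C[n+1] = sum_k C[k]*C[n-k].
import numpy as np

def poly_mul_trunc(a, b, n):
    """Product of two coefficient arrays, truncated to degree n-1."""
    return np.convolve(a, b)[:n]

def catalan_gf_coeffs(n):
    z = np.zeros(n); z[0] = 1.0              # start from z = 1
    for _ in range(n):                       # fixed-point iteration: z <- 1 + t*z^2
        z2 = poly_mul_trunc(z, z, n)
        z_new = np.zeros(n)
        z_new[0] = 1.0
        z_new[1:] = z2[:-1]                  # multiplication by t shifts degrees
        z = z_new
    return z

def catalan_recursion(n):
    C = [1]
    for m in range(n - 1):
        C.append(sum(C[k] * C[m - k] for k in range(m + 1)))
    return C

n = 10
print("series coefficients :", catalan_gf_coeffs(n).astype(int))
print("Catalan recursion   :", catalan_recursion(n))
```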

  1. Report to the High Order Language Working Group (HOLWG)

    DTIC Science & Technology

    1977-01-14

    as running, runnable, suspended or dormant, may be synchronized by semaphore variables, may be scheduled using clock and duration data types and mpy...Recursive and non-recursive routines G6. Parallel processes, synchronization, critical regions G7. User defined parameterized exception handling G8...typed and lacks extensibility, parallel processing, synchronization and real-time features. Overall Evaluation IBM strongly recommended PL/I as a

  2. Recursive inversion of externally defined linear systems by FIR filters

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1989-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.

  3. Recursive search method for the image elements of functionally defined surfaces

    NASA Astrophysics Data System (ADS)

    Vyatkin, S. I.

    2017-05-01

    This paper touches upon the synthesis of high-quality images in real time and the technique for specifying three-dimensional objects on the basis of perturbation functions. The recursive search method for the image elements of functionally defined objects with the use of graphics processing units is proposed. The advantages of such an approach over the frame-buffer visualization method are shown.

  4. N/om, Change, and Social Work: A Recursive Frame Analysis of the Transformative Rituals of the Ju/'hoan Bushmen

    ERIC Educational Resources Information Center

    Keeney, Hillary; Keeney, Bradford

    2013-01-01

    The Ju/'hoan Bushman origin myth is depicted as a contextual frame for their healing and transformative ways. Using Recursive Frame Analysis, these performances are shown to be an enactment of the border crossing between First and Second Creation, that is, pre-linguistic and linguistic domains of experience. Here n/om, or the presumed creative…

  5. Aesthetic Responses to Exact Fractals Driven by Physical Complexity

    PubMed Central

    Bies, Alexander J.; Blanc-Goldhammer, Daryn R.; Boydston, Cooper R.; Taylor, Richard P.; Sereno, Margaret E.

    2016-01-01

    Fractals are physically complex due to their repetition of patterns at multiple size scales. Whereas the statistical characteristics of the patterns repeat for fractals found in natural objects, computers can generate patterns that repeat exactly. Are these exact fractals processed differently, visually and aesthetically, than their statistical counterparts? We investigated the human aesthetic response to the complexity of exact fractals by manipulating fractal dimensionality, symmetry, recursion, and the number of segments in the generator. Across two studies, a variety of fractal patterns were visually presented to human participants to determine the typical response to exact fractals. In the first study, we found that preference ratings for exact midpoint displacement fractals can be described by a linear trend with preference increasing as fractal dimension increases. For the majority of individuals, preference increased with dimension. We replicated these results for other exact fractal patterns in a second study. In the second study, we also tested the effects of symmetry and recursion by presenting asymmetric dragon fractals, symmetric dragon fractals, and Sierpinski carpets and Koch snowflakes, which have radial and mirror symmetry. We found a strong interaction among recursion, symmetry and fractal dimension. Specifically, at low levels of recursion, the presence of symmetry was enough to drive high preference ratings for patterns with moderate to high levels of fractal dimension. Most individuals required a much higher level of recursion to recover this level of preference in a pattern that lacked mirror or radial symmetry, while others were less discriminating. This suggests that exact fractals are processed differently than their statistical counterparts. We propose a set of four factors that influence complexity and preference judgments in fractals that may extend to other patterns: fractal dimension, recursion, symmetry and the number of segments in a pattern. Conceptualizations such as Berlyne’s and Redies’ theories of aesthetics also provide a suitable framework for interpretation of our data with respect to the individual differences that we detect. Future studies that incorporate physiological methods to measure the human aesthetic response to exact fractal patterns would further elucidate our responses to such timeless patterns. PMID:27242475

  6. Rationale for two phase polymer system microgravity separation experiments

    NASA Technical Reports Server (NTRS)

    Brooks, D. E.; Bamberger, S. B.; Harris, J. M.; Vanalstine, J.

    1984-01-01

    The two-phase systems that result when aqueous solutions of dextran and poly(ethylene glycol) are mixed at concentrations above a few percent are discussed. They provide useful media for the partition and isolation of macromolecules and cell subpopulations. By manipulating their composition, separations based on a variety of molecular and surface properties are achieved, including membrane hydrophobic properties, cell surface charge, and membrane antigenicity. Work on the mechanism of cell partition shows there is a randomizing, nonthermal energy present which reduces separation resolution. This stochastic energy is probably associated with hydrodynamic interactions present during separation. Because such factors should be markedly reduced in microgravity, a series of shuttle experiments to indicate approaches to increasing the resolution of the procedure are planned.

  7. Mercury Stable Isotopes Discriminate Different Populations of European Seabass and Trace Potential Hg Sources around Europe.

    PubMed

    Cransveld, Alice; Amouroux, David; Tessier, Emmanuel; Koutrakis, Emmanuil; Ozturk, Ayaka A; Bettoso, Nicola; Mieiro, Cláudia L; Bérail, Sylvain; Barre, Julien P G; Sturaro, Nicolas; Schnitzler, Joseph; Das, Krishna

    2017-11-07

    Our study reports the first data on mercury (Hg) isotope composition in marine European fish, for seven distinct populations of the European seabass, Dicentrarchus labrax. The use of δ202Hg and Δ199Hg values in SIBER enabled us to estimate Hg isotopic niches, successfully discriminating several populations. Recursive-partitioning analyses demonstrated the relevance of Hg stable isotopes as discriminating tools. Hg isotopic values also provided insight into Hg contamination sources for biota in coastal environments. The overall narrow range of δ202Hg around Europe was suggested to be related to global atmospheric contamination, while δ202Hg at some sites was linked either to background contamination or to local contamination sources. Δ199Hg was related to the Hg levels of fish, but we also suggest a relation with ecological conditions. Throughout this study, results from the Black Sea population stood out, displaying Hg cycling similar to that of freshwater lakes. Our findings bring out the possibility of using Hg isotopes to discriminate distinct populations, to explore the Hg cycle on a large scale (Europe), and to distinguish sites contaminated by global versus local Hg sources. The interest of using Hg stable isotopes to investigate the whole European Hg cycle is clearly highlighted.

  8. Efficient block processing of long duration biotelemetric brain data for health care monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soumya, I.; Zia Ur Rahman, M., E-mail: mdzr-5@ieee.org; Rama Koti Reddy, D. V.

    In a real-time clinical environment, the brain signals which doctors need to analyze are usually very long. Such a scenario can be made simpler by partitioning the input signal into several blocks and applying signal conditioning. This paper presents various block-based adaptive filter structures for obtaining high-resolution electroencephalogram (EEG) signals, which estimate the deterministic components of the EEG signal by removing noise. To process these long duration signals, we propose the Time domain Block Least Mean Square (TDBLMS) algorithm for brain signal enhancement. In order to improve filtering capability, we introduce normalization in the weight update recursion of TDBLMS, which results in the TD-B-normalized-least mean square (LMS) algorithm. To increase accuracy and resolution in the proposed noise cancelers, we implement the time domain cancelers in the frequency domain, which results in the frequency domain TDBLMS and FD-B-normalized-LMS algorithms. Finally, we have applied these algorithms to real EEG signals obtained from humans using an Emotiv Epoc EEG recorder and compared their performance with the conventional LMS algorithm. The results show that the performance of the block-based algorithms is superior to their LMS counterparts in terms of signal to noise ratio, convergence rate, excess mean square error, misadjustment, and coherence.
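    A hedged sketch of the block-processing idea follows: a generic block LMS filter on synthetic data (not the paper's TDBLMS or frequency-domain implementations), in which the weight vector is updated once per block using the error accumulated over that block.

```python
# Generic block LMS sketch on synthetic data (not the paper's TDBLMS or
# frequency-domain implementations): weights are updated once per block using
# the error accumulated over that block.
import numpy as np

rng = np.random.default_rng(0)
n, block, taps, mu = 4096, 64, 8, 0.01
x = rng.normal(size=n)                        # reference noise input
h_true = rng.normal(size=taps)                # unknown interference path
clean = np.sin(2 * np.pi * 0.01 * np.arange(n))
d = clean + np.convolve(x, h_true)[:n]        # "EEG" = signal + filtered noise

w = np.zeros(taps)
out = np.zeros(n)
X = np.array([np.r_[np.zeros(taps - 1), x][i:i + taps][::-1]
              for i in range(n)])             # tap-delay-line rows
for start in range(0, n - block + 1, block):
    sl = slice(start, start + block)
    y_hat = X[sl] @ w                         # block of filter outputs
    e = d[sl] - y_hat                         # block of errors (cleaned signal)
    out[sl] = e
    w += mu * X[sl].T @ e / block             # one update per block

mse = np.mean((out[taps:] - clean[taps:]) ** 2)
print("residual MSE after adaptation:", round(float(mse), 4))
```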

  9. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
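    A hedged sketch of the chunk-wise elimination idea (illustrative, not the exact E-RFE criterion): train a linear SVM, compute an entropy of the normalized weight magnitudes, and drop a chunk of the lowest-weight features whose size depends on that entropy, repeating until the desired number of genes remains.

```python
# Hedged sketch of entropy-guided chunk elimination with a linear SVM
# (illustrative, not the exact E-RFE criterion of the paper).
import numpy as np
from sklearn.svm import SVC

def entropy_rfe(X, y, n_keep, C=1.0):
    active = np.arange(X.shape[1])
    while active.size > n_keep:
        clf = SVC(kernel="linear", C=C).fit(X[:, active], y)
        w = np.abs(clf.coef_).ravel()
        p = w / w.sum()                                  # weight distribution
        H = -np.sum(p * np.log(p + 1e-12)) / np.log(p.size)  # normalized entropy
        # Low entropy -> weights concentrated on few genes -> drop a big chunk.
        chunk = max(1, int((1.0 - H) * (active.size - n_keep)))
        order = np.argsort(w)                            # ascending importance
        active = np.delete(active, order[:chunk])
    return active

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                           # 60 samples, 200 "genes"
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=60) > 0).astype(int)
selected = entropy_rfe(X, y, n_keep=10)
print("selected genes:", np.sort(selected))
```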

  10. The Association Between Neighborhood Characteristics and Body Size and Physical Activity in the California Teachers Study Cohort

    PubMed Central

    Hurley, Susan; Goldberg, Debbie; Nelson, David O.; Reynolds, Peggy; Bernstein, Leslie; Horn-Ross, Pam L.; Gomez, Scarlett L.

    2012-01-01

    Objectives. We considered interactions between physical activity and body mass index (BMI) and neighborhood factors. Methods. We used recursive partitioning to identify predictors of low recreational physical activity (< 2.5 hours/week) and overweight and obesity (BMI ≥ 25.0 kg/m2) among 118 315 women in the California Teachers Study. Neighborhood characteristics were based on 2000 US Census data and Reference US business listings. Results. Low physical activity and being overweight or obese were associated with individual sociodemographic characteristics, including race/ethnicity and age. Among White women aged 36 to 75 years, living in neighborhoods with more household crowding was associated with a higher probability of low physical activity (54% vs 45% to 51%). In less crowded neighborhoods where more people worked outside the home, the existence of fewer neighborhood amenities was associated with a higher probability of low physical activity (51% vs 46%). Among non–African American middle-aged women, living in neighborhoods with a lower socioeconomic status was associated with a higher probability of being overweight or obese (46% to 59% vs 38% in high–socioeconomic status neighborhoods). Conclusions. Associations between physical activity, overweight and obesity, and the built environment varied by sociodemographic characteristics in this educated population. PMID:21852626

  11. [Comparison of Discriminant Analysis and Decision Trees for the Detection of Subclinical Keratoconus].

    PubMed

    Kleinhans, Sonja; Herrmann, Eva; Kohnen, Thomas; Bühren, Jens

    2017-08-15

    Background Iatrogenic keratectasia is one of the most dreaded complications of refractive surgery. In most cases, keratectasia develops after refractive surgery of eyes suffering from subclinical stages of keratoconus with few or no signs. Unfortunately, there has been no reliable procedure for the early detection of keratoconus. In this study, we used binary decision trees (recursive partitioning) to assess their suitability for discrimination between normal eyes and eyes with subclinical keratoconus. Patients and Methods The method of decision tree analysis was compared with discriminant analysis which has shown good results in previous studies. Input data were 32 eyes of 32 patients with newly diagnosed keratoconus in the contralateral eye and preoperative data of 10 eyes of 5 patients with keratectasia after laser in-situ keratomileusis (LASIK). The control group was made up of 245 normal eyes after LASIK and 12-month follow-up without any signs of iatrogenic keratectasia. Results Decision trees gave better accuracy and specificity than did discriminant analysis. The sensitivity of decision trees was lower than the sensitivity of discriminant analysis. Conclusion On the basis of the patient population of this study, decision trees did not prove to be superior to linear discriminant analysis for the detection of subclinical keratoconus. Georg Thieme Verlag KG Stuttgart · New York.

  12. Clinical outcome and molecular characterization of brain metastases from esophageal and gastric cancer: a systematic review.

    PubMed

    Ghidini, Michele; Petrelli, Fausto; Hahne, Jens Claus; De Giorgi, Annamaria; Toppo, Laura; Pizzo, Claudio; Ratti, Margherita; Barni, Sandro; Passalacqua, Rodolfo; Tomasello, Gianluca

    2017-04-01

    The aim of the study was to collect the available data on central nervous system (CNS) metastases from esophageal and gastric cancer. A PubMed, EMBASE, SCOPUS, Web of Science, LILACS, Ovid and Cochrane Library search was performed. Thirty-seven studies including 779 patients were considered. Among the data extracted, treatment of tumor and brain metastases (BMs), time to BMs development, number and subsite, extracerebral metastases rate, median overall survival (OS) and prognostic factors were included. For esophageal cancer, the median OS from diagnosis of BMs was 4.2 months. Prognostic factors for OS included: performance status, multimodal therapy, adjuvant chemotherapy, single BM, brain only disease and surgery. For gastric cancer, median OS was 2.4 months. Prognostic factors for OS included: recursive partitioning analysis class 2, stereotactic radiosurgery (SRT) and use of intrathecal therapy. HER2-positive gastric cancer was shown to be associated with a higher risk and shorter time to CNS relapse. Patients harboring BMs from gastric and esophageal tumors, except cases with single lesions that are treated aggressively, have a poor prognosis. SRT (plus or minus surgery and whole brain radiotherapy) seems to give better results in terms of longer OS after brain relapse.

  13. A clinical analysis of brain metastasis in gynecologic cancer: a retrospective multi-institute analysis.

    PubMed

    Kim, Young Zoon; Kwon, Jae Hyun; Lim, Soyi

    2015-01-01

    This study analyzes the clinical characteristics of the brain metastasis (BM) of gynecologic cancer based on the type of cancer. In addition, the study examines the factors influencing the survival. Total 61 BM patients of gynecologic cancer were analyzed retrospectively from January 2000 to December 2012 in terms of clinical and radiological characteristics by using medical and radiological records from three university hospitals. There were 19 (31.1%) uterine cancers, 32 (52.5%) ovarian cancers, and 10 (16.4%) cervical cancers. The mean interval to BM was 25.4 months (21.6 months in ovarian cancer, 27.8 months in uterine cancer, and 33.1 months in cervical cancer). The mean survival from BM was 16.7 months (14.1 months in ovarian cancer, 23.3 months in uterine cancer, and 8.8 months in cervical cancer). According to a multivariate analysis of factors influencing survival, type of primary cancer, Karnofsky performance score, status of primary cancer, recursive partitioning analysis class, and treatment modality, particularly combined therapies, were significantly related to the overall survival. These results suggest that, in addition to traditional prognostic factors in BM, multiple treatment methods such as neurosurgery and combined chemoradiotherapy may play an important role in prolonging the survival for BM patients of gynecologic cancer.

  14. Ultra-precise tracking control of piezoelectric actuators via a fuzzy hysteresis model.

    PubMed

    Li, Pengzhi; Yan, Feng; Ge, Chuan; Zhang, Mingchao

    2012-08-01

    In this paper, a novel Takagi-Sugeno (T-S) fuzzy system based model is proposed for hysteresis in piezoelectric actuators. The antecedent and consequent structures of the fuzzy hysteresis model (FHM) can be identified on-line through a uniform partition approach and a recursive least squares (RLS) algorithm, respectively. With respect to controller design, the inverse of the FHM is used to develop a feedforward controller to cancel out the hysteresis effect. A hybrid controller is then designed for high-performance tracking; it combines the feedforward controller with a proportional integral differential (PID) controller favourable for stabilization and disturbance compensation. To achieve nanometer-scale tracking precision, an enhanced adaptive hybrid controller is further developed. It uses real-time input and output data to update the FHM, thus changing the feedforward controller to suit the on-site hysteresis character of the piezoelectric actuator. Finally, for three cases of trajectory tracking (50 Hz sinusoidal, multiple-frequency sinusoidal, and 50 Hz triangular), experimental results demonstrate the efficiency of the proposed controllers. In particular, the maximum error of 50 Hz sinusoidal tracking is reduced to 5.8 nm, only 0.35% of the maximum desired displacement, which clearly shows the ultra-precise nanometer-scale tracking performance of the developed adaptive hybrid controller.

  15. Distribution-Preserving Stratified Sampling for Learning Problems.

    PubMed

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
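    A hedged sketch of the core idea (a simplified variant, not the paper's exact algorithm or its error bounds): recursively split the input space along the widest dimension at the median, then draw from each leaf a number of points proportional to its share of the data.

```python
# Simplified sketch of stratified sampling via recursive binary partition of
# the input space (illustrative; not the paper's exact algorithm or bounds).
import numpy as np

def recursive_partition(idx, X, min_leaf):
    """Recursively split indices along the widest dimension at the median."""
    if idx.size <= min_leaf:
        return [idx]
    spans = X[idx].max(axis=0) - X[idx].min(axis=0)
    dim = int(np.argmax(spans))                    # widest dimension
    cut = np.median(X[idx, dim])
    left, right = idx[X[idx, dim] <= cut], idx[X[idx, dim] > cut]
    if left.size == 0 or right.size == 0:          # degenerate split: stop
        return [idx]
    return recursive_partition(left, X, min_leaf) + \
           recursive_partition(right, X, min_leaf)

def stratified_sample(X, n_sample, min_leaf=50, rng=None):
    rng = rng or np.random.default_rng()
    leaves = recursive_partition(np.arange(len(X)), X, min_leaf)
    picks = []
    for leaf in leaves:                            # proportional allocation
        k = max(1, round(n_sample * leaf.size / len(X)))
        picks.append(rng.choice(leaf, size=min(k, leaf.size), replace=False))
    return np.concatenate(picks)

rng = np.random.default_rng(0)
X = np.r_[rng.normal(0, 1, (5000, 2)), rng.normal(4, 0.5, (1000, 2))]
sample = stratified_sample(X, n_sample=300, rng=rng)
print("sample size:", sample.size, " sample mean:", np.round(X[sample].mean(axis=0), 3))
```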

  16. Phase II Trial of Radiosurgery to Magnetic Resonance Spectroscopy-Defined High-Risk Tumor Volumes in Patients With Glioblastoma Multiforme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einstein, Douglas B., E-mail: douglas.einstein@khnetwork.org; Wessels, Barry; Bangert, Barbara

    2012-11-01

    Purpose: To determine the efficacy of a Gamma Knife stereotactic radiosurgery (SRS) boost to areas of high risk determined by magnetic resonance spectroscopy (MRS) functional imaging in addition to standard radiotherapy for patients with glioblastoma (GBM). Methods and Materials: Thirty-five patients in this prospective Phase II trial underwent surgical resection or biopsy for a GBM followed by SRS directed toward areas of MRS-determined high biological activity within 2 cm of the postoperative enhancing surgical bed. The MRS regions were determined by identifying those voxels within the postoperative T2 magnetic resonance imaging volume that contained an elevated choline/N-acetylaspartate ratio in excess of 2:1. These voxels were marked, digitally fused with the SRS planning magnetic resonance image, targeted with an 8-mm isocenter per voxel, and treated using Radiation Therapy Oncology Group SRS dose guidelines. All patients then received conformal radiotherapy to a total dose of 60 Gy in 2-Gy daily fractions. The primary endpoint was overall survival. Results: The median survival for the entire cohort was 15.8 months. With 75% of recursive partitioning analysis (RPA) Class 3 patients still alive 18 months after treatment, the median survival for RPA Class 3 has not yet been reached. The median survivals for RPA Class 4, 5, and 6 patients were 18.7, 12.5, and 3.9 months, respectively, compared with Radiation Therapy Oncology Group radiotherapy-alone historical control survivals of 11.1, 8.9, and 4.6 months. For the 16 of 35 patients who received concurrent temozolomide in addition to protocol radiotherapeutic treatment, the median survival was 20.8 months, compared with European Organization for Research and Treatment of Cancer historical controls of 14.6 months using radiotherapy and temozolomide. Grade 3/4 toxicities possibly attributable to treatment were 11%. Conclusions: This represents the first prospective trial using selective MRS-targeted functional SRS combined with radiotherapy for patients with GBM. This treatment is feasible, with acceptable toxicity and patient survivals higher than in historical controls. This study can form the basis for a multicenter, randomized trial.

  17. Survival Impact of Increasing Time to Treatment Initiation for Patients With Head and Neck Cancer in the United States.

    PubMed

    Murphy, Colin T; Galloway, Thomas J; Handorf, Elizabeth A; Egleston, Brian L; Wang, Lora S; Mehra, Ranee; Flieder, Douglas B; Ridge, John A

    2016-01-10

    To estimate the overall survival (OS) impact from increasing time to treatment initiation (TTI) for patients with head and neck squamous cell carcinoma (HNSCC). Using the National Cancer Data Base (NCDB), we examined patients who received curative therapy for the following sites: oral tongue, oropharynx, larynx, and hypopharynx. TTI was the number of days from diagnosis to initiation of curative treatment. The effect of TTI on OS was determined by using Cox regression models (MVA). Recursive partitioning analysis (RPA) identified TTI thresholds via conditional inference trees to estimate the greatest differences in OS on the basis of randomly selected training and validation sets, and repeated this 1,000 times to ensure robustness of TTI thresholds. A total of 51,655 patients were included. On MVA, TTI of 61 to 90 days versus less than 30 days (hazard ratio [HR], 1.13; 95% CI, 1.08 to 1.19) independently increased mortality risk. TTI of 67 days appeared as the optimal threshold on the training RPA, statistical significance was confirmed in the validation set (P < .001), and the 67-day TTI was the optimal threshold in 54% of repeated simulations. Overall, 96% of simulations validated two optimal TTI thresholds, with ranges of 46 to 52 days and 62 to 67 days. The median OS for TTI of 46 to 52 days or fewer versus 53 to 67 days versus greater than 67 days was 71.9 months (95% CI, 70.3 to 73.5 months) versus 61 months (95% CI, 57 to 66.1 months) versus 46.6 months (95% CI, 42.8 to 50.7 months), respectively (P < .001). In the most recent year with available data (2011), 25% of patients had TTI of greater than 46 days. TTI independently affects survival. One in four patients experienced treatment delay. TTI of greater than 46 to 52 days introduced an increased risk of death that was most consistently detrimental beyond 60 days. Prolonged TTI is currently affecting survival. © 2015 by American Society of Clinical Oncology.

  18. Survival Impact of Increasing Time to Treatment Initiation for Patients With Head and Neck Cancer in the United States

    PubMed Central

    Murphy, Colin T.; Handorf, Elizabeth A.; Egleston, Brian L.; Wang, Lora S.; Mehra, Ranee; Flieder, Douglas B.; Ridge, John A.

    2016-01-01

    Purpose To estimate the overall survival (OS) impact from increasing time to treatment initiation (TTI) for patients with head and neck squamous cell carcinoma (HNSCC). Methods Using the National Cancer Data Base (NCDB), we examined patients who received curative therapy for the following sites: oral tongue, oropharynx, larynx, and hypopharynx. TTI was the number of days from diagnosis to initiation of curative treatment. The effect of TTI on OS was determined by using Cox regression models (MVA). Recursive partitioning analysis (RPA) identified TTI thresholds via conditional inference trees to estimate the greatest differences in OS on the basis of randomly selected training and validation sets, and repeated this 1,000 times to ensure robustness of TTI thresholds. Results A total of 51,655 patients were included. On MVA, TTI of 61 to 90 days versus less than 30 days (hazard ratio [HR], 1.13; 95% CI, 1.08 to 1.19) independently increased mortality risk. TTI of 67 days appeared as the optimal threshold on the training RPA, statistical significance was confirmed in the validation set (P < .001), and the 67-day TTI was the optimal threshold in 54% of repeated simulations. Overall, 96% of simulations validated two optimal TTI thresholds, with ranges of 46 to 52 days and 62 to 67 days. The median OS for TTI of 46 to 52 days or fewer versus 53 to 67 days versus greater than 67 days was 71.9 months (95% CI, 70.3 to 73.5 months) versus 61 months (95% CI, 57 to 66.1 months) versus 46.6 months (95% CI, 42.8 to 50.7 months), respectively (P < .001). In the most recent year with available data (2011), 25% of patients had TTI of greater than 46 days. Conclusion TTI independently affects survival. One in four patients experienced treatment delay. TTI of greater than 46 to 52 days introduced an increased risk of death that was most consistently detrimental beyond 60 days. Prolonged TTI is currently affecting survival. PMID:26628469

  19. Phase 3 Trials of Stereotactic Radiosurgery With or Without Whole-Brain Radiation Therapy for 1 to 4 Brain Metastases: Individual Patient Data Meta-Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahgal, Arjun, E-mail: arjun.sahgal@sunnybrook.ca; Aoyama, Hidefumi; Kocher, Martin

    Purpose: To perform an individual patient data (IPD) meta-analysis of randomized controlled trials evaluating stereotactic radiosurgery (SRS) with or without whole-brain radiation therapy (WBRT) for patients presenting with 1 to 4 brain metastases. Method and Materials: Three trials were identified through a literature search, and IPD were obtained. Outcomes of interest were survival, local failure, and distant brain failure. The treatment effect was estimated after adjustments for age, recursive partitioning analysis (RPA) score, number of brain metastases, and treatment arm. Results: A total of 364 of the pooled 389 patients met eligibility criteria, of whom 51% were treated with SRS alone and 49% were treated with SRS plus WBRT. For survival, age was a significant effect modifier (P=.04) favoring SRS alone in patients ≤50 years of age, and no significant differences were observed in older patients. Hazard ratios (HRs) for patients 35, 40, 45, and 50 years of age were 0.46 (95% confidence interval [CI] = 0.24-0.90), 0.52 (95% CI = 0.29-0.92), 0.58 (95% CI = 0.35-0.95), and 0.64 (95% CI = 0.42-0.99), respectively. Patients with a single metastasis had significantly better survival than those who had 2 to 4 metastases. For distant brain failure, age was a significant effect modifier (P=.043), with similar rates in the 2 arms for patients ≤50 years of age; otherwise, the risk was reduced with WBRT for patients >50 years of age. Patients with a single metastasis also had a significantly lower risk of distant brain failure than patients who had 2 to 4 metastases. Local control significantly favored additional WBRT in all age groups. Conclusions: For patients ≤50 years of age, SRS alone favored survival; in addition, the initial omission of WBRT did not impact distant brain relapse rates. SRS alone may be the preferred treatment for this age group.

  20. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.

  1. Patellar segmentation from 3D magnetic resonance images using guided recursive ray-tracing for edge pattern detection

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.

    2016-03-01

    The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and inner search region. A section-based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23 mm) than current state-of-the-art methods, with an average Dice similarity coefficient of 96.0% (SD 1.3%) agreement between the auto-segmentation and ground truth surfaces.

  2. Expansion of all multitrace tree level EYM amplitudes

    NASA Astrophysics Data System (ADS)

    Du, Yi-Jian; Feng, Bo; Teng, Fei

    2017-12-01

    In this paper, we investigate the expansion of tree level multitrace Einstein-Yang-Mills (EYM) amplitudes. First, we propose two types of recursive expansions of tree level EYM amplitudes with an arbitrary number of gluons, gravitons and traces in terms of amplitudes with fewer traces or/and gravitons. We then give supporting evidence, including proofs using the Cachazo-He-Yuan (CHY) formula and the Britto-Cachazo-Feng-Witten (BCFW) recursion relation. As a byproduct, two types of generalized BCJ relations for multitrace EYM are further proposed, which will be useful in the BCFW proof. After the recursive expansions are applied repeatedly, any multitrace EYM amplitude can be given in the Kleiss-Kuijf (KK) basis of tree level color ordered Yang-Mills (YM) amplitudes. Thus the Bern-Carrasco-Johansson (BCJ) numerators, as the expansion coefficients, for all multitrace EYM amplitudes are naturally constructed.

  3. Health monitoring system for transmission shafts based on adaptive parameter identification

    NASA Astrophysics Data System (ADS)

    Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.

    2018-05-01

    A health monitoring system for a transmission shaft is proposed. The solution is based on real-time identification of the physical characteristics of the transmission shaft, i.e. its stiffness and damping coefficients, by using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is addressed. The results of this work suggest that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.

  4. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  5. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
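    A hedged toy sketch of the fusion step described above, combining independent estimates by adding information matrices and information vectors; small random linear-Gaussian models stand in for the geomagnetic field coefficient estimates.

```python
# Toy sketch of combining independent estimates with an information filter
# (add information matrices and information vectors). Random linear-Gaussian
# "models" stand in for the geomagnetic field coefficient estimates.
import numpy as np

rng = np.random.default_rng(42)
x_true = np.array([2.0, -1.0, 0.5])              # "field" parameters

def single_model_estimate(n_obs, noise):
    """Least-squares estimate and its information quantities from one data span."""
    H = rng.normal(size=(n_obs, x_true.size))
    z = H @ x_true + noise * rng.normal(size=n_obs)
    info = H.T @ H / noise**2                     # information matrix
    info_vec = H.T @ z / noise**2                 # information vector
    return info, info_vec

# Five separate "field models", e.g. one per ~5-year data span.
infos, vecs = zip(*[single_model_estimate(40, 0.3) for _ in range(5)])

# Recursive combination: information quantities simply accumulate.
info_total = np.zeros((3, 3))
vec_total = np.zeros(3)
for I, v in zip(infos, vecs):
    info_total += I
    vec_total += v

x_combined = np.linalg.solve(info_total, vec_total)
print("combined estimate:", np.round(x_combined, 4))
print("individual estimate (model 1):", np.round(np.linalg.solve(infos[0], vecs[0]), 4))
```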

  6. Renormalization group estimates of transport coefficients in the advection of a passive scalar by incompressible turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Vahala, George

    1993-01-01

    The advection of a passive scalar by incompressible turbulence is considered using recursive renormalization group procedures in the differential subgrid shell-thickness limit. It is shown explicitly that the higher-order nonlinearities induced by the recursive renormalization group procedure preserve Galilean invariance. Differential equations, valid for the entire resolvable wave number k range, are determined for the eddy viscosity and eddy diffusivity coefficients, and it is shown that the higher-order nonlinearities do not contribute as k goes to 0 but play an essential role as k approaches k_c, the cutoff wave number separating the resolvable scales from the subgrid scales. The recursive renormalization transport coefficients and the associated eddy Prandtl number are in good agreement with the k-dependent transport coefficients derived from closure theories and experiments.

  7. A recursive linear predictive vocoder

    NASA Astrophysics Data System (ADS)

    Janssen, W. A.

    1983-12-01

    A non-real-time, 10-pole recursive autocorrelation linear predictive coding (LPC) vocoder was created for studying the effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between filter-coefficient updates is allowed to vary from 0.125 msec to 20 msec. The best quality was found using 0.125 msec between updates; the greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch-period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided, together with plots of speech into and out of the vocoder and three-dimensional formant-versus-time plots. The effects of noise on pitch detection and formants are shown: noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
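
    The record names a 10-pole autocorrelation LPC analyzer. A standard way to realize that step is the autocorrelation method solved with the Levinson-Durbin recursion, sketched below on a synthetic frame; this is generic textbook code, not the vocoder's implementation, and the frame parameters are made up.

```python
# Autocorrelation-method LPC with the Levinson-Durbin recursion (order 10)
import numpy as np

def autocorr(x, max_lag):
    """Autocorrelation values r[0..max_lag] of a windowed frame."""
    return np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(max_lag + 1)])

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations; returns coefficients [1, a1..a_order] and error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                     # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                 # residual prediction error
    return a, err

# Synthetic 30 ms frame at 8 kHz with two formant-like tones
rng = np.random.default_rng(2)
fs, n = 8000, np.arange(240)
frame = np.sin(2 * np.pi * 700 * n / fs) + 0.5 * np.sin(2 * np.pi * 1200 * n / fs)
frame = frame * np.hamming(n.size) + 0.01 * rng.normal(size=n.size)

a, err = levinson_durbin(autocorr(frame, 10), 10)
print("LPC coefficients a1..a10:", np.round(a[1:], 3))
```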

  8. Dietary fat and not calcium supplementation or dairy product consumption is associated with changes in anthropometrics during a randomized, placebo-controlled energy-restriction trial

    USDA-ARS's Scientific Manuscript database

    Insufficient calcium intake has been proposed to cause unbalanced energy partitioning leading to obesity. However, weight loss interventions including dietary calcium or dairy product consumption have not reported changes in lipid metabolism measured by the plasma lipidome. Methods. The objective ...

  9. Pareto 80/20 Law: Derivation via Random Partitioning

    ERIC Educational Resources Information Center

    Lipovetsky, Stan

    2009-01-01

    The Pareto 80/20 Rule, also known as the Pareto principle or law, states that a small number of causes (20%) is responsible for a large percentage (80%) of the effect. Although widely recognized as a heuristic rule, this proportion has not previously been given a theoretical basis. The article considers the derivation of this 80/20 rule and some other standard…
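
    The article's own random-partitioning derivation is not reproduced here; as a loosely related numerical check, the snippet below only verifies the textbook fact that a classical Pareto distribution with shape alpha = log 5 / log 4 ≈ 1.16 concentrates roughly 80% of the total in the top 20% of draws.

```python
# Monte Carlo check of the 80/20 split for a Pareto distribution with alpha = log4(5)
import numpy as np

rng = np.random.default_rng(3)
alpha = np.log(5) / np.log(4)                   # ~1.16, the shape giving the 80/20 split
x = rng.pareto(alpha, size=1_000_000) + 1.0     # classical Pareto with minimum 1

x_sorted = np.sort(x)[::-1]                     # largest "causes" first
top20_share = x_sorted[: x.size // 5].sum() / x.sum()
print("share of total held by top 20%:", top20_share)   # ~0.8 (noisy; heavy tail)
```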

  10. Roll Angle Estimation Using Thermopiles for a Flight Controlled Mortar

    DTIC Science & Technology

    2012-06-01

    Using Xilinx's System Generator, the entire design was implemented at a relatively high level within MATLAB's Simulink; this allowed VHDL code to … The roll angle is accurately estimated by processing the thermopile data with a Recursive Least Squares (RLS) filter implemented on a field programmable gate array (FPGA). These results demonstrate the …
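
    The report's measurement model is not given in this snippet, so the following sketch assumes a generic one: the thermopile output of the spinning body is treated as a sinusoid at a known spin rate, an RLS filter tracks its in-phase and quadrature components, and the roll phase follows from their ratio. All names and constants are illustrative.

```python
# RLS tracking of v(t) = a*cos(w t) + b*sin(w t); roll phase = atan2(-b, a)
import numpy as np

def rls_step(theta, P, phi, y, lam=0.995):
    """One recursive-least-squares update for the two sinusoid components."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam
    return theta, P

rng = np.random.default_rng(4)
w = 2 * np.pi * 10.0                              # assumed spin rate: 10 Hz
t = np.arange(0.0, 1.0, 1e-3)
phi0_true = 0.7                                   # "true" initial roll phase [rad]
v = 2.0 * np.cos(w * t + phi0_true) + 0.05 * rng.normal(size=t.size)

theta, P = np.zeros(2), np.eye(2) * 1e3
for tk, vk in zip(t, v):
    theta, P = rls_step(theta, P, np.array([np.cos(w * tk), np.sin(w * tk)]), vk)
print("estimated initial roll phase [rad]:", np.arctan2(-theta[1], theta[0]))  # ~0.7
```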

  11. Closed-form recursive formula for an optimal tracker with terminal constraints

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Turner, J. D.; Chun, H. M.

    1986-01-01

    Feedback control laws are derived for a class of optimal finite time tracking problems with terminal constraints. Analytical solutions are obtained for the feedback gain and the closed-loop response trajectory. Such formulations are expressed in recursive forms so that a real-time computer implementation becomes feasible. An example involving the feedback slewing of a flexible spacecraft is given to illustrate the validity and usefulness of the formulations.
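
    The paper's closed-form solution also enforces terminal constraints, which the toy below omits; it only shows the standard backward (Riccati-type) recursion for a finite-time LQ tracker, to illustrate why recursive gain formulas lend themselves to real-time implementation. The double-integrator plant, weights and reference are invented for the example.

```python
# Backward recursion for finite-horizon LQ tracking: feedback gains K_k and feedforward terms
import numpy as np

def lq_tracking_gains(A, B, Q, R, Qf, r_traj):
    """Backward sweep with S_N = Qf, v_N = Qf r_N; returns gains K_k and feedforward terms."""
    N = len(r_traj) - 1
    S, v = Qf.copy(), Qf @ r_traj[N]
    K, kff = [None] * N, [None] * N
    for k in range(N - 1, -1, -1):
        M = np.linalg.inv(R + B.T @ S @ B)
        K[k] = M @ B.T @ S @ A                 # feedback gain
        kff[k] = M @ B.T @ v                   # tracking feedforward term
        S = Q + A.T @ S @ (A - B @ K[k])       # Riccati-type update
        v = (A - B @ K[k]).T @ v + Q @ r_traj[k]
    return K, kff

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])          # double integrator
B = np.array([[0.5 * dt**2], [dt]])
Q, R, Qf = np.diag([10.0, 0.1]), np.array([[0.01]]), np.diag([100.0, 1.0])
N = 50
r_traj = [np.array([0.2 * k * dt, 0.2]) for k in range(N + 1)]   # ramp in position

K, kff = lq_tracking_gains(A, B, Q, R, Qf, r_traj)
x = np.zeros(2)
for k in range(N):
    u = -K[k] @ x + kff[k]                     # u_k = -K_k x_k + feedforward
    x = A @ x + B @ u
print("final state vs. reference:", x, r_traj[-1])
```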

  12. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region-growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region-growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein, including the special code required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and multiple-processor computer systems are described. Results with Landsat TM data are included, comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
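
    The toy below is not RHSEG code; it only illustrates the best-merge (HSWO-style) region-growing core on a tiny single-band image, without the constrained spectral clustering or the recursive divide-and-conquer stage described above.

```python
# Iteratively merge the most spectrally similar 4-adjacent regions (best-merge region growing)
import numpy as np

def adjacent_pairs(labels):
    """All unordered pairs of distinct labels touching along rows or columns."""
    pairs = set()
    for grid in (labels, labels.T):
        for r in range(grid.shape[0] - 1):
            for c in range(grid.shape[1]):
                a, b = int(grid[r, c]), int(grid[r + 1, c])
                if a != b:
                    pairs.add((min(a, b), max(a, b)))
    return pairs

def best_merge_segmentation(img, n_regions):
    """Start from one region per pixel; merge the closest adjacent pair until n_regions remain."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)
    means = {int(l): float(v) for l, v in zip(labels.flat, img.flat)}
    sizes = {int(l): 1 for l in labels.flat}
    while len(means) > n_regions:
        d, a, b = min((abs(means[a] - means[b]), a, b) for a, b in adjacent_pairs(labels))
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / (sizes[a] + sizes[b])
        sizes[a] += sizes.pop(b)
        del means[b]
        labels[labels == b] = a               # relabel the merged region
    return labels

img = np.array([[1.0, 1.0, 5.0, 5.0],
                [1.0, 2.0, 5.0, 6.0],
                [8.0, 8.0, 5.0, 5.0]])
print(best_merge_segmentation(img, 3))        # three regions: low, mid and high values
```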

  13. Toward using games to teach fundamental computer science concepts

    NASA Astrophysics Data System (ADS)

    Edgington, Jeffrey Michael

    Video and computer games have become an important area of study in the field of education. Games have been designed to teach mathematics, physics, history, and geography, to raise social awareness, and to train soldiers in the military. Recent work has created computer games for teaching computer programming and understanding basic algorithms. We present an investigation in which computer games are used to teach two fundamental computer science concepts: boolean expressions and recursion. The games are intended to teach the concepts, not how to implement them in a programming language. For this investigation, two computer games were created: one designed to teach basic boolean expressions and operators, and the other to teach fundamental concepts of recursion. We describe the design and implementation of both games and evaluate their effectiveness using before-and-after surveys designed to ascertain basic understanding, attitudes, and beliefs regarding the concepts. The boolean game was evaluated with local high school students and with students in a college-level introductory computer science course; the recursion game was evaluated with students in a college-level introductory computer science course. We present the analysis of the collected survey data for both games. This analysis shows a significant positive change in student attitudes towards recursion and modest gains in student learning outcomes for both topics.

  14. Recursive linearization of multibody dynamics equations of motion

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Chieh; Yae, K. Harold

    1989-01-01

    The equations of motion of a multibody system are nonlinear in nature and thus pose a difficult problem for linear control design. One approach is to form a first-order approximation through numerical perturbations at a given configuration and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; they are then transformed into a relative-coordinate representation, which is more efficient computationally. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It is also more accurate, because analytical perturbation circumvents numerical differentiation and the associated numerical operations that can accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison with a numerical perturbation method, on a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
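
    As a much smaller-scale illustration of the analytic-versus-numeric point (not the paper's recursive multibody formulation), the sketch below linearizes a single damped pendulum both analytically and by central-difference perturbation; the two Jacobians agree to roundoff level, and the analytic route avoids choosing a perturbation size at all.

```python
# Analytical vs. numerically perturbed linearization of x_dot = f(x, u) for a damped pendulum
import numpy as np

g, L, c = 9.81, 1.0, 0.1

def f(x, u):
    """x = [theta, theta_dot]; torque input u (unit inertia)."""
    th, thd = x
    return np.array([thd, -(g / L) * np.sin(th) - c * thd + u])

def analytic_jacobians(x, u):
    th, _ = x
    A = np.array([[0.0, 1.0], [-(g / L) * np.cos(th), -c]])
    B = np.array([[0.0], [1.0]])
    return A, B

def numeric_jacobians(x, u, eps=1e-6):
    A = np.column_stack([(f(x + eps * e, u) - f(x - eps * e, u)) / (2 * eps) for e in np.eye(2)])
    B = ((f(x, u + eps) - f(x, u - eps)) / (2 * eps)).reshape(-1, 1)
    return A, B

x0, u0 = np.array([0.5, 0.0]), 0.0
Aa, Ba = analytic_jacobians(x0, u0)
An, Bn = numeric_jacobians(x0, u0)
print(np.max(np.abs(Aa - An)), np.max(np.abs(Ba - Bn)))   # both differences ~1e-9 or smaller
```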

  15. Algebraic function operator expectation value based quantum eigenstate determination: A case of twisted or bent Hamiltonian, or, a spatially univariate quantum system on a curved space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baykara, N. A.

    Recent studies on quantum evolutionary problems in Demiralp's group have arrived at a stage where the construction of an expectation-value formula for a given algebraic function operator depending only on the position operator becomes possible. It has also been shown that this formula turns into an algebraic recursion amongst a finite number of consecutive elements in a set of expectation values of an appropriately chosen basis set over the natural-number powers of the position operator, as long as the function under consideration and the system Hamiltonian are both autonomous. This recursion corresponds to a denumerably infinite number of algebraic equations whose solutions may or may not be obtainable analytically. The idea is not completely original: there are many recursive relations amongst the expectation values of the natural-number powers of the position operator. However, those recursions are not always efficient for obtaining the system energy values and, especially, the eigenstate wavefunctions. The present approach is an improved and generalized form of those expansions. We focus on this issue for a specific system where the Hamiltonian is defined on the coordinate of a curved space instead of the Cartesian one.

  16. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and adjoint gradient approaches. This leads to new recursive-least-squares algorithms for training the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and adjoint gradient approaches are discussed. The computational load of the algorithms is evaluated for multichannel nonlinear active control systems. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  17. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix, as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential, from a small-field inflation region towards a viable late-time de Sitter (dS) minimum, maps onto Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle-point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. Applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields participating during inflation, from the non-observation of negative spatial curvature.
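
    The paper's computation via Dyson Brownian motion and a saddle-point evaluation is not reproduced here; as a crude related illustration, the snippet below estimates by Monte Carlo how rarely a GOE-distributed Hessian is positive definite (i.e., the critical point is a minimum) as the number of fields grows.

```python
# Monte Carlo estimate of P(GOE Hessian is positive definite) for small N
import numpy as np

def prob_positive_definite(N, trials=20000, seed=5):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.normal(size=(N, N))
        H = (A + A.T) / np.sqrt(2 * N)          # GOE-normalized symmetric matrix
        if np.linalg.eigvalsh(H)[0] > 0.0:      # smallest eigenvalue positive?
            hits += 1
    return hits / trials

for N in (2, 3, 4, 5):
    print(N, prob_positive_definite(N))         # drops rapidly (roughly like exp(-const * N^2))
```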

  18. 3.4.1: Numerical Methods for Propagating Uncertainty Across Scales and for Hybrid Stochastic-Deterministic Systems

    DTIC Science & Technology

    2017-10-04

    Fisher's equation, as well as a two-dimensional Allen-Cahn equation. We observe good performance of the method for nonlinear problems as well as … rates; discrepancies between the two methods are observed, hence revealing strong additional coupling between different fluctuating variables … of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive …
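
    The record is fragmentary, but it names Fisher's equation as a test problem. Purely as a side illustration (unrelated to the report's uncertainty-propagation method), here is a minimal explicit finite-difference solve of the 1-D Fisher equation u_t = D u_xx + r u (1 - u); all grid parameters are made up.

```python
# Explicit finite-difference integration of 1-D Fisher's equation with Dirichlet boundaries
import numpy as np

D, r = 1.0, 1.0
nx, L_dom = 201, 50.0
dx = L_dom / (nx - 1)
dt = 0.2 * dx**2 / D                     # well inside the explicit stability limit
x = np.linspace(0.0, L_dom, nx)
u = np.where(x < 5.0, 1.0, 0.0)          # step initial condition

for _ in range(int(10.0 / dt)):          # integrate to t = 10
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (D * lap + r * u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0               # pin the boundaries

print("front position (u = 0.5):", x[np.argmin(np.abs(u - 0.5))])  # front speed ~ 2*sqrt(D*r)
```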

  19. Structurally Dynamic Spin Market Networks

    NASA Astrophysics Data System (ADS)

    Horváth, Denis; Kuscsik, Zoltán

    An agent-based model of stock price dynamics on a directed, evolving complex network is proposed and studied by direct simulation. The stationary regime is maintained as a result of the balance between extremal dynamics, the adaptivity of strategic variables, and the reconnection rules. The inherent structure of each node agent's "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parameter combinations the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (a network hub, i.e. an agent whose strategies are frequently adopted by its neighbors) is carried out by a repeated random-walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. Additional emergent aspects of the stylized market statistics are Zipfian distributions of fitness.

  20. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is based mainly on the exponential Fourier densities, which have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization or approximation is necessary in the approach, it is ideal for systems involving high levels of randomness and/or low observability, and for systems in which accuracy is of overriding importance.
