Brown, Christopher A.; Brown, Kevin S.
2010-01-01
Correlated amino acid substitution algorithms attempt to discover groups of residues that co-fluctuate due to either structural or functional constraints. Although these algorithms could inform both ab initio protein folding calculations and evolutionary studies, their utility for these purposes has been hindered by a lack of confidence in their predictions due to hard-to-control sources of error. To complicate matters further, naive users are confronted with a multitude of methods to choose from, in addition to the mechanics of assembling and pruning a dataset. We first introduce a new pair scoring method, called ZNMI (Z-scored-product Normalized Mutual Information), which drastically improves the performance of mutual information for co-fluctuating residue prediction. Second, and more importantly, we recast the process of finding coevolving residues in proteins as a data-processing pipeline inspired by the medical imaging literature. We construct an ensemble of alignment partitions that can be used in a cross-validation scheme to assess the effects of choices made during the procedure on the resulting predictions. This pipeline sensitivity study gives a measure of reproducibility (how similar are the predictions given perturbations to the pipeline?) and accuracy (are residue pairs with large couplings on average close in tertiary structure?). We choose a handful of published methods, along with ZNMI, and compare their reproducibility and accuracy on three diverse protein families. We find that (i) while none of the algorithms tested appears to be both highly reproducible and accurate, ZNMI is by far one of the most accurate and (ii) while users should be wary of predictions drawn from a single alignment, considering an ensemble of sub-alignments can help to determine both highly accurate and reproducible couplings. 
Our cross-validation approach should be of interest both to developers and end users of algorithms that try to detect correlated amino acid substitutions. PMID:20531955
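The abstract defines ZNMI only at a high level; the sketch below shows the generic ingredients (normalized mutual information between alignment columns, then a product of per-column z-scores). The choice of joint-entropy normalization and the exact z-scoring scheme are assumptions here, not necessarily the paper's precise definition.

```python
import math
from collections import Counter
from itertools import combinations

def column_entropy(col):
    n = len(col)
    return -sum((c / n) * math.log2(c / n) for c in Counter(col).values())

def joint_entropy(a, b):
    n = len(a)
    return -sum((c / n) * math.log2(c / n) for c in Counter(zip(a, b)).values())

def nmi(a, b):
    """Mutual information normalized by joint entropy (one common choice)."""
    hj = joint_entropy(a, b)
    if hj == 0.0:
        return 0.0
    return (column_entropy(a) + column_entropy(b) - hj) / hj

def znmi_like(alignment):
    """Z-score each pair's NMI against the score distribution of its two
    columns and combine as a product -- a sketch, not the paper's exact ZNMI."""
    ncols = len(alignment[0])
    cols = [[seq[i] for seq in alignment] for i in range(ncols)]
    raw = {}
    for i, j in combinations(range(ncols), 2):
        raw[(i, j)] = raw[(j, i)] = nmi(cols[i], cols[j])
    scores = {}
    for i, j in combinations(range(ncols), 2):
        z = []
        for k in (i, j):
            vals = [raw[(k, m)] for m in range(ncols) if m != k]
            mu = sum(vals) / len(vals)
            sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
            z.append((raw[(i, j)] - mu) / (sd if sd > 0 else 1.0))
        scores[(i, j)] = z[0] * z[1]
    return scores
```

On a toy alignment with two perfectly covarying columns and one conserved column, the covarying pair scores highest while pairs involving the conserved column score near zero.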
Origins of coevolution between residues distant in protein 3D structures
Ovchinnikov, Sergey; Kamisetty, Hetunandan; Baker, David
2017-01-01
Residue pairs that directly coevolve in protein families are generally close in protein 3D structures. Here we study the exceptions to this general trend—directly coevolving residue pairs that are distant in protein structures—to determine the origins of evolutionary pressure on spatially distant residues and to understand the sources of error in contact-based structure prediction. Over a set of 4,000 protein families, we find that 25% of directly coevolving residue pairs are separated by more than 5 Å in protein structures and 3% by more than 15 Å. The majority (91%) of directly coevolving residue pairs in the 5–15 Å range are found to be in contact in at least one homologous structure—these exceptions arise from structural variation in the family in the region containing the residues. Thirty-five percent of the exceptions greater than 15 Å are at homo-oligomeric interfaces, 19% arise from family structural variation, and 27% are in repeat proteins likely reflecting alignment errors. Of the remaining long-range exceptions (<1% of the total number of coupled pairs), many can be attributed to close interactions in an oligomeric state. Overall, the results suggest that directly coevolving residue pairs not in repeat proteins are spatially proximal in at least one biologically relevant protein conformation within the family; we find little evidence for direct coupling between residues at spatially separated allosteric and functional sites or for increased direct coupling between residue pairs on putative allosteric pathways connecting them. PMID:28784799
New Measurement for Correlation of Co-evolution Relationship of Subsequences in Protein.
Gao, Hongyun; Yu, Xiaoqing; Dou, Yongchao; Wang, Jun
2015-12-01
Many computational tools have been developed to measure co-evolution between protein residues. Most of them focus only on co-evolution between pairwise residues in a protein sequence. However, more than two residues may participate in co-evolution, and some co-evolved residues cluster in several distinct regions of the primary structure. The co-evolution among adjacent residues, and the correlation between distinct regions, therefore offer insights into the function and evolution of the protein and its residues. A subsequence is used to represent the adjacent residues in one distinct region. In this paper, the co-evolution relationships within each subsequence are represented by a mutual information matrix (MIM), and Pearson's correlation coefficient (the R value) is used to measure the similarity of two MIMs. MSAs from the Catalytic Site Atlas (CSA) database are used for testing. The R value characterizes a specific class of residues: in contrast to individually co-evolved residue pairs, sets of adjacent residues without high pairwise MI values are found because their co-evolution pattern is similar to that of another set of adjacent residues. These subsequences retain some flexibility in side-chain composition, for example in the catalytic environment.
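The MIM-and-R-value comparison described above can be sketched as follows. The MI estimator, the flattening of the upper triangle, and the toy alignment are assumptions for illustration, not the paper's exact procedure:

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Plug-in MI estimate between two alignment columns."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pj = Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pj.items())

def mi_matrix(alignment, positions):
    """MIM for one subsequence: pairwise MI over its alignment columns."""
    cols = [[seq[p] for seq in alignment] for p in positions]
    k = len(cols)
    return [[mutual_information(cols[i], cols[j]) for j in range(k)]
            for i in range(k)]

def pearson_r(m1, m2):
    """R value between two equally sized MIMs (upper triangles flattened)."""
    x = [m1[i][j] for i in range(len(m1)) for j in range(i + 1, len(m1))]
    y = [m2[i][j] for i in range(len(m2)) for j in range(i + 1, len(m2))]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0
```

Two subsequences whose columns covary with the same pattern yield identical MIMs and hence R = 1, even if no individual pair has an exceptional MI value.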
Shen, Hongbo; Xu, Feng; Hu, Hairong; Wang, Feifei; Wu, Qi; Huang, Qiang; Wang, Honghai
2008-12-01
Indole-3-glycerol phosphate synthase (IGPS) is a representative of (β/α)8-barrel proteins, the most common enzyme fold in nature. To better understand how the constituent amino acids work together to define the structure and to facilitate the function, we investigated the evolutionary and dynamical coupling of IGPS residues by combining statistical coupling analysis (SCA) and molecular dynamics (MD) simulations. The coevolving residues identified by the SCA were found to form a network which encloses the active site completely. The MD simulations showed that these coevolving residues are involved in correlated and anti-correlated motions. The correlated residues are within van der Waals contact and appear to maintain the active-site architecture; the anti-correlated residues are mainly distributed on opposite sides of the catalytic cavity and coordinate the motions likely required for substrate entry and product release. Our findings might have broad implications for proteins with the highly conserved (β/α)8-barrel in assessing the roles of amino acids that are moderately conserved and not directly involved in the active site of the (β/α)8-barrel. The results of this study could also provide useful information for further exploring the specific residue motions for the catalysis and protein design based on the (β/α)8-barrel scaffold.
Wang, Jingwen; Zhao, Yuqi; Wang, Yanjie; Huang, Jingfei
2013-01-16
Coevolution between proteins is crucial for understanding protein-protein interaction. Simultaneous changes allow a protein complex to maintain its overall structural-functional integrity. In this study, we combined statistical coupling analysis (SCA) and molecular dynamics simulations on the CDK6-CDKN2A protein complex to evaluate coevolution between proteins. We reconstructed an inter-protein residue coevolution network, consisting of 37 residues and 37 interactions. It shows that most of the coevolved residue pairs are spatially proximal. When the mutations happened, the stable local structures were broken up and thus the protein interaction was decreased or inhibited, with a following increased risk of melanoma. The identification of inter-protein coevolved residues in the CDK6-CDKN2A complex can be helpful for designing protein engineering experiments. Copyright © 2012 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Zhan; Yan, Xuefeng
2018-04-01
Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
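PBMODE co-evolves parameter individuals for a multi-objective differential evolution, which is beyond a short sketch; the PBIL component on its own, in its classic binary form, looks roughly like this. The OneMax fitness and all parameter values are illustrative assumptions:

```python
import random

def pbil(fitness, n_bits, pop_size=20, lr=0.1, generations=100, seed=0):
    """Classic binary PBIL: sample a population from a probability vector,
    then shift the vector toward the best individual of each generation."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        leader = max(pop, key=fitness)
        f = fitness(leader)
        if f > best_fit:
            best, best_fit = leader, f
        # Statistical model update: move p toward the generation's leader.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, leader)]
    return best, best_fit

# OneMax: maximize the number of 1-bits in the string.
best, fit = pbil(sum, n_bits=16)
```

In PBMODE the sampled "individuals" encode DE control parameters rather than solution bits, and the model is built from the parameter individuals of the superior solutions.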
Coevolving memetic algorithms: a review and progress report.
Smith, Jim E
2007-02-01
Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.
Structural Determinants of Sleeping Beauty Transposase Activity
Abrusán, György; Yant, Stephen R; Szilágyi, András; Marsh, Joseph A; Mátés, Lajos; Izsvák, Zsuzsanna; Barabás, Orsolya; Ivics, Zoltán
2016-01-01
Transposases are important tools in genome engineering, and there is considerable interest in engineering more efficient ones. Here, we seek to understand the factors determining their activity using the Sleeping Beauty transposase. Recent work suggests that protein coevolutionary information can be used to classify groups of physically connected, coevolving residues into elements called “sectors”, which have proven useful for understanding the folding, allosteric interactions, and enzymatic activity of proteins. Using extensive mutagenesis data, protein modeling and analysis of folding energies, we show that (i) the Sleeping Beauty transposase contains two sectors, which span across conserved domains, and are enriched in DNA-binding residues, indicating that the DNA binding and endonuclease functions of the transposase coevolve; (ii) sector residues are highly sensitive to mutations, and most mutations of these residues strongly reduce transposition rate; (iii) mutations with a strong effect on free energy of folding in the DDE domain of the transposase significantly reduce transposition rate; (iv) mutations that influence DNA and protein-protein interactions generally reduce transposition rate, although most hyperactive mutants are also located on the protein surface, including residues with protein-protein interactions. This suggests that hyperactivity results from the modification of protein interactions, rather than the stabilization of protein fold. PMID:27401040
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also apply MS-CAP to train a feedforward neural network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
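The first (aging-particle) stage can be sketched as per-dimension perturbation with a decaying radius plus growing attraction to the best solution. The schedules, the greedy acceptance rule, and the sphere test function are assumptions; the real MS-CAP adds the DE-style multi-strategy second stage:

```python
import random

def cap_stage(f, x, best, radius, pull, rng):
    """One aging-particle move (sketch): perturb each dimension within a
    shrinking radius, then drift toward the current best solution."""
    cand = [xi + rng.uniform(-radius, radius) + pull * (bi - xi)
            for xi, bi in zip(x, best)]
    return cand if f(cand) < f(x) else x   # keep only improving moves

def minimize(f, dim, iters=300, seed=0):
    rng = random.Random(seed)
    particles = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(10)]
    best = min(particles, key=f)
    for t in range(iters):
        radius = 5.0 * (1 - t / iters)      # decaying perturbation radius
        pull = 0.5 * t / iters              # growing attraction to the best
        particles = [cap_stage(f, x, best, radius, pull, rng)
                     for x in particles]
        best = min(particles + [best], key=f)
    return best

sphere = lambda v: sum(x * x for x in v)    # toy black-box objective
sol = minimize(sphere, dim=3)
```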
An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms
NASA Astrophysics Data System (ADS)
Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.
2015-10-01
Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.
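The Sobol' variance decomposition used in the sensitivity study can be illustrated with a plain Monte Carlo pick-freeze estimator of first-order indices. The toy linear model and sample size are illustrative, and a production study would use a quasi-random Saltelli design rather than plain random sampling:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Monte Carlo pick-freeze estimate of first-order Sobol indices,
    S_i = Var(E[Y | X_i]) / Var(Y), for i.i.d. uniform inputs on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    S = []
    for i in range(dim):
        # B with column i "frozen" to A's column i: shares only X_i with A.
        fBAi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(ya * (ybai - yb)
                     for ya, ybai, yb in zip(fA, fBAi, fB)) / (n * var))
    return S

model = lambda x: x[0] + 2 * x[1]   # analytic indices: S1 = 0.2, S2 = 0.8
S = sobol_first_order(model, dim=2)
```

A parameter with a large first-order index (here X2) dominates the output variance, mirroring the paper's finding that the number of function evaluations dominates PICEA performance.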
Directed evolution to re-adapt a co-evolved network within an enzyme.
Strafford, John; Payongsri, Panwajee; Hibbert, Edward G; Morris, Phattaraporn; Batth, Sukhjeet S; Steadman, David; Smith, Mark E B; Ward, John M; Hailes, Helen C; Dalby, Paul A
2012-01-01
We have previously used targeted active-site saturation mutagenesis to identify a number of transketolase single mutants that improved activity towards either glycolaldehyde (GA) or the non-natural substrate propionaldehyde (PA). Here, all attempts to recombine the singles into double mutants led to unexpected losses of specific activity towards both substrates. A typical trade-off occurred between soluble expression levels and specific activity for all single mutants, but many double mutants decreased both properties more severely, suggesting a critical loss of protein stability or native folding. Statistical coupling analysis (SCA) of a large multiple sequence alignment revealed a network of nine co-evolved residues that affected all but one double mutant. Such networks maintain important functional properties such as activity, specificity, folding, stability, and solubility and may be rapidly disrupted by introducing one or more non-naturally occurring mutations. To identify variants of this network that would accept and improve upon our best D469 mutants for activity towards PA, we created a library of random single, double and triple mutants across seven of the co-evolved residues, combining our D469 variants with only naturally occurring mutations at the remaining sites. A triple mutant cluster at D469, E498 and R520 was found to behave synergistically for the specific activity towards PA. Protein expression was severely reduced by E498D and improved by R520Q, yet variants containing both mutations led to improved specific activity and enzyme expression, but with loss of solubility and the formation of inclusion bodies. D469S and R520Q combined synergistically to improve kcat 20-fold for PA, more than for any previous transketolase mutant. R520Q also doubled the specific activity of the previously identified D469T to create our most active transketolase mutant to date. 
Our results show that recombining active-site mutants obtained by saturation mutagenesis can rapidly destabilise critical networks of co-evolved residues, whereas beneficial single mutants can be retained and improved upon by randomly recombining them with natural variants at other positions in the network. Copyright © 2011 Elsevier B.V. All rights reserved.
Fast Algorithms for Mining Co-evolving Time Series
2011-09-01
…[Keogh et al., 2001, 2004] and (b) forecasting, such as the autoregressive integrated moving average (ARIMA) model and related methods [Box et al., 1994] … computing hardware? We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the sequences, and to do forecasting. For large-scale data, we propose algorithms for learning time series models, in particular Linear Dynamical Systems.
Modeling growth and dissemination of lymphoma in a co-evolving lymph node: a diffuse-domain approach
NASA Astrophysics Data System (ADS)
Chuang, Yao-Li; Cristini, Vittorio; Chen, Ying; Li, Xiangrong; Frieboes, Hermann; Lowengrub, John
2013-03-01
While partial differential equation models of tumor growth have successfully described various spatiotemporal phenomena observed for in-vitro tumor spheroid experiments, one challenge towards taking these models to further study in-vivo tumors is that instead of relatively static tissue culture with regular boundary conditions, in-vivo tumors are often confined in organ tissues that co-evolve with the tumor growth. Here we adopt a recently developed diffuse-domain method to account for the co-evolving domain boundaries, adapting our previous in-vitro tumor model for the development of lymphoma encapsulated in a lymph node, which may swell or shrink due to proliferation and dissemination of lymphoma cells and treatment by chemotherapy. We use the model to study the induced spatial heterogeneity, which may arise as an emerging phenomenon in experimental observations and model analysis. Spatial heterogeneity is believed to lead to tumor infiltration patterns and reduce the efficacy of chemotherapy, leaving residuals that cause cancer relapse after the treatment. Understanding the spatiotemporal evolution of in-vivo tumors can be an essential step towards more effective strategies of curing cancer. Supported by NIH-PSOC grant 1U54CA143907-01.
Cleveland, Sean B.; Davies, John; McClure, Marcella A.
2011-01-01
The goal of this bioinformatic study is to investigate sequence conservation in relation to the evolutionary function/structure of the nucleoprotein of the order Mononegavirales. In a combined analysis of 63 representative nucleoprotein (N) sequences from four viral families (Bornaviridae, Filoviridae, Rhabdoviridae, and Paramyxoviridae) we predict the regions of protein disorder, intra-residue contact and co-evolving residues. Correlations between the location and conservation of predicted regions illustrate a strong division between families while highlighting conservation within individual families. These results suggest that the conserved regions among the nucleoproteins, specifically within Rhabdoviridae and Paramyxoviridae, but also generally among all members of the order, reflect an evolutionary advantage in maintaining these sites for the viral nucleoprotein as part of the transcription/replication machinery. Results indicate conservation of disorder in the C-terminal region of the representative proteins that is important for interacting with the phosphoprotein and the large subunit polymerase during transcription and replication. Additionally, the C-terminal region of the protein preceding the disordered region is predicted to be important for interacting with the encapsidated genome. Portions of the N-terminus are responsible for N:N stability and interactions, identified by the presence or lack of co-evolving intra-protein contact predictions. The validation of these prediction results by current structural information illustrates the benefits of the Disorder, Intra-residue contact and Compensatory mutation Correlator (DisICC) pipeline as a method for quickly characterizing proteins and providing the most likely residues and regions necessary to target for disruption in viruses that have little structural information available. PMID:21559282
NASA Astrophysics Data System (ADS)
Champeimont, Raphaël; Laine, Elodie; Hu, Shuang-Wei; Penin, Francois; Carbone, Alessandra
2016-05-01
A novel computational approach of coevolution analysis allowed us to reconstruct the protein-protein interaction network of the Hepatitis C Virus (HCV) at the residue resolution. For the first time, coevolution analysis of an entire viral genome was realized, based on a limited set of protein sequences with high sequence identity within genotypes. The identified coevolving residues constitute highly relevant predictions of protein-protein interactions for further experimental identification of HCV protein complexes. The method can be used to analyse other viral genomes and to predict the associated protein interaction networks.
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
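For reference, the baseline such accelerated methods are compared against is the direct stochastic simulation algorithm (Gillespie's direct method); a minimal birth/death version, with illustrative per-capita rates, might look like this:

```python
import random

def direct_ssa(birth, death, n0, t_max, seed=0):
    """Direct Gillespie simulation of a linear birth/death process: sample
    an exponential waiting time from the total rate, then pick the event
    in proportion to its rate. Returns the (time, population) trajectory."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(0.0, n0)]
    while t < t_max and n > 0:
        rates = [birth * n, death * n]
        total = sum(rates)
        t += rng.expovariate(total)            # waiting time ~ Exp(total)
        if t >= t_max:
            break
        n += 1 if rng.random() < rates[0] / total else -1
        history.append((t, n))
    return history
```

Its cost grows with the total event rate, which is why large populations with rare mutations motivate the faster exact scheme described in the abstract.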
Oliveira, Alberto; Bleicher, Lucas; Schrago, Carlos G; Silva Junior, Floriano Paes
2018-05-01
Phospholipases A2 (PLA2s) comprise a superfamily of glycerophospholipid-hydrolyzing enzymes present in many organisms in nature, whose catalytic activity has been unveiled largely through the analysis of snake venoms. The latter are of pharmaceutical and biotechnological interest and can be divided into different functional sub-classes. Our goal was to identify important residues and their relation to the functional and class-specific characteristics of the PLA2 family, with special emphasis on snake venom PLA2s (svPLA2s). We identified such residues by conservation analysis and decomposition of residue coevolution networks (DRCN), annotated the results based on the available literature on PLA2s, structural analysis and molecular dynamics simulations, and related the results to the phylogenetic distribution of these proteins. A filtered alignment of PLA2s revealed 14 highly conserved positions and 3 sets of coevolved residues, which were annotated according to their structural or functional role. These residues are mostly involved in ligand binding and catalysis, calcium binding, the formation of disulfide bridges and a hydrophobic cluster close to the binding site. An independent validation of the inference of structure-function relationships from our coevolution analysis of the svPLA2 family was obtained by analyzing the pattern of selection acting on the Viperidae and Elapidae lineages. Additionally, a molecular dynamics simulation of the Lys49 PLA2 from Agkistrodon contortrix laticinctus was carried out to further investigate the correlation of the Lys49-Glu69 pair. Our results suggest this configuration can result in a novel conformation where the binding cavity collapses due to the approximation of two loops caused by a strong salt bridge between Glu69 and Arg34. Finally, phylogenetic analysis indicated a correlation between the presence of residues in the coevolved sets found in this analysis and clade localization. 
The results provide a guide to important positions in the PLA2 family and point to potential new objects of investigation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Detecting Coevolution in and among Protein Domains
Yeang, Chen-Hsiang; Haussler, David
2007-01-01
Correlated changes of nucleic or amino acids have provided strong information about the structures and interactions of molecules. Despite the rich literature in coevolutionary sequence analysis, previous methods often have to trade off between generality, simplicity, phylogenetic information, and specific knowledge about interactions. Furthermore, despite the evidence of coevolution in selected protein families, a comprehensive screening of coevolution among all protein domains is still lacking. We propose an augmented continuous-time Markov process model for sequence coevolution. The model can handle different types of interactions, incorporate phylogenetic information and sequence substitution, has only one extra free parameter, and requires no knowledge about interaction rules. We employ this model to large-scale screenings on the entire protein domain database (Pfam). Strikingly, with 0.1 trillion tests executed, the majority of the inferred coevolving protein domains are functionally related, and the coevolving amino acid residues are spatially coupled. Moreover, many of the coevolving positions are located at functionally important sites of proteins/protein complexes, such as the subunit linkers of superoxide dismutase, the tRNA binding sites of ribosomes, the DNA binding region of RNA polymerase, and the active and ligand binding sites of various enzymes. The results suggest sequence coevolution manifests structural and functional constraints of proteins. The intricate relations between sequence coevolution and various selective constraints are worth pursuing at a deeper level. PMID:17983264
Stern, L.A.; Brown, Gordon E.; Bird, D.K.; Jahns, R.H.; Foord, E.E.; Shigley, J.E.; Spaulding, L.B.
1986-01-01
Several layered pegmatite-aplite intrusives exposed at the Little Three mine, Ramona, display closely associated fine-grained to giant-textured mineral assemblages which are believed to have co-evolved from a hydrous aluminosilicate residual melt with an exsolved supercritical vapour phase. Calculations of phase relations between the major pegmatite-aplite mineral assemblages and supercritical aqueous fluid were made, assuming equilibrium and closed-system behaviour as a first-order model.-J.A.Z.
Research on wind field algorithm of wind lidar based on BP neural network and grey prediction
NASA Astrophysics Data System (ADS)
Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei
2018-01-01
This paper uses a BP neural network and a grey algorithm to forecast and study the lidar-measured wind field. To reduce the residual error of wind field prediction based on the grey algorithm, the minimum of the residual error function is sought: a BP neural network is trained on the residuals of the grey algorithm, the trained network model forecasts the residual sequence, and the predicted residual sequence is used to correct the forecast sequence of the grey algorithm. Test data show that the grey algorithm modified by the BP neural network can effectively reduce the residual error and improve prediction precision.
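The residual-correction scheme can be sketched with a standard GM(1,1) grey model; here the residual predictor is an arbitrary callable (a mean-residual rule stands in for the paper's BP neural network), and the geometric test series is illustrative:

```python
import math

def gm11(x0, horizon=1):
    """GM(1,1) grey model: fit x0(k) = -a*z1(k) + b on the accumulated
    series via least squares, then difference the fitted exponential back
    to the original scale, forecasting `horizon` extra points."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]                # accumulation
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]    # mean sequence
    y = x0[1:]
    m = len(z)
    szz, sz, sy = sum(zi * zi for zi in z), sum(z), sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det                          # development coef.
    b = (szz * sy - sz * szy) / det                         # grey input
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a
              for k in range(n + horizon)]
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1]
                          for k in range(1, n + horizon)]

def residual_corrected_forecast(x0, predict_residual):
    """One-step forecast: grey prediction plus a predicted residual.
    `predict_residual` is any callable on the in-sample residuals; the
    paper trains a BP neural network for this role."""
    fit = gm11(x0, horizon=1)
    residuals = [xi - fi for xi, fi in zip(x0, fit)]
    return fit[-1] + predict_residual(residuals)
```

GM(1,1) alone captures smooth exponential trends; the residual model is what absorbs the structure the grey model misses.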
Prathiviraj, R; Prisilla, A; Chellapandi, P
2016-06-01
Clostridium botulinum is an anaerobic pathogenic bacterium causing food-borne botulism in humans and animals by producing botulinum neurotoxins A-H and the C2 and C3 cytotoxins. Physiological group III strains (types C and D) of this bacterium are capable of producing C2 and C3 toxins in cattle and birds. Herein, we reveal the structure-function disparity of C3 toxins from two different C. botulinum phages, type C (CboC) and type D (CboD), in order to rationally design avirulent toxins. The structure-function discrepancy of both toxins was computationally evaluated from their homology models, based on the conservation of sequence-structure-function relationships under covariation and point mutations. Of 34 CboC mutants, 8 were avirulent, while 27 avirulent mutants resulted from the CboD mutants. No major changes were found in the tertiary structures of these toxins; however, some structural variation appeared in the coiled and loop regions. A correlated mutation at the first residue would disorder or reorganize the hydrogen-bonding pattern of the coevolved pairs, suggesting that residue couplings in the local structural environments were compensated by coevolved pairs so as to preserve a pseudocatalytic function in the avirulent mutants. Avirulent mutants of the C3 toxins showed a stable structure with a common blueprint of the folding process and attained a near-native backrub ensemble. We therefore conclude that the selection of site-directed mutagenesis sites is a very important criterion in designing avirulent toxins for the development of rational subunit vaccines for cattle and birds, although vaccine specificity may be determined by the C3 toxins of the phages harbored by C. botulinum.
da Fonseca, Néli José; Lima Afonso, Marcelo Querino; Pedersolli, Natan Gonçalves; de Oliveira, Lucas Carrijo; Andrade, Dhiego Souto; Bleicher, Lucas
2017-10-28
Flaviviruses are responsible for serious diseases such as dengue, yellow fever, and zika fever. Their genomes encode a polyprotein which, after cleavage, results in three structural and seven non-structural proteins. Homologous proteins can be studied by conservation and coevolution analysis as detected in multiple sequence alignments, usually reporting positions which are strictly necessary for the structure and/or function of all members in a protein family or which are involved in a specific sub-class feature requiring the coevolution of residue sets. This study provides a complete conservation and coevolution analysis on all flaviviruses non-structural proteins, with results mapped on all well-annotated available sequences. A literature review on the residues found in the analysis enabled us to compile available information on their roles and distribution among different flaviviruses. Also, we provide the mapping of conserved and coevolved residues for all sequences currently in SwissProt as a supplementary material, so that particularities in different viruses can be easily analyzed. Copyright © 2017 Elsevier Inc. All rights reserved.
Adaptive Evolution of Signaling Partners
Urano, Daisuke; Dong, Taoran; Bennetzen, Jeffrey L.; Jones, Alan M.
2015-01-01
Proteins that interact coevolve their structures. When mutation disrupts the interaction, compensation by the partner occurs to restore the interaction; otherwise, counterselection occurs. We show in this study how a destabilizing mutation in one protein is compensated by a stabilizing mutation in its protein partner, and we trace their coevolving path. The path in this case, and likely a general principle of coevolution, is that the compensatory change must tolerate both the original and derived structures with equivalence in function and activity. Evolution of the structure of signaling elements in a network is constrained by specific protein pair interactions, by requisite conformational changes, and by catalytic activity. Heterotrimeric G protein-coupled signaling is a paragon of this protein interaction/function complexity, and our deep understanding of this pathway in diverse organisms lends itself to evolutionary study. Regulators of G protein Signaling (RGS) proteins accelerate the intrinsic GTP hydrolysis rate of the Gα subunit of the heterotrimeric G protein complex. An important RGS-contact site is a hydroxyl-bearing residue on the switch I region of Gα subunits in animals and most plants, such as Arabidopsis. The exception is the grasses (e.g., rice, maize, sugarcane, millets); these plants have Gα subunits in which the critical hydroxyl-bearing threonine is replaced with a destabilizing asparagine shown to disrupt the interaction between the Arabidopsis RGS protein (AtRGS1) and the grass Gα subunit. With one known exception (Setaria italica), grasses do not encode RGS genes. One parsimonious deduction is that the RGS gene was lost in the ancestor of the grasses and then recently acquired horizontally in the lineage S. italica from a nongrass monocot. Like all investigated grasses, S. italica has a Gα subunit with the destabilizing asparagine residue in the protein interface but, unlike other known grass genomes, still encodes an expressed RGS gene, SiRGS1. 
SiRGS1 accelerates GTP hydrolysis at similar concentration of both Gα subunits containing either the stabilizing (AtGPA1) or destabilizing (RGA1) interface residue. SiRGS1 does not use the hydroxyl-bearing residue on Gα to promote GAP activity and has a larger Gα-interface pocket fitting to the destabilizing Gα. These findings indicate that SiRGS1 adapted to a deleterious mutation on Gα using existing polymorphism in the RGS protein population. PMID:25568345
Translation fidelity coevolves with longevity.
Ke, Zhonghe; Mallik, Pramit; Johnson, Adam B; Luna, Facundo; Nevo, Eviatar; Zhang, Zhengdong D; Gladyshev, Vadim N; Seluanov, Andrei; Gorbunova, Vera
2017-10-01
Whether errors in protein synthesis play a role in aging has been a subject of intense debate. It has been suggested that rare mistakes in protein synthesis in young organisms may result in errors in the protein synthesis machinery, eventually leading to an increasing cascade of errors as organisms age. Studies that followed generally failed to identify a dramatic increase in translation errors with aging. However, whether translation fidelity plays a role in aging remained an open question. To address this issue, we examined the relationship between translation fidelity and maximum lifespan across 17 rodent species with diverse lifespans. To measure translation fidelity, we utilized sensitive luciferase-based reporter constructs with mutations in an amino acid residue critical to luciferase activity, wherein misincorporation of amino acids at this mutated codon re-activated the luciferase. The frequency of amino acid misincorporation at the first and second codon positions showed strong negative correlation with maximum lifespan. This correlation remained significant after phylogenetic correction, indicating that translation fidelity coevolves with longevity. These results give new life to the role of protein synthesis errors in aging: Although the error rate may not significantly change with age, the basal rate of translation errors is important in defining lifespan across mammals. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William G.
2005-01-01
Recent work on the mathematical foundations of optimization has begun to uncover its rich structure. In particular, the "No Free Lunch" (NFL) theorems state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need to exploit problem-specific knowledge to achieve better-than-random performance. In this paper we present a general framework covering most search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and the evolution of multiple co-evolving players. As a particular instance of the latter, it covers "self-play" problems. In these problems the set of players works together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than others, averaged across all possible problems. We consider the implications of these results for biology, where there is no champion.
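The averaging claim at the heart of the NFL theorems can be checked by brute force on a tiny search space. The sketch below (an illustration, not from the paper) enumerates every function from a three-point domain to {0, 1} and shows that two fixed-order, non-repeating search algorithms achieve identical average performance:

```python
from itertools import product

def run(order, f, m):
    # best (max) value seen in the first m distinct evaluations
    return max(f[x] for x in order[:m])

domain = [0, 1, 2]
algo_a = [0, 1, 2]          # one fixed visiting order
algo_b = [2, 0, 1]          # a different fixed order
m = 2                       # evaluation budget

# enumerate every function f: domain -> {0, 1}
functions = [dict(zip(domain, vals))
             for vals in product([0, 1], repeat=len(domain))]

avg_a = sum(run(algo_a, f, m) for f in functions) / len(functions)
avg_b = sum(run(algo_b, f, m) for f in functions) / len(functions)
print(avg_a, avg_b)  # identical averages, as NFL predicts
```

Both averages come out equal (0.75 here): on a uniform ensemble of problems, neither visiting order has an edge.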
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as a species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach yields an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
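A minimal sketch of the basic Extremal Optimization loop for balanced graph bipartitioning may make the idea concrete. The simple worst-component selection rule below (rather than the authors' power-law rank selection), the swap move, and the test graph are illustrative assumptions:

```python
import random

def extremal_optimization(edges, n, steps=2000, seed=0):
    """Basic EO sketch for balanced bipartitioning: repeatedly find the node
    with the worst local fitness and swap it with a random node on the
    other side, remembering the best cut ever seen."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    side = {v: v % 2 for v in range(n)}          # balanced starting partition
    def cutsize():
        return sum(side[u] != side[v] for u, v in edges)
    def fitness(v):
        # fraction of v's edges kept inside its own partition
        return sum(side[v] == side[w] for w in adj[v]) / max(1, len(adj[v]))
    best, best_side = cutsize(), dict(side)
    for _ in range(steps):
        worst = min(range(n), key=fitness)       # the extremal component
        partner = rng.choice([v for v in range(n) if side[v] != side[worst]])
        side[worst], side[partner] = side[partner], side[worst]
        c = cutsize()
        if c < best:
            best, best_side = c, dict(side)
    return best, best_side

# two 4-cliques joined by one edge: the optimal balanced cut size is 1
K = [(a, b) for a in range(4) for b in range(a + 1, 4)]
edges = K + [(a + 4, b + 4) for a, b in K] + [(0, 4)]
best, part = extremal_optimization(edges, 8)
print(best)
```

Note that, unlike Simulated Annealing, nothing here accepts or rejects moves by a temperature schedule: the only drivers are the extremal update and the memory of the best solution.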
Pandini, Alessandro; Kleinjung, Jens; Rasool, Shafqat; Khan, Shahid
2015-01-01
Switching of bacterial flagellar rotation is caused by large domain movements of the FliG protein triggered by binding of the signal protein CheY to FliM. FliG and FliM form adjacent multi-subunit arrays within the basal body C-ring. The movements alter the interaction of the FliG C-terminal (FliGC) “torque” helix with the stator complexes. Atomic models based on the Salmonella enterica C-ring electron microscopy reconstruction have implications for switching, but lack consensus on the relative locations of the FliG armadillo (ARM) domains (amino-terminal (FliGN), middle (FliGM) and FliGC) as well as on changes during chemotaxis. The generality of the Salmonella model is challenged by the variation in motor morphology and response between species. We studied coevolved residue mutations to determine the unifying elements of switch architecture. Residue interactions, measured by their coevolution, were formalized as a network, guided by structural data. Our measurements reveal a common design with dedicated switch and motor modules. The FliM middle domain (FliMM) has extensive connectivity, most simply explained by conserved intra- and inter-subunit contacts. In contrast, FliG has a patchy, complex architecture. Conserved structural motifs form interacting nodes in the coevolution network that wire FliMM to the FliGC C-terminal, four-helix motor module (C3-6). FliG C3-6 coevolution is organized around the torque helix, differently from other ARM domains. The nodes form separated, surface-proximal patches that are targeted by deleterious mutations, as in other allosteric systems. The dominant node is formed by the EHPQ motif at the FliMM-FliGM contact interface and adjacent helix residues at a central location within FliGM. The node interacts with nodes in the N-terminal FliGC α-helix triad (ARM-C) and FliGN. ARM-C, separated from C3-6 by the MFVF motif, has poor intra-network connectivity, consistent with its variable orientation revealed by structural data. 
ARM-C could be the convertor element that provides mechanistic and species diversity. PMID:26561852
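Coevolution measurements of this kind are commonly based on mutual information between alignment columns, corrected for background covariation before thresholding into a network. A minimal sketch (illustrative, not the authors' pipeline; the APC-style correction and the 0.1 cutoff are assumptions) might look like:

```python
from collections import Counter
from itertools import combinations
from math import log

def column_mi(msa, i, j):
    """Mutual information (natural log) between alignment columns i and j."""
    n = len(msa)
    pi = Counter(s[i] for s in msa)
    pj = Counter(s[j] for s in msa)
    pij = Counter((s[i], s[j]) for s in msa)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def coevolution_network(msa, threshold=0.1):
    """Edges between column pairs whose background-corrected MI exceeds a cutoff."""
    L = len(msa[0])
    mi = {(i, j): column_mi(msa, i, j) for i, j in combinations(range(L), 2)}
    mean = {i: sum(v for k, v in mi.items() if i in k) / (L - 1) for i in range(L)}
    grand = sum(mi.values()) / len(mi)
    # average-product correction subtracts the expected background coupling
    apc = {k: v - mean[k[0]] * mean[k[1]] / grand for k, v in mi.items() if grand > 0}
    return [k for k, v in apc.items() if v > threshold]

# toy alignment: columns 0 and 2 covary perfectly, column 1 is fully conserved
msa = ["AKD", "AKD", "GKE", "GKE", "AKD", "GKE"]
print(coevolution_network(msa))  # → [(0, 2)]
```

The conserved column carries zero mutual information with every other column, so only the genuinely covarying pair survives as a network edge.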
Truth and probability in evolutionary games
NASA Astrophysics Data System (ADS)
Barrett, Jeffrey A.
2017-01-01
This paper concerns two composite Lewis-Skyrms signalling games. Each consists of a base game that evolves a language descriptive of nature and a metagame that coevolves a language descriptive of the base game and its evolving language. The first composite game shows how a pragmatic notion of truth might coevolve with a simple descriptive language. The second shows how a pragmatic notion of probability might similarly coevolve. Each of these pragmatic notions is characterised by the particular game and role that it comes to play in the game.
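The base Lewis-Skyrms signalling game is easy to simulate with urn-style (Roth-Erev) reinforcement learning, the dynamics commonly used in this literature. The sketch below is a toy base game only, not the paper's composite games; the payoff and reinforcement scheme are illustrative assumptions:

```python
import random

def lewis_signaling(n_states=2, rounds=5000, seed=1):
    """Roth-Erev (urn) reinforcement in a basic Lewis signalling game:
    successful coordination reinforces the sender's and receiver's urns."""
    rng = random.Random(seed)
    sender = [[1.0] * n_states for _ in range(n_states)]    # state -> signal urns
    receiver = [[1.0] * n_states for _ in range(n_states)]  # signal -> act urns
    successes = 0
    for _ in range(rounds):
        state = rng.randrange(n_states)
        signal = rng.choices(range(n_states), weights=sender[state])[0]
        act = rng.choices(range(n_states), weights=receiver[signal])[0]
        if act == state:              # coordination succeeds: reinforce both urns
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
            successes += 1
    return successes / rounds

rate = lewis_signaling()
print(rate)   # cumulative success rate; climbs toward 1 as a signalling system emerges
```

A metagame of the kind the paper studies would run a second such learner whose "states" are outcomes of this base game; the sketch shows only the base layer.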
HIV-1 protease-substrate coevolution in nelfinavir resistance.
Kolli, Madhavi; Ozen, Ayşegül; Kurt-Yilmaz, Nese; Schiffer, Celia A
2014-07-01
Resistance to various human immunodeficiency virus type 1 (HIV-1) protease inhibitors (PIs) challenges the effectiveness of therapies in treating HIV-1-infected individuals and AIDS patients. The virus accumulates mutations within the protease (PR) that render the PIs less potent. Occasionally, Gag sequences also coevolve with mutations at PR cleavage sites contributing to drug resistance. In this study, we investigated the structural basis of coevolution of the p1-p6 cleavage site with the nelfinavir (NFV) resistance D30N/N88D protease mutations by determining crystal structures of wild-type and NFV-resistant HIV-1 protease in complex with p1-p6 substrate peptide variants with L449F and/or S451N. Alterations of residue 30's interaction with the substrate are compensated by the coevolving L449F and S451N cleavage site mutations. This interdependency in the PR-p1-p6 interactions enhances intermolecular contacts and reinforces the overall fit of the substrate within the substrate envelope, likely enabling coevolution to sustain substrate recognition and cleavage in the presence of PR resistance mutations. Resistance to human immunodeficiency virus type 1 (HIV-1) protease inhibitors challenges the effectiveness of therapies in treating HIV-1-infected individuals and AIDS patients. Mutations in HIV-1 protease selected under the pressure of protease inhibitors render the inhibitors less potent. Occasionally, Gag sequences also mutate and coevolve with protease, contributing to maintenance of viral fitness and to drug resistance. In this study, we investigated the structural basis of coevolution at the Gag p1-p6 cleavage site with the nelfinavir (NFV) resistance D30N/N88D protease mutations. Our structural analysis reveals the interdependency of protease-substrate interactions and how coevolution may restore substrate recognition and cleavage in the presence of protease drug resistance mutations. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Accounting for epistatic interactions improves the functional analysis of protein structures.
Wilkins, Angela D; Venner, Eric; Marciano, David C; Erdin, Serkan; Atri, Benu; Lua, Rhonald C; Lichtarge, Olivier
2013-11-01
The constraints under which sequence, structure and function coevolve are not fully understood. Bringing this mutual relationship to light can reveal the molecular basis of binding, catalysis and allostery, thereby identifying function and rationally guiding protein redesign. Underlying these relationships are the epistatic interactions that occur when the consequences of a mutation to a protein are determined by the genetic background in which it occurs. Based on prior data, we hypothesize that epistatic forces operate most strongly between residues nearby in the structure, resulting in smooth evolutionary importance across the structure. We find that when residue scores of evolutionary importance are distributed smoothly between nearby residues, functional site prediction accuracy improves. Accordingly, we designed a novel measure of evolutionary importance that focuses on the interaction between pairs of structurally neighboring residues. This measure that we term pair-interaction Evolutionary Trace yields greater functional site overlap and better structure-based proteome-wide functional predictions. Our data show that the structural smoothness of evolutionary importance is a fundamental feature of the coevolution of sequence, structure and function. Mutations operate on individual residues, but selective pressure depends in part on the extent to which a mutation perturbs interactions with neighboring residues. In practice, this principle led us to redefine the importance of a residue in terms of the importance of its epistatic interactions with neighbors, yielding better annotation of functional residues, motivating experimental validation of a novel functional site in LexA and refining protein function prediction. lichtarge@bcm.edu. Supplementary data are available at Bioinformatics online.
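The smoothing idea, distributing evolutionary importance between structural neighbors, can be sketched as a simple blend over a residue contact graph. This illustrates the principle only, not the published pair-interaction Evolutionary Trace formula; the blending weight alpha and the toy scores are assumptions:

```python
def smooth_importance(scores, contacts, alpha=0.5):
    """Blend each residue's evolutionary-importance score with the mean
    score of its structural neighbors."""
    neighbors = {i: set() for i in scores}
    for i, j in contacts:
        neighbors[i].add(j); neighbors[j].add(i)
    smoothed = {}
    for i, s in scores.items():
        if neighbors[i]:
            nbr_mean = sum(scores[j] for j in neighbors[i]) / len(neighbors[i])
            smoothed[i] = (1 - alpha) * s + alpha * nbr_mean
        else:
            smoothed[i] = s   # isolated residues keep their own score
    return smoothed

# toy chain 0-1-2-3: an isolated high score at residue 0 is damped,
# while residue 2, flanked by important neighbors, is promoted
scores = {0: 1.0, 1: 0.2, 2: 0.4, 3: 0.9}
contacts = [(0, 1), (1, 2), (2, 3)]
print(smooth_importance(scores, contacts))
```

The effect mirrors the paper's observation: scores become spatially smooth, so residues embedded in important neighborhoods rank higher than isolated outliers.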
Accounting for epistatic interactions improves the functional analysis of protein structures
Wilkins, Angela D.; Venner, Eric; Marciano, David C.; Erdin, Serkan; Atri, Benu; Lua, Rhonald C.; Lichtarge, Olivier
2013-01-01
Motivation: The constraints under which sequence, structure and function coevolve are not fully understood. Bringing this mutual relationship to light can reveal the molecular basis of binding, catalysis and allostery, thereby identifying function and rationally guiding protein redesign. Underlying these relationships are the epistatic interactions that occur when the consequences of a mutation to a protein are determined by the genetic background in which it occurs. Based on prior data, we hypothesize that epistatic forces operate most strongly between residues nearby in the structure, resulting in smooth evolutionary importance across the structure. Methods and Results: We find that when residue scores of evolutionary importance are distributed smoothly between nearby residues, functional site prediction accuracy improves. Accordingly, we designed a novel measure of evolutionary importance that focuses on the interaction between pairs of structurally neighboring residues. This measure that we term pair-interaction Evolutionary Trace yields greater functional site overlap and better structure-based proteome-wide functional predictions. Conclusions: Our data show that the structural smoothness of evolutionary importance is a fundamental feature of the coevolution of sequence, structure and function. Mutations operate on individual residues, but selective pressure depends in part on the extent to which a mutation perturbs interactions with neighboring residues. In practice, this principle led us to redefine the importance of a residue in terms of the importance of its epistatic interactions with neighbors, yielding better annotation of functional residues, motivating experimental validation of a novel functional site in LexA and refining protein function prediction. Contact: lichtarge@bcm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24021383
Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas
2018-04-01
The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface for the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions with the mutual-information-based CMAT approach, to classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, and to annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of visualCMAT are organized for convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to the bioinformatic, statistical and structural analyses of the predicted co-evolution, or studied further online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5, and therefore neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server, which can construct large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, applied to select hotspots and compensatory mutations for rational design and directed evolution experiments aimed at producing novel enzymes with improved properties, and employed to study the mechanisms of selective ligand binding and allosteric communication between topologically independent sites in protein structures. 
The web-server is freely available at https://biokinet.belozersky.msu.ru/visualcmat and there are no login requirements.
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William G.
2005-01-01
Recent work on the foundations of optimization has begun to uncover its underlying rich structure. In particular, the "No Free Lunch" (NFL) theorems [WM97] state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need to exploit problem-specific knowledge to achieve better-than-random performance. In this paper we present a general framework covering most search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and the evolution of multiple co-evolving agents. As a particular instance of the latter, it covers "self-play" problems. In these problems the agents work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than others, averaged across all possible problems. However, in the typical coevolutionary scenarios encountered in biology, where there is no champion, NFL still holds.
Kawano, Yasuhiro; Neeley, Shane; Adachi, Kei; Nakai, Hiroyuki
2013-01-01
Overlapping open reading frames (ORFs) in viral genomes undergo co-evolution; however, how individual amino acids coded by overlapping ORFs are structurally, functionally, and co-evolutionarily constrained remains difficult to address by conventional homologous sequence alignment approaches. We report here a new experimental and computational evolution-based methodology to address this question and report its preliminary application to elucidating a mode of co-evolution of the frame-shifted overlapping ORFs in the adeno-associated virus (AAV) serotype 2 viral genome. These ORFs encode both capsid VP protein and non-structural assembly-activating protein (AAP). To show proof of principle of the new method, we focused on the evolutionarily conserved QVKEVTQ and KSKRSRR motifs, a pair of overlapping heptapeptides in VP and AAP, respectively. In the new method, we first identified a large number of capsid-forming VP3 mutants and functionally competent AAP mutants of these motifs from mutant libraries by experimental directed evolution under no co-evolutionary constraints. We used Illumina sequencing to obtain a large dataset and then statistically assessed the viability of VP and AAP heptapeptide mutants. The obtained heptapeptide information was then integrated into an evolutionary algorithm, with which VP and AAP were co-evolved from random or native nucleotide sequences in silico. As a result, we demonstrate that these two heptapeptide motifs could exhibit high degeneracy if coded by separate nucleotide sequences, and elucidate how overlap-evoked co-evolutionary constraints play a role in making the VP and AAP heptapeptide sequences into the present shape. 
Specifically, we demonstrate that two valine (V) residues and β-strand propensity in QVKEVTQ are structurally important, that the strongly basic and hydrophilic nature of KSKRSRR is functionally important, and that overlap-evoked co-evolution imposes strong constraints on the serine (S) residues in KSKRSRR, despite the high degeneracy of the motifs in the absence of co-evolutionary constraints.
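The core constraint, a single nucleotide change altering two proteins at once, is easy to see with a frame-shifted translation sketch. The short sequence below is a toy example, not the AAV VP/AAP sequence:

```python
# standard genetic code (NCBI translation table 1), compactly encoded
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODON = {a + b + c: AMINO[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}

def translate(seq, frame):
    """Translate seq in the given reading frame, dropping any trailing partial codon."""
    s = seq[frame:]
    return "".join(CODON[s[i:i + 3]] for i in range(0, len(s) - len(s) % 3, 3))

seq = "ATGAAACGCATTAGC"            # toy sequence with two overlapping reading frames
mut = seq[:4] + "G" + seq[5:]      # one point mutation at position 4
print(translate(seq, 0), translate(seq, 1))   # MKRIS *NAL
print(translate(mut, 0), translate(mut, 1))   # MRRIS *DAL
```

One base change rewrites a residue in both frames simultaneously, which is exactly why overlap-evoked constraints restrict which mutations either protein can tolerate.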
Minimized-Laplacian residual interpolation for color image demosaicking
NASA Astrophysics Data System (ADS)
Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi
2014-03-01
A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose minimized-Laplacian residual interpolation (MLRI) as an alternative to color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm outperforms the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.
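The benefit of interpolating residuals rather than raw samples can be seen in a one-dimensional numpy sketch. This illustrates the general residual-interpolation idea with plain linear interpolation, not the paper's Laplacian-energy-minimizing tentative estimate; the toy signal and the constant-offset guide channel are assumptions:

```python
import numpy as np

def residual_interpolate(observed_idx, observed_val, tentative, n):
    """1-D residual interpolation: interpolate the residuals (observed minus
    tentative estimate) instead of the raw samples, then add the estimate back."""
    residual = observed_val - tentative[observed_idx]
    interp_res = np.interp(np.arange(n), observed_idx, residual)
    return tentative + interp_res

# smooth "scene" sampled sparsely; a correlated guide channel gives the tentative estimate
n = 16
truth = np.sin(np.linspace(0, 3, n)) + 2.0
guide = truth + 0.3                       # correlated channel with a constant offset
idx = np.arange(0, n, 4)                  # sparse mosaic sample positions
direct = np.interp(np.arange(n), idx, truth[idx])
via_residual = residual_interpolate(idx, truth[idx], guide, n)
print(np.abs(direct - truth).max(), np.abs(via_residual - truth).max())
```

Because the residual against the correlated guide is nearly constant, it interpolates almost perfectly, while direct interpolation of the raw samples leaves a visible error; the same smoothness argument motivates minimizing the residual's Laplacian energy in 2-D.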
Rodriguez-Rivas, Juan; Marsili, Simone; Juan, David; Valencia, Alfonso
2016-01-01
Protein–protein interactions are fundamental for the proper functioning of the cell. As a result, protein interaction surfaces are subject to strong evolutionary constraints. Recent developments have shown that residue coevolution provides accurate predictions of heterodimeric protein interfaces from sequence information. So far these approaches have been limited to the analysis of families of prokaryotic complexes for which large multiple sequence alignments of homologous sequences can be compiled. We explore the hypothesis that coevolution points to structurally conserved contacts at protein–protein interfaces, which can be reliably projected to homologous complexes with distantly related sequences. We introduce a domain-centered protocol to study the interplay between residue coevolution and structural conservation of protein–protein interfaces. We show that sequence-based coevolutionary analysis systematically identifies residue contacts at prokaryotic interfaces that are structurally conserved at the interface of their eukaryotic counterparts. In turn, this allows the prediction of conserved contacts at eukaryotic protein–protein interfaces with high confidence using solely mutational patterns extracted from prokaryotic genomes. Even in the context of high divergence in sequence (the twilight zone), where standard homology modeling of protein complexes is unreliable, our approach provides sequence-based accurate information about specific details of protein interactions at the residue level. Selected examples of the application of prokaryotic coevolutionary analysis to the prediction of eukaryotic interfaces further illustrate the potential of this approach. PMID:27965389
Rodriguez-Rivas, Juan; Marsili, Simone; Juan, David; Valencia, Alfonso
2016-12-27
Protein-protein interactions are fundamental for the proper functioning of the cell. As a result, protein interaction surfaces are subject to strong evolutionary constraints. Recent developments have shown that residue coevolution provides accurate predictions of heterodimeric protein interfaces from sequence information. So far these approaches have been limited to the analysis of families of prokaryotic complexes for which large multiple sequence alignments of homologous sequences can be compiled. We explore the hypothesis that coevolution points to structurally conserved contacts at protein-protein interfaces, which can be reliably projected to homologous complexes with distantly related sequences. We introduce a domain-centered protocol to study the interplay between residue coevolution and structural conservation of protein-protein interfaces. We show that sequence-based coevolutionary analysis systematically identifies residue contacts at prokaryotic interfaces that are structurally conserved at the interface of their eukaryotic counterparts. In turn, this allows the prediction of conserved contacts at eukaryotic protein-protein interfaces with high confidence using solely mutational patterns extracted from prokaryotic genomes. Even in the context of high divergence in sequence (the twilight zone), where standard homology modeling of protein complexes is unreliable, our approach provides sequence-based accurate information about specific details of protein interactions at the residue level. Selected examples of the application of prokaryotic coevolutionary analysis to the prediction of eukaryotic interfaces further illustrate the potential of this approach.
NASA Astrophysics Data System (ADS)
Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida
2017-04-01
Every noise problem requires a different solution. In this research, the noise that must be cancelled comes from a roadway. The least mean square (LMS) adaptive algorithm is one method that can be used to cancel such noise. Residual noise always remains and cannot be erased completely. This research aims to characterize and analyze the residual noise from vehicle noise so that it no longer poses a problem. The LMS algorithm was used to predict the vehicle noise and minimize the error. The distribution of the residual noise was observed to determine its specific character. The statistics of the residual noise are close to a normal distribution with μ = 0.0435 and σ = 1.13, and the autocorrelation of the residual noise approximates an impulse. In conclusion, the residual noise is insignificant.
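A minimal numpy sketch of LMS adaptive noise cancellation shows where the residual comes from: the filter learns to predict the roadway noise from a reference sensor, and whatever it cannot predict remains as residual noise. The filter order, step size, and toy acoustic channel below are assumptions, not the paper's setup:

```python
import numpy as np

def lms_cancel(reference, primary, order=8, mu=0.01):
    """LMS adaptive noise cancellation: filter the noise reference to match
    the noise in the primary signal; the error e is the cleaned output."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for k in range(order, len(primary)):
        x = reference[k - order + 1:k + 1][::-1]  # most recent samples first
        y = w @ x                                 # filter output (noise estimate)
        e = primary[k] - y                        # residual = signal + unmodelled noise
        w += 2 * mu * e * x                       # LMS weight update
        out[k] = e
    return out

rng = np.random.default_rng(0)
n = 4000
noise = rng.standard_normal(n)
# the noise reaches the primary microphone through a short FIR channel
noise_at_primary = np.convolve(noise, [0.6, 0.3, 0.1])[:n]
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))
primary = signal + noise_at_primary
cleaned = lms_cancel(noise, primary)
err_before = np.mean((primary[2000:] - signal[2000:]) ** 2)
err_after = np.mean((cleaned[2000:] - signal[2000:]) ** 2)
print(err_before, err_after)   # residual noise power drops after convergence
```

After convergence the residual is dominated by gradient noise in the weight updates, which is what a distribution and autocorrelation analysis of the residual would characterize.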
Restrictions on biological adaptation in language evolution.
Chater, Nick; Reali, Florencia; Christiansen, Morten H
2009-01-27
Language acquisition and processing are governed by genetic constraints. A crucial unresolved question is how far these genetic constraints have coevolved with language, perhaps resulting in a highly specialized and species-specific language "module," and how much language acquisition and processing redeploy preexisting cognitive machinery. In the present work, we explored the circumstances under which genes encoding language-specific properties could have coevolved with language itself. We present a theoretical model, implemented in computer simulations, of key aspects of the interaction of genes and language. Our results show that genes for language could have coevolved only with highly stable aspects of the linguistic environment; a rapidly changing linguistic environment does not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes. We argue that this rules out the possibility that arbitrary properties of language, including abstract syntactic principles governing phrase structure, case marking, and agreement, have been built into a "language module" by natural selection. The genetic basis of human language acquisition and processing did not coevolve with language, but primarily predates the emergence of language. As suggested by Darwin, the fit between language and its underlying mechanisms arose because language has evolved to fit the human brain, rather than the reverse.
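The central result, that genes can track only stable features of the linguistic environment, can be illustrated with a toy selection model. Population size, selection strength, mutation rate, and the two-convention setup below are all illustrative assumptions, not the authors' simulation:

```python
import random

def simulate(change_every, generations=400, pop=200, seed=2):
    """Toy haploid model of gene-culture coevolution: allele 0/1 biases an
    agent toward one of two linguistic conventions, and matching the
    current convention confers a fitness advantage."""
    rng = random.Random(seed)
    convention = 0
    alleles = [rng.randrange(2) for _ in range(pop)]
    for g in range(1, generations + 1):
        if change_every and g % change_every == 0:
            convention ^= 1          # the convention drifts culturally
        fitness = [1.2 if a == convention else 1.0 for a in alleles]
        # fitness-proportional reproduction with rare mutation
        offspring = rng.choices(alleles, weights=fitness, k=pop)
        alleles = [a ^ (rng.random() < 0.01) for a in offspring]
    # fraction of the population genetically matched to the current convention
    return sum(a == convention for a in alleles) / pop

stable = simulate(change_every=0)    # stable convention: matching allele spreads
fast = simulate(change_every=10)     # fast cultural change: no stable target
print(stable, fast)
```

With a fixed convention the matching allele sweeps to near fixation, while a convention that flips faster than selection can respond leaves the gene pool near chance, the moving-target effect the paper argues rules out a genetically encoded "language module" for arbitrary conventions.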
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
NASA Astrophysics Data System (ADS)
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
Mirroring co-evolving trees in the light of their topologies.
Hajirasouliha, Iman; Schönhuth, Alexander; de Juan, David; Valencia, Alfonso; Sahinalp, S Cenk
2012-05-01
Determining the interaction partners among protein/domain families poses hard computational problems, in particular in the presence of paralogous proteins. Available approaches aim to identify interaction partners among protein/domain families through maximizing the similarity between trimmed versions of their phylogenetic trees. Since maximization of any natural similarity score is computationally difficult, many approaches employ heuristics to evaluate the distance matrices corresponding to the tree topologies in question. In this article, we devise an efficient deterministic algorithm which directly maximizes the similarity between two leaf labeled trees with edge lengths, obtaining a score-optimal alignment of the two trees in question. Our algorithm is significantly faster than those methods based on distance matrix comparison: 1 min on a single processor versus 730 h on a supercomputer. Furthermore, we outperform the current state-of-the-art exhaustive search approach in terms of precision, while incurring acceptable losses in recall. A C implementation of the method demonstrated in this article is available at http://compbio.cs.sfu.ca/mirrort.htm
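For contrast with the direct tree-alignment approach, the distance-matrix heuristic it improves on can be sketched in a few lines: score a candidate partner mapping by the correlation between the two families' inter-sequence distances. The toy matrices and mappings are illustrative assumptions:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def mirror_score(dist_a, dist_b, mapping):
    """Mirrortree-style score: correlation between the two families'
    distance matrices under a candidate partner mapping."""
    pairs = [(i, j) for i in mapping for j in mapping if i < j]
    xs = [dist_a[i][j] for i, j in pairs]
    ys = [dist_b[mapping[i]][mapping[j]] for i, j in pairs]
    return pearson(xs, ys)

# toy distance matrices for two families whose trees mirror each other
dist_a = [[0, 2, 6], [2, 0, 6], [6, 6, 0]]
dist_b = [[0, 3, 9], [3, 0, 9], [9, 9, 0]]
good = {0: 0, 1: 1, 2: 2}        # correct partner assignment
bad = {0: 2, 1: 1, 2: 0}         # scrambled assignment
print(mirror_score(dist_a, dist_b, good), mirror_score(dist_a, dist_b, bad))
```

Maximizing such a score over all mappings is what makes the matrix-based formulation expensive; the paper's contribution is a deterministic algorithm that optimizes tree similarity directly instead.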
PRMT7 Interacts with ASS1 and Citrullinemia Mutations Disrupt the Interaction.
Verma, Mamta; Charles, Ramya Chandar M; Chakrapani, Baskar; Coumar, Mohane Selvaraj; Govindaraju, Gayathri; Rajavelu, Arumugam; Chavali, Sreenivas; Dhayalan, Arunkumar
2017-07-21
Protein arginine methyltransferase 7 (PRMT7) catalyzes the introduction of monomethylation marks at the arginine residues of substrate proteins. PRMT7 plays important roles in the regulation of gene expression, splicing, DNA damage, paternal imprinting, cancer and metastasis. However, little is known about the interaction partners of PRMT7. To address this, we performed yeast two-hybrid screening of PRMT7 and identified argininosuccinate synthetase (ASS1) as a potential interaction partner of PRMT7. We confirmed that PRMT7 directly interacts with ASS1 using pull-down studies. ASS1 catalyzes the rate-limiting step of arginine synthesis in urea cycle and citrulline-nitric oxide cycle. We mapped the interface of PRMT7-ASS1 complex through computational approaches and validated the predicted interface in vivo by site-directed mutagenesis. Evolutionary analysis revealed that the ASS1 residues important for PRMT7-ASS1 interaction have co-evolved with PRMT7. We showed that ASS1 mutations linked to type I citrullinemia disrupt the ASS1-PRMT7 interaction, which might explain the molecular pathogenesis of the disease. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics
Improved pressure-velocity coupling algorithm based on minimization of global residual norm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatwani, A.U.; Turan, A.
1991-01-01
In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
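The selection rule can be illustrated on a toy problem. This is not the SIMPLE/SIMPLEC machinery: the pressure-correction equations are replaced by a relaxed Jacobi update on an invented linear system, and the candidate factors are made up. It only shows the core idea of picking, at each iteration, the underrelaxation factor whose updated iterate has the smallest global residual norm:

```python
def residual_norm(A, x, b):
    """L2 norm of the residual r = b - A x."""
    n = len(b)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(ri * ri for ri in r) ** 0.5

def relaxed_solve(A, b, alphas=(0.3, 0.5, 0.7, 0.9), iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Unrelaxed Jacobi update x*.
        x_star = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                  for i in range(n)]
        # Pick the factor whose relaxed iterate minimizes the residual norm.
        best = min(alphas, key=lambda a: residual_norm(
            A, [x[i] + a * (x_star[i] - x[i]) for i in range(n)], b))
        x = [x[i] + best * (x_star[i] - x[i]) for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # invented diagonally dominant test system
b = [1.0, 2.0]
x = relaxed_solve(A, b)
```

The exact solution of this system is x = (1/11, 7/11); the automatically chosen factors drive the residual norm to machine precision well within the iteration budget.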
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Wright, Pat; Christian, Hugh; Blakeslee, Richard; Buechler, Dennis; Scharfen, Greg
1991-01-01
The global lightning signatures were analyzed from the DMSP Optical Linescan System (OLS) imagery archived at the National Snow and Ice Data Center. As the digital archive becomes available, the analysis will transition to it, allowing annual, interannual, and seasonal variations to be compared with other global data sets. An initial survey of the quality of the existing film archive was completed and lightning signatures were digitized for the summer months of 1986 to 1987. The relationship is studied between: (1) global and regional lightning activity and rainfall, and (2) storm electrical development and environment. Remote sensing data sets obtained from field programs are used in conjunction with satellite/radar/lightning data to develop and improve precipitation estimation algorithms, and to provide a better understanding of the co-evolving electrical, microphysical, and dynamical structure of storms.
Stetz, Gabrielle; Verkhivker, Gennady M.
2017-01-01
Allosteric interactions in the Hsp70 proteins are linked with their regulatory mechanisms and cellular functions. Despite significant progress in structural and functional characterization of the Hsp70 proteins, fundamental questions concerning the modularity of the allosteric interaction networks and the hierarchy of signaling pathways in the Hsp70 chaperones remain largely unexplored and poorly understood. In this work, we proposed an integrated computational strategy that combined atomistic and coarse-grained simulations with coevolutionary analysis and network modeling of the residue interactions. A novel aspect of this work is the incorporation of dynamic residue correlations and coevolutionary residue dependencies in the construction of allosteric interaction networks and signaling pathways. We found that functional sites involved in allosteric regulation of Hsp70 may be characterized by structural stability, proximity to global hinge centers and a local structural environment that is enriched by highly coevolving flexible residues. These specific characteristics may be necessary for regulation of allosteric structural transitions and could distinguish regulatory sites from nonfunctional conserved residues. The observed confluence of dynamics correlations and coevolutionary residue couplings with global networking features may determine modular organization of allosteric interactions and dictate localization of key mediating sites. Community analysis of the residue interaction networks revealed that concerted rearrangements of local interacting modules at the inter-domain interface may be responsible for global structural changes and a population shift in the DnaK chaperone. The inter-domain communities in the Hsp70 structures harbor the majority of regulatory residues involved in allosteric signaling, suggesting that these sites could be integral to the network organization and coordination of structural changes.
Using a network-based formalism of allostery, we introduced a community-hopping model of allosteric communication. Atomistic reconstruction of signaling pathways in the DnaK structures captured a direction-specific mechanism and molecular details of signal transmission that are fully consistent with the mutagenesis experiments. The results of our study reconciled structural and functional experiments from a network-centric perspective by showing that global properties of the residue interaction networks and coevolutionary signatures may be linked with specificity and diversity of allosteric regulation mechanisms. PMID:28095400
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking
Kiku, Daisuke; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
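The residual interpolation idea underlying ARI can be sketched in one dimension. This toy is not the authors' implementation: the "guide" and "target" signals, the sampling pattern, and the linear interpolator are all invented. It shows why interpolating residuals against a correlated guide beats interpolating the sparse channel directly when the residual field is smoother than the channel itself:

```python
def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation of the samples (xs, ys) at x."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside sample range")

# Dense guide channel and a correlated target channel sampled every 2nd pixel,
# mimicking a CFA in which one band is more densely sampled than another.
guide = [float(i) for i in range(11)]           # e.g. the green plane
target_true = [2.0 * g + 1.0 for g in guide]    # correlated "red" plane
sample_idx = list(range(0, 11, 2))              # sparse CFA-like samples

# Tentative estimate: use the guide directly (assumes target ~ guide locally).
tentative = guide
# Residuals at the positions where the target is actually observed.
res = [target_true[i] - tentative[i] for i in sample_idx]
# Interpolate the (smooth) residuals everywhere, then add them back.
recon = [tentative[i] + linear_interp(sample_idx, res, i) for i in range(11)]
```

Because the residual here is exactly linear, the reconstruction is exact; ARI's contribution is to choose adaptively, per pixel, between competing RI estimators and iteration counts when the residual is not this well behaved.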
Wetzel, Margaret E.; Olsen, Gary J.; Chakravartty, Vandana; ...
2015-11-19
The large repABC plasmids of the order Rhizobiales with Class I quorum-regulated conjugative transfer systems often define the nature of the bacterium that harbors them. These otherwise diverse plasmids contain a core of highly conserved genes for replication and conjugation, raising the question of their evolutionary relationships. In an analysis of 18 such plasmids, these elements fall into two organizational classes, Group I and Group II, based on the sites at which cargo DNA is located. Cladograms constructed from proteins of the transfer and quorum-sensing components indicated that those of the Group I plasmids, while coevolving, have diverged from those coevolving proteins of the Group II plasmids. Moreover, within these groups the phylogenies of the proteins usually occupy similar, if not identical, tree topologies. Remarkably, such relationships were not seen among proteins of the replication system; although RepA and RepB coevolve, RepC does not. Nor do the replication proteins coevolve with the proteins of the transfer and quorum-sensing systems. Functional analysis was mostly consistent with phylogenies. TraR activated promoters from plasmids within its group, but not between groups, and dimerized with TraR proteins from within but not between groups. However, oriT sequences, which are highly conserved, were processed by the transfer system of plasmids regardless of group. Here, we conclude that these plasmids diverged into two classes based on the locations at which cargo DNA is inserted, that the quorum-sensing and transfer functions are coevolving within but not between the two groups, and that this divergent evolution extends to function.
Hotspots for allosteric regulation on protein surfaces
Reynolds, Kimberly A.; McLaughlin, Richard N.; Ranganathan, Rama
2012-01-01
Recent work indicates a general architecture for proteins in which sparse networks of physically contiguous and co-evolving amino acids underlie basic aspects of structure and function. These networks, termed sectors, are spatially organized such that active sites are linked to many surface sites distributed throughout the structure. Using the metabolic enzyme dihydrofolate reductase as a model system, we show that (1) the sector is strongly correlated to a network of residues undergoing millisecond conformational fluctuations associated with enzyme catalysis and (2) sector-connected surface sites are statistically preferred locations for the emergence of allosteric control in vivo. Thus, sectors represent an evolutionarily conserved “wiring” mechanism that can enable perturbations at specific surface positions to rapidly initiate conformational control over protein function. These findings suggest that sectors enable the evolution of intermolecular communication and regulation. PMID:22196731
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemla, A; Lang, D; Kostova, T
2010-11-29
Most of the currently used methods for protein function prediction rely on sequence-based comparisons between a query protein and those for which a functional annotation is provided. A serious limitation of sequence similarity-based approaches for identifying residue conservation among proteins is the low confidence in assigning residue-residue correspondences among proteins when the level of sequence identity between the compared proteins is poor. Multiple sequence alignment methods are more satisfactory - still, they cannot provide reliable results at low levels of sequence identity. Our goal in the current work was to develop an algorithm that could overcome these difficulties and facilitate the identification of structurally (and possibly functionally) relevant residue-residue correspondences between compared protein structures. Here we present StralSV, a new algorithm for detecting closely related structure fragments and quantifying residue frequency from tight local structure alignments. We apply StralSV in a study of the RNA-dependent RNA polymerase of poliovirus and demonstrate that the algorithm can be used to determine regions of the protein that are relatively unique or that share structural similarity with structures that are distantly related. By quantifying residue frequencies among many residue-residue pairs extracted from local alignments, one can infer potential structural or functional importance of specific residues that are determined to be highly conserved or that deviate from a consensus. We further demonstrate that considerable detailed structural and phylogenetic information can be derived from StralSV analyses. StralSV is a new structure-based algorithm for identifying and aligning structure fragments that have similarity to a reference protein. 
StralSV analysis can be used to quantify residue-residue correspondences and identify residues that may be of particular structural or functional importance, as well as unusual or unexpected residues at a given sequence position.
Zemla, Adam T; Lang, Dorothy M; Kostova, Tanya; Andino, Raul; Ecale Zhou, Carol L
2011-06-02
Most of the currently used methods for protein function prediction rely on sequence-based comparisons between a query protein and those for which a functional annotation is provided. A serious limitation of sequence similarity-based approaches for identifying residue conservation among proteins is the low confidence in assigning residue-residue correspondences among proteins when the level of sequence identity between the compared proteins is poor. Multiple sequence alignment methods are more satisfactory--still, they cannot provide reliable results at low levels of sequence identity. Our goal in the current work was to develop an algorithm that could help overcome these difficulties by facilitating the identification of structurally (and possibly functionally) relevant residue-residue correspondences between compared protein structures. Here we present StralSV (structure-alignment sequence variability), a new algorithm for detecting closely related structure fragments and quantifying residue frequency from tight local structure alignments. We apply StralSV in a study of the RNA-dependent RNA polymerase of poliovirus, and we demonstrate that the algorithm can be used to determine regions of the protein that are relatively unique, or that share structural similarity with proteins that would be considered distantly related. By quantifying residue frequencies among many residue-residue pairs extracted from local structural alignments, one can infer potential structural or functional importance of specific residues that are determined to be highly conserved or that deviate from a consensus. We further demonstrate that considerable detailed structural and phylogenetic information can be derived from StralSV analyses. StralSV is a new structure-based algorithm for identifying and aligning structure fragments that have similarity to a reference protein. 
StralSV analysis can be used to quantify residue-residue correspondences and identify residues that may be of particular structural or functional importance, as well as unusual or unexpected residues at a given sequence position. StralSV is provided as a web service at http://proteinmodel.org/AS2TS/STRALSV/.
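The residue-frequency tally at the heart of this quantification can be illustrated with a toy example. StralSV's structure-alignment machinery is omitted entirely, and the residue-residue correspondences below are invented; the sketch only shows how, once correspondences are extracted, per-position counts yield a consensus residue and its frequency:

```python
from collections import Counter

# One (reference_position, residue) tuple per residue-residue correspondence
# extracted from a local alignment; positions and residues are invented.
correspondences = [
    (10, "G"), (10, "G"), (10, "G"), (10, "A"),
    (11, "D"), (11, "D"), (11, "D"), (11, "D"),
]

# Tally which residues the aligned fragments place at each reference position.
tallies = {}
for pos, aa in correspondences:
    tallies.setdefault(pos, Counter())[aa] += 1

def consensus(pos):
    """Return the most frequent residue at `pos` and its frequency."""
    counts = tallies[pos]
    aa, n = counts.most_common(1)[0]
    return aa, n / sum(counts.values())

cons10 = consensus(10)   # mostly conserved, with one deviation from consensus
cons11 = consensus(11)   # strictly conserved across the aligned fragments
```

Positions like 11 (frequency 1.0) would be flagged as highly conserved, while the deviating residue at position 10 is the kind of "unusual or unexpected" residue the abstract refers to.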
Polačik, Matej; Smith, Carl; Honza, Marcel; Reichard, Martin
2018-01-01
Obligate brood parasites manipulate other species into raising their offspring. Avian and insect brood parasitic systems demonstrate how interacting species engage in reciprocal coevolutionary arms races through behavioral and morphological adaptations and counteradaptations. Mouthbrooding cichlid fishes are renowned for their remarkable evolutionary radiations and complex behaviors. In Lake Tanganyika, mouthbrooding cichlids are exploited by the only obligate nonavian vertebrate brood parasite, the cuckoo catfish Synodontis multipunctatus. We show that coevolutionary history and individual learning both have a major impact on the success of cuckoo catfish parasitism between coevolved sympatric and evolutionarily naïve allopatric cichlid species. The rate of cuckoo catfish parasitism in coevolved Tanganyikan hosts was 3 to 11 times lower than in evolutionarily naïve cichlids. Moreover, using experimental infections, we demonstrate that parasite egg rejection in sympatric hosts was much higher, leading to seven times greater parasite survival in evolutionarily naïve than sympatric hosts. However, a high rejection frequency of parasitic catfish eggs by coevolved sympatric hosts came at a cost of increased rejection of their own eggs. A significant cost of catfish parasitism was universal, except for coevolved sympatric cichlid species with previous experience of catfish parasitism, demonstrating that learning and individual experience both contribute to a successful host response. PMID:29732407
Mittal, Anuradha; Holehouse, Alex S; Cohan, Megan C; Pappu, Rohit V
2018-05-12
Intrinsically disordered proteins and regions (IDPs/IDRs) are characterized by well-defined sequence-to-conformation relationships (SCRs). These relationships refer to the sequence-specific preferences for average sizes, shapes, residue-specific secondary structure propensities, and amplitudes of multiscale conformational fluctuations. SCRs are discerned from the sequence-specific conformational ensembles of IDPs. A vast majority of IDPs are actually tethered to folded domains (FDs). This raises the question of whether or not SCRs inferred for IDPs are applicable to IDRs tethered to folded domains. Here, we use atomistic simulations based on a well-established force field paradigm and an enhanced sampling method to obtain comparative assessments of SCRs for thirteen archetypal IDRs modeled as autonomous units, as C-terminal tails connected to folded domains, and as linkers between pairs of folded domains. Our studies uncover a set of general observations regarding context-independent versus context-dependent SCRs of IDRs. SCRs are minimally perturbed upon tethering to folded domains if the IDRs are deficient in charged residues and for polyampholytic IDRs where the oppositely charged residues within the sequence of the IDR are separated into distinct blocks. In contrast, the interplay between IDRs and tethered folded domains has a significant modulatory effect on SCRs if the IDRs have intermediate fractions of charged residues or if they have sequence-intrinsic conformational preferences for canonical random coils. Our findings suggest that IDRs with context-independent SCRs might be independent evolutionary modules whereas IDRs with context-dependent intrinsic SCRs might co-evolve with the FDs to which they are tethered. Copyright © 2018. Published by Elsevier Ltd.
A critical analysis of computational protein design with sparse residue interaction graphs
Georgiev, Ivelin S.
2017-01-01
Protein design algorithms enumerate a combinatorial number of candidate structures to compute the Global Minimum Energy Conformation (GMEC). To efficiently find the GMEC, protein design algorithms must methodically reduce the conformational search space. By applying distance and energy cutoffs, the protein system to be designed can thus be represented using a sparse residue interaction graph, where the number of interacting residue pairs is less than all pairs of mutable residues, and the corresponding GMEC is called the sparse GMEC. However, ignoring some pairwise residue interactions can lead to a change in the energy, conformation, or sequence of the sparse GMEC vs. the original or the full GMEC. Despite the widespread use of sparse residue interaction graphs in protein design, the above-mentioned effects of their use have not been previously analyzed. To analyze the costs and benefits of designing with sparse residue interaction graphs, we computed the GMECs for 136 different protein design problems both with and without distance and energy cutoffs, and compared their energies, conformations, and sequences. Our analysis shows that the differences between the GMECs depend critically on whether or not the design includes core, boundary, or surface residues. Moreover, neglecting long-range interactions can alter local interactions and introduce large sequence differences, both of which can result in significant structural and functional changes. Designs on proteins with experimentally measured thermostability show it is beneficial to compute both the full and the sparse GMEC accurately and efficiently. To this end, we show that a provable, ensemble-based algorithm can efficiently compute both GMECs by enumerating a small number of conformations, usually fewer than 1000. 
This provides a novel way to combine sparse residue interaction graphs with provable, ensemble-based algorithms to reap the benefits of sparse residue interaction graphs while avoiding their potential inaccuracies. PMID:28358804
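The graph-sparsification step described above can be sketched with a plain distance cutoff. The residue coordinates below are invented test data, and a real design would prune on pairwise energy terms from a force field as well as distance; the sketch only shows how a cutoff turns the all-pairs interaction graph into a sparse one:

```python
import math

# Residue id -> representative coordinate (e.g. C-beta), invented for illustration.
residues = {
    1: (0.0, 0.0, 0.0),
    2: (4.0, 0.0, 0.0),
    3: (8.5, 0.0, 0.0),
    4: (0.0, 5.0, 0.0),
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interaction_graph(residues, cutoff):
    """Keep only residue pairs closer than `cutoff` (in Angstroms)."""
    ids = sorted(residues)
    return {(i, j)
            for k, i in enumerate(ids) for j in ids[k + 1:]
            if dist(residues[i], residues[j]) < cutoff}

full = interaction_graph(residues, float("inf"))   # all pairs of mutable residues
sparse = interaction_graph(residues, 6.0)          # sparse residue interaction graph
```

Pairs dropped from `sparse` are exactly the long-range interactions whose neglect, as the abstract notes, can shift the sparse GMEC away from the full GMEC.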
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
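The three-condition detection rule lends itself to a compact sketch. The loads, residuals, and capacity below are invented test data, not values from the cited calibration; only the thresholds (0.95 on |r|, 0.25 % of capacity on the peak residual, and the intentionally-applied flag) follow the description above:

```python
def pearson(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def flag_pair(loads, residuals, capacity, intentionally_applied):
    """Flag an unexpected high correlation for one residual/load pair
    within a load series, per the three conditions above."""
    r = pearson(loads, residuals)
    peak = max(abs(e) for e in residuals)
    return bool(abs(r) > 0.95
                and peak > 0.0025 * capacity
                and intentionally_applied)

# Invented load series: residuals trend almost linearly with the applied load.
loads = [0.0, 100.0, 200.0, 300.0, 400.0]
residuals = [0.0, 0.3, 0.61, 0.92, 1.2]
capacity = 400.0
flagged = flag_pair(loads, residuals, capacity, True)
```

Here the residuals track the load almost perfectly and their peak (1.2) exceeds 0.25 % of capacity (1.0), so the pair is flagged; small uncorrelated residuals would pass unflagged.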
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks
Jiang, Peng; Xu, Yiming; Liu, Jun
2017-01-01
For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node on the basis of the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of each of its neighbor nodes being selected by the event it manages from the nodes' distance level, residual energy level, and number of dynamically covered events. Third, each management node establishes an optimization model that takes the expected energy consumption and residual energy variance of its neighbor nodes, together with the detection performance for the events it manages, as its objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm is the first to consider the effect of harsh underwater environments on information collection and transmission, as well as the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837
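The final TOPSIS step, which picks one strategy from the Pareto set, can be sketched on its own. NSGA-II is omitted, and the candidate strategies, their two criteria (detection quality as a benefit, expected energy cost as a cost), and all scores are invented; only the standard TOPSIS ranking by closeness to the ideal and anti-ideal points is shown:

```python
def topsis(matrix, benefit):
    """matrix[i][j]: score of alternative i on criterion j.
    benefit[j]: True if larger is better on criterion j, False if smaller is."""
    ncrit = len(matrix[0])
    # Vector-normalize each criterion column.
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncrit)]
    m = [[row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    # Ideal and anti-ideal points over the normalized alternatives.
    ideal = [max(r[j] for r in m) if benefit[j] else min(r[j] for r in m)
             for j in range(ncrit)]
    anti = [min(r[j] for r in m) if benefit[j] else max(r[j] for r in m)
            for j in range(ncrit)]
    def d(row, ref):
        return sum((row[j] - ref[j]) ** 2 for j in range(ncrit)) ** 0.5
    # Relative closeness: 1 = at the ideal point, 0 = at the anti-ideal point.
    return [d(r, anti) / (d(r, anti) + d(r, ideal)) for r in m]

# Invented Pareto-set strategies: [detection quality, expected energy cost].
strategies = [[0.9, 5.0], [0.8, 2.0], [0.4, 1.0]]
scores = topsis(strategies, benefit=[True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

The middle strategy wins: it trades a little detection quality for a large energy saving, which is exactly the kind of compromise TOPSIS is used to select from a Pareto front.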
Protein Sectors: Statistical Coupling Analysis versus Conservation
Teşileanu, Tiberiu; Colwell, Lucy J.; Leibler, Stanislas
2015-01-01
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that was used to identify groups of coevolving residues termed “sectors”. The method applies spectral analysis to a matrix obtained by combining correlation information with sequence conservation. It has been asserted that the protein sectors identified by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. Here we reconsider the available experimental data and note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is the dominating factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation. PMID:25723535
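A stripped-down version of the SCA-style computation makes the interplay of correlation and conservation concrete. This is not the published SCA method: the alignment is a six-sequence toy, the conservation weight is simplified to the consensus frequency, and a plain power iteration stands in for the full spectral analysis. Here positions 1 and 2 covary perfectly, so they dominate the leading eigenvector:

```python
import math

aln = [                    # toy alignment: 6 sequences x 4 positions
    "ALKV",
    "ALKI",
    "AMRV",
    "AMRI",
    "GLKV",
    "ALKV",
]
nseq, npos = len(aln), len(aln[0])

def freq(j, a):
    """Frequency of residue `a` in alignment column `j`."""
    return sum(seq[j] == a for seq in aln) / nseq

# Binarize each column on its consensus residue; weight columns by conservation.
consensus = [max(set(s[j] for s in aln), key=lambda a, j=j: freq(j, a))
             for j in range(npos)]
x = [[1.0 if seq[j] == consensus[j] else 0.0 for j in range(npos)] for seq in aln]
w = [freq(j, consensus[j]) for j in range(npos)]   # simplified conservation weight

def cov(i, j):
    mi = sum(row[i] for row in x) / nseq
    mj = sum(row[j] for row in x) / nseq
    return sum((row[i] - mi) * (row[j] - mj) for row in x) / nseq

# Conservation-weighted (absolute) correlation matrix between columns.
C = [[w[i] * w[j] * abs(cov(i, j)) for j in range(npos)] for i in range(npos)]

# Power iteration for the leading eigenvector, a crude "sector" signal.
v = [1.0] * npos
for _ in range(200):
    v = [sum(C[i][j] * v[j] for j in range(npos)) for i in range(npos)]
    norm = math.sqrt(sum(c * c for c in v))
    v = [c / norm for c in v]
```

The paper's point can be probed in exactly this setting: replacing `cov` by a constant recovers a pure-conservation ranking, and comparing the two rankings shows how strongly conservation alone drives the leading mode.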
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.
1990-01-01
The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator, along with linear combinations of the residual equations, are used to develop the algorithm. The algorithm was tested on a wide variety of unstructured meshes and found to give satisfactory results.
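The element-level building block of such a Galerkin discretization is the standard linear-triangle (P1) stiffness matrix for the Laplacian. The formula below is the textbook weak-form result, not the paper's specific finite difference formulae, and the triangle coordinates are test data; assembly of element contributions over the unstructured grid is omitted:

```python
def laplacian_stiffness(p1, p2, p3):
    """P1 element stiffness matrix K_ij = integral of grad(N_i).grad(N_j)
    over a triangle with vertices p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Signed twice-area and the shape-function gradient coefficients.
    twoA = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    b = [y2 - y3, y3 - y1, y1 - y2]
    c = [x3 - x2, x1 - x3, x2 - x1]
    area = abs(twoA) / 2.0
    return [[(b[i] * b[j] + c[i] * c[j]) / (4.0 * area) for j in range(3)]
            for i in range(3)]

# Unit right triangle as a test element.
K = laplacian_stiffness((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Each row of K sums to zero, reflecting that constants lie in the nullspace of the Laplacian; summing these element matrices node-by-node over a mesh yields the finite-difference-like stencils the abstract describes.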
Nguyen, Tuan; Ruan, Zheng; Oruganty, Krishnadev; Kannan, Natarajan
2015-01-01
Mitogen-activated protein kinases (MAPKs) form a closely related family of kinases that control critical pathways associated with cell growth and survival. Although MAPKs have been extensively characterized at the biochemical, cellular, and structural level, an integrated evolutionary understanding of how MAPKs differ from other closely related protein kinases is currently lacking. Here, we perform statistical sequence comparisons of MAPKs and related protein kinases to identify sequence and structural features associated with MAPK functional divergence. We show, for the first time, that virtually all MAPK-distinguishing sequence features, including an unappreciated short insert segment in the β4-β5 loop, physically couple distal functional sites in the kinase domain to the D-domain peptide docking groove via the C-terminal flanking tail (C-tail). The coupling mediated by MAPK-specific residues confers an allosteric regulatory mechanism unique to MAPKs. In particular, the regulatory αC-helix conformation is controlled by a MAPK-conserved salt bridge interaction between an arginine in the αC-helix and an acidic residue in the C-tail. The salt-bridge interaction is modulated in unique ways in individual sub-families to achieve regulatory specificity. Our study is consistent with a model in which the C-tail co-evolved with the D-domain docking site to allosterically control MAPK activity. Our study provides testable mechanistic hypotheses for biochemical characterization of MAPK-conserved residues and new avenues for the design of allosteric MAPK inhibitors. PMID:25799139
Singh, Shailza; Mandlik, Vineetha; Shinde, Sonali
2015-03-01
GPI12 represents an important enzyme in the GPI biosynthetic pathway of several parasites like 'Leishmania'. GPI activity is generally regulated either through hindrance of GPI complex assembly formation or through modulation of the lipophosphoglycan (LPG) flux to reduce or enhance the pathogenicity of an organism. Of the various GPI molecules known, GPI12 can be exploited as a target due to the difference in substrate specificity between parasites and humans. In the present study, the functional importance of the co-evolving residues of the GPI12 protein of Leishmania has been highlighted using the GPI proteins belonging to the GlcNAC-deacetylase family. Exploring the active site of the GPI12 protein and designing inhibitors against the functional residues provide ways and means to change the efficiency of the deacetylation activity of the enzyme. The activity of de-N-acetylase is low in the absence of metal ions like zinc. Hence, we designed eight small molecules in order to modulate the activity of GPI12. Compound 8 was found to be an appropriate choice to target the agonist (GPI12) active site, thereby targeting the residues essential for Zn binding and chelation activity. Inhibition of these sites offered a strong constraint to block the protein activity and, in turn, GPI biosynthesis.
ProtoMD: A prototyping toolkit for multiscale molecular dynamics
NASA Astrophysics Data System (ADS)
Somogyi, Endre; Mansour, Andrew Abi; Ortoleva, Peter J.
2016-05-01
ProtoMD is a toolkit that facilitates the development of algorithms for multiscale molecular dynamics (MD) simulations. It is designed for multiscale methods which capture the dynamic transfer of information across multiple spatial scales, such as the atomic to the mesoscopic scale, via coevolving microscopic and coarse-grained (CG) variables. ProtoMD can also be used to calibrate parameters needed in traditional CG-MD methods. The toolkit integrates 'GROMACS wrapper' to initiate MD simulations, and 'MDAnalysis' to analyze and manipulate trajectory files. It facilitates experimentation with a spectrum of coarse-grained variables, prototyping rare events (such as chemical reactions), or simulating nanocharacterization experiments such as terahertz spectroscopy, AFM, nanopore, and time-of-flight mass spectrometry. ProtoMD is written in Python and is freely available under the GNU General Public License from github.com/CTCNano/proto_md.
Singh, Jai
2013-01-01
The objective of this study was a thorough reconsideration, within the framework of Newtonian mechanics and work-energy relationships, of the empirically interpreted relationships employed within the CRASH3 damage analysis algorithm with regard to linearity between barrier equivalent velocity (BEV) or peak collision force magnitude and residual damage depth. The CRASH3 damage analysis algorithm was considered, first in terms of the cases of collisions that produced no residual damage, in order to properly explain the damage onset speed and crush resistance terms. Under the modeling constraints of the collision partners representing a closed system and the a priori assumption of linearity between BEV or peak collision force magnitude and residual damage depth, the equations for the sole realistic model were derived. Evaluation of the work-energy relationships for collisions at or below the elastic limit revealed that the BEV or peak collision force magnitude relationships are bifurcated based upon the residual damage depth. Rather than being additive terms from the linear curve fits employed in the CRASH3 damage analysis algorithm, the Campbell b0 and CRASH3 AL terms represent the maximum values that can be ascribed to the BEV or peak collision force magnitude, respectively, for collisions that produce zero residual damage. Collisions resulting in the production of non-zero residual damage depth already account for the surpassing of the elastic limit during closure, and therefore the secondary addition of the elastic limit terms represents a double accounting of the same. This evaluation shows that the current energy absorbed formulation utilized in the CRASH3 damage analysis algorithm extraneously includes terms associated with the A and G stiffness coefficients. This sole realistic model is limited, however, in that it reduces the coefficient of restitution to a constant value for all cases in which the residual damage depth is nonzero.
Linearity between BEV or peak collision force magnitude and residual damage depth may be applicable for particular ranges of residual damage depth for any given region of any given vehicle. Within the modeling construct employed by the CRASH3 damage algorithm, the case of uniform and ubiquitous linearity cannot be supported. Considerations regarding the inclusion of internal work recovered and restitution for modeling the separation phase change in velocity magnitude should account not only for the effects present during the evaluation of a vehicle-to-vehicle collision of interest but also for the approach taken for modeling the force-deflection response for each collision partner.
Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong
2015-01-01
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka
2018-01-01
A novel image processing algorithm based on a modified Bayesian residual transform (MBRT) was developed for the enhancement of morphological and vascular features in optical coherence tomography (OCT) and OCT angiography (OCTA) images. The MBRT algorithm decomposes the original OCT image into multiple residual images, where each image presents information at a unique scale. Scale selective residual adaptation is used subsequently to enhance morphological features of interest, such as blood vessels and tissue layers, and to suppress irrelevant image features such as noise and motion artefacts. The performance of the proposed MBRT algorithm was tested on a series of cross-sectional and enface OCT and OCTA images of retina and brain tissue that were acquired in vivo. Results show that the MBRT reduces speckle noise and motion-related imaging artefacts locally, thus significantly improving the contrast and visibility of morphological features in the OCT and OCTA images. PMID:29760996
Expanded envelope concepts for aircraft control-element failure detection and identification
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1988-01-01
The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.
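The residual-generation and decision-making split described above can be illustrated with a deliberately simplified sketch: a residual computed from a force-balance model, and a persistence-thresholded test as a crude stand-in for the decision mechanism. All numbers and names here are hypothetical, not taken from the AFTI-F-16 work.

```python
def detect_failure(accels, forces, mass, threshold, persist):
    """Flag a control-element failure when the force-balance residual
    r = a_measured - F_modeled / m exceeds `threshold` for `persist`
    consecutive samples (a toy decision mechanism)."""
    count = 0
    for k, (a, f) in enumerate(zip(accels, forces)):
        r = a - f / mass          # analytic-redundancy residual
        count = count + 1 if abs(r) > threshold else 0
        if count >= persist:
            return k              # sample index at which failure is declared
    return None                   # no failure detected
```

The tradeoff the abstract mentions shows up directly here: a tighter threshold detects smaller failures but is more easily triggered by model error, which is why adaptation of both the residual model and the threshold is needed away from a single flight condition.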
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
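The pivoting idea, choosing the next column by how much it reduces the least squares residual, can be sketched as a greedy Gram-Schmidt selection. This is an illustration of the principle, not the MRQR implementation itself; function names are mine.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_qr_select(cols, y, k):
    """Greedy column selection: at each step, pick the column whose
    inclusion most reduces the least-squares residual norm, the
    residual-based pivoting idea behind MRQR-style schemes."""
    Q, chosen = [], []
    r = list(y)
    for _ in range(k):
        best, best_gain, best_q = None, 0.0, None
        for j, a in enumerate(cols):
            if j in chosen:
                continue
            q = list(a)
            for qi in Q:                      # orthogonalize against current basis
                c = dot(qi, q)
                q = [qv - c * qiv for qv, qiv in zip(q, qi)]
            n = dot(q, q) ** 0.5
            if n < 1e-12:                     # column linearly dependent, skip
                continue
            q = [qv / n for qv in q]
            gain = dot(q, r) ** 2             # squared drop in residual norm
            if gain > best_gain:
                best, best_gain, best_q = j, gain, q
        if best is None:
            break
        chosen.append(best)
        Q.append(best_q)
        c = dot(best_q, r)
        r = [rv - c * qv for rv, qv in zip(r, best_q)]
    return chosen, r
```

With y constructed from two of four candidate columns, the selection recovers exactly those two and drives the residual to zero.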
Thermal nanostructure: An order parameter multiscale ensemble approach
NASA Astrophysics Data System (ADS)
Cheluvaraja, S.; Ortoleva, P.
2010-02-01
Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.
Projections for fast protein structure retrieval
Bhattacharya, Sourangshu; Bhattacharyya, Chiranjib; Chandra, Nagasuma R
2006-01-01
Background: In recent times, there has been an exponential rise in the number of protein structures in databases, e.g. the PDB. Design of fast algorithms capable of querying such databases is therefore becoming an increasingly important research issue. This paper reports an algorithm, motivated by spectral graph matching techniques, for retrieving protein structures similar to a query structure from a large protein structure database. Each protein structure is specified by the 3D coordinates of residues of the protein. The algorithm is based on a novel characterization of the residues, called projections, leading to a similarity measure between the residues of the two proteins. This measure is exploited to efficiently compute the optimal equivalences. Results: Experimental results show that the current algorithm outperforms the state of the art on benchmark datasets in terms of speed without losing accuracy. Search results on the SCOP 95% nonredundant database, for fold similarity with 5 proteins from different SCOP classes, show that the current method performs competitively with the standard algorithm CE. The algorithm is also capable of detecting non-topological similarities between two proteins, which is not possible with most state-of-the-art tools like Dali. PMID:17254310
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-05-08
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection, and it is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
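The core mechanism, a per-pixel temporal low-pass offset estimate whose update is gated by scene content, can be sketched as follows. The hard threshold below is a crude stand-in for the paper's fuzzy scene classification, and the whole routine is an assumption-laden toy, not the published algorithm.

```python
def correct_sequence(frames, alpha=0.1, gate=5.0):
    """Scene-based nonuniformity correction sketch (1-D 'image' per frame).

    Per-pixel offsets are estimated with a temporal low-pass filter; the
    update is skipped when a pixel departs strongly from its low-pass
    estimate (hard-threshold stand-in for fuzzy scene classification).
    The spatial mean is kept so the scene level is not filtered away.
    Returns the last corrected frame."""
    lp = list(frames[0])                        # low-pass state, seeded with frame 0
    corrected = list(frames[0])
    for x in frames:
        for j, xj in enumerate(x):
            if abs(xj - lp[j]) < gate:          # "scene-like" pixel: update slowly
                lp[j] += alpha * (xj - lp[j])
        mean_lp = sum(lp) / len(lp)
        offset = [v - mean_lp for v in lp]      # fixed-pattern part of the estimate
        corrected = [xj - oj for xj, oj in zip(x, offset)]
    return corrected
```

On a static scene with a fixed per-pixel pattern, the pixel-to-pixel ripple is removed while the mean scene level survives; the gate is what prevents moving scene detail from being absorbed into the offset estimate.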
Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I
2016-12-01
Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.
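A minimal global-best particle swarm optimizer, the generic form of the PSO used above for tuning tracking weights, can be written in a few lines. This is a textbook sketch on a toy objective, not the study's tracking-weight setup; all parameter values are conventional defaults.

```python
import random

def pso(f, dim, n=20, iters=200, seed=1, lo=-10.0, hi=10.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                        # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                 # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

In the study's setting, f would be the tracking cost combining marker errors and residual loads over the degree-of-freedom weights; here a simple quadratic stands in for it.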
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the strengths of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset; it is located before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm to overcome the residual frequency offset. This paper first analyzes the advantages and disadvantages of several equalization algorithms and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show the constellation diagrams and the bit error rate curve; both indicate that the proposed joint equalization algorithm outperforms the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than the CMA algorithm, 77 times better than MMA equalization, and 9 times better than CMA-MMA equalization.
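The insensitivity of CMA to frequency offset follows from its cost depending only on the output modulus. A single-tap sketch makes this concrete; it adapts a complex gain to restore constant modulus while ignoring absolute phase (a toy illustration, not the paper's multi-tap equalizer).

```python
def cma_single_tap(received, mu=0.05, r2=1.0):
    """Single-tap constant modulus algorithm (CMA) sketch.

    Adapts a complex gain w so the equalized output y = w*x approaches
    constant modulus sqrt(r2).  The update depends only on |y|, which is
    why CMA tolerates carrier phase rotation and frequency offset."""
    w = 1.0 + 0.0j
    for x in received:
        y = w * x
        e = (abs(y) ** 2 - r2) * y        # CMA error term
        w -= mu * e * x.conjugate()       # stochastic-gradient update
    return w
```

For a unit-modulus QPSK stream through a complex channel gain, the converged tap satisfies |w * gain| = 1 even though the channel phase is never estimated, which is exactly why a decision-directed or MMA stage is still needed afterwards to fix the residual rotation.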
Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains.
Jha, Ashwani; Flurchick, K M; Bikdash, Marwan; Kc, Dukka B
2016-01-01
Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10-15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors.
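The align-to-a-circularly-permuted-copy idea can be illustrated on a 1-D toy: score every nonzero circular shift of a per-residue profile against the original; an internal repeat shows up as a near-perfect match at the repeat period. This is a cartoon of SymD's scan, not a structural alignment.

```python
def best_circular_shift(profile, tol=1e-9):
    """Score every nonzero circular permutation of `profile` against the
    original (sum of squared differences) and return the lowest-scoring
    shift: a 1-D toy version of SymD's self-alignment scan."""
    n = len(profile)
    best_shift, best_score = None, float("inf")
    for s in range(1, n):                        # each shift is independent...
        rotated = profile[s:] + profile[:s]
        score = sum((a - b) ** 2 for a, b in zip(profile, rotated))
        if score < best_score - tol:
            best_shift, best_score = s, score
    return best_shift, best_score
```

The loop over shifts is embarrassingly parallel, each shift's alignment is independent of the others, which is the property Parallel-SymD exploits to distribute the work across processors.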
[Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].
Guo, Lin; Kang, Lili; Wang, Dandan
2013-02-01
The phase information of magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping often results in inaccurate phase information, so phase unwrapping is essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and Prim's minimum spanning tree algorithm were introduced to optimize the residues essential to the Goldstein branch-cut algorithm. Experimental results showed that the residues and branch cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was obviously improved with the proposed method.
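The core operation that branch-cut methods protect is simple: add multiples of 2*pi so that neighboring samples differ by less than pi. In 1-D this is the whole algorithm; the 2-D MR problem adds residues (points where the wrapped gradients are inconsistent) and branch cuts that constrain the integration path. A minimal 1-D sketch:

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: add multiples of 2*pi so consecutive samples
    differ by less than pi.  The 2-D branch-cut problem layers residue
    detection and cut placement on top of this core step."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # wrap diff into (-pi, pi]
        out.append(out[-1] + d)
    return out
```

Wrapping a linear phase ramp into (-pi, pi] and unwrapping it recovers the ramp exactly, provided the true sample-to-sample change stays below pi, which is the assumption that residues violate in 2-D.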
A structural role for the PHP domain in E. coli DNA polymerase III.
Barros, Tiago; Guenther, Joel; Kelch, Brian; Anaya, Jordan; Prabhakar, Arjun; O'Donnell, Mike; Kuriyan, John; Lamers, Meindert H
2013-05-14
In addition to the core catalytic machinery, bacterial replicative DNA polymerases contain a Polymerase and Histidinol Phosphatase (PHP) domain whose function is not entirely understood. The PHP domains of some bacterial replicases are active metal-dependent nucleases that may play a role in proofreading. In E. coli DNA polymerase III, however, the PHP domain has lost several metal-coordinating residues and is likely to be catalytically inactive. Genomic searches show that the loss of metal-coordinating residues in polymerase PHP domains is likely to have coevolved with the presence of a separate proofreading exonuclease that works with the polymerase. Although the E. coli Pol III PHP domain has lost metal-coordinating residues, the structure of the domain has been conserved to a remarkable degree when compared to that of metal-binding PHP domains. This is demonstrated by our ability to restore metal binding with only three point mutations, as confirmed by the metal-bound crystal structure of this mutant determined at 2.9 Å resolution. We also show that Pol III, a large multi-domain protein, unfolds cooperatively and that mutations in the degenerate metal-binding site of the PHP domain decrease the overall stability of Pol III and reduce its activity. While the presence of a PHP domain in replicative bacterial polymerases is strictly conserved, its ability to coordinate metals and to perform proofreading exonuclease activity is not, suggesting additional non-enzymatic roles for the domain. Our results show that the PHP domain is a major structural element in Pol III and its integrity modulates both the stability and activity of the polymerase.
NASA Technical Reports Server (NTRS)
Nachtigal, Noel M.
1991-01-01
The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
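For context, the baseline that QMR stabilizes is the biconjugate gradient method. The textbook unpreconditioned BCG iteration is short enough to sketch in full; note it carries a shadow recurrence with the transpose and, unlike QMR, can break down when one of its inner products vanishes.

```python
def bicg(A, b, iters=25, tol=1e-12):
    """Textbook unpreconditioned BiCG for a dense nonsymmetric system
    A x = b.  QMR replaces BCG's Galerkin condition with a quasi-minimal
    residual condition to remove one of the two breakdown sources."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    At = [[A[j][i] for j in range(n)] for i in range(n)]   # transpose for shadow recurrence
    x = [0.0] * n
    r = b[:]                       # residual for x = 0
    rt = r[:]                      # shadow residual (common simple choice rt0 = r0)
    p, pt = r[:], rt[:]
    rho = dot(rt, r)
    for _ in range(iters):
        Ap, Atpt = matvec(A, p), matvec(At, pt)
        alpha = rho / dot(pt, Ap)          # can break down if this inner product is 0
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rt = [ri - alpha * ati for ri, ati in zip(rt, Atpt)]
        rho_new = dot(rt, r)
        if abs(rho_new) < tol:             # converged (or Lanczos breakdown)
            break
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        pt = [ri + beta * pi for ri, pi in zip(rt, pt)]
        rho = rho_new
    return x
```

On a generic small nonsymmetric system this converges in at most n steps in exact arithmetic; QMR's contribution is smoothing the erratic BCG residual norms and surviving the near-breakdowns this simple loop cannot.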
Bartlett, John M S; Christiansen, Jason; Gustavson, Mark; Rimm, David L; Piper, Tammy; van de Velde, Cornelis J H; Hasenburg, Annette; Kieback, Dirk G; Putter, Hein; Markopoulos, Christos J; Dirix, Luc Y; Seynaeve, Caroline; Rea, Daniel W
2016-01-01
Hormone receptors HER2/neu and Ki-67 are markers of residual risk in early breast cancer. An algorithm (IHC4) combining these markers may provide additional information on residual risk of recurrence in patients treated with hormone therapy. To independently validate the IHC4 algorithm in the multinational Tamoxifen Versus Exemestane Adjuvant Multicenter Trial (TEAM) cohort, originally developed on the trans-ATAC (Arimidex, Tamoxifen, Alone or in Combination Trial) cohort, by comparing 2 methodologies. The IHC4 biomarker expression was quantified on TEAM cohort samples (n = 2919) by using 2 independent methodologies (conventional 3,3'-diaminobenzidine [DAB] immunohistochemistry with image analysis and standardized quantitative immunofluorescence [QIF] by AQUA technology). The IHC4 scores were calculated by using the same previously established coefficients and then compared with recurrence-free and distant recurrence-free survival, using multivariate Cox proportional hazards modeling. The QIF model was highly significant for prediction of residual risk (P < .001), with continuous model scores showing a hazard ratio (HR) of 1.012 (95% confidence interval [95% CI]: 1.010-1.014), which was significantly higher than that for the DAB model (HR: 1.008; 95% CI: 1.006-1.009; P < .001). Each model added significant prognostic value in addition to recognized clinical prognostic factors, including nodal status, in multivariate analyses. Quantitative immunofluorescence, however, showed more accuracy with respect to overall residual risk assessment than the DAB model. The use of the IHC4 algorithm was validated on the TEAM trial for predicting residual risk in patients with breast cancer. These data support the use of the IHC4 algorithm clinically, but quantitative and standardized approaches need to be used.
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
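The memory-saving idea in the second technique, evaluating residual equations without ever holding the full amplitude set, can be illustrated generically: stream the amplitudes in blocks and accumulate their contribution immediately. This is a schematic of the pattern only, not the local-MP2 residual equations; all names are mine.

```python
def residual_blockwise(K, t_block, n, block=2):
    """Accumulate r = K @ t without materializing the full amplitude
    vector t: amplitude blocks are produced on demand by `t_block` and
    consumed immediately, so peak memory is one block rather than n."""
    r = [0.0] * len(K)
    for j0 in range(0, n, block):
        tb = t_block(j0, min(j0 + block, n))    # recompute/stream this block
        for i in range(len(K)):
            for dj, tj in enumerate(tb):
                r[i] += K[i][j0 + dj] * tj      # contribution folded in at once
    return r
```

The blockwise accumulation gives the same result as forming t in full; the trade is recomputation (or streaming I/O) of each amplitude block against holding everything in memory.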
Evolution of substrate specificity in a retained enzyme driven by gene loss
Juárez-Vázquez, Ana Lilia; Edirisinghe, Janaka N; Verduzco-Castro, Ernesto A; Michalska, Karolina; Wu, Chenggang; Noda-García, Lianet; Babnigg, Gyorgy; Endres, Michael; Medina-Ruíz, Sofía; Santoyo-Flores, Julián; Carrillo-Tripp, Mauricio; Ton-That, Hung; Joachimiak, Andrzej; Henry, Christopher S; Barona-Gómez, Francisco
2017-01-01
The connection between gene loss and the functional adaptation of retained proteins is still poorly understood. We apply phylogenomics and metabolic modeling to detect bacterial species that are evolving by gene loss, with the finding that Actinomycetaceae genomes from human cavities are undergoing sizable reductions, including loss of L-histidine and L-tryptophan biosynthesis. We observe that the dual-substrate phosphoribosyl isomerase A or priA gene, at which these pathways converge, appears to coevolve with the occurrence of trp and his genes. Characterization of a dozen PriA homologs shows that these enzymes adapt from bifunctionality in the largest genomes, to a monofunctional, yet not necessarily specialized, inefficient form in genomes undergoing reduction. These functional changes are accomplished via mutations, which result from relaxation of purifying selection, in residues structurally mapped after sequence and X-ray structural analyses. Our results show how gene loss can drive the evolution of substrate specificity from retained enzymes. DOI: http://dx.doi.org/10.7554/eLife.22679.001 PMID:28362260
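The co-occurrence signal behind the claim that priA coevolves with the trp and his genes is typically quantified by phylogenetic profiling: correlating presence/absence vectors of genes across genomes. A minimal sketch with invented toy profiles (not the study's data):

```python
def profile_correlation(u, v):
    """Pearson correlation between two binary presence/absence gene
    profiles across genomes, a standard phylogenetic-profiling score."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)
```

A gene pair that is gained and lost together across genomes scores near 1, while an unrelated pair scores near 0; genome-wide scans of such scores are how coevolving gene sets like priA/trp/his are flagged for follow-up.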
Deconstruction of the Ras switching cycle through saturation mutagenesis
Bandaru, Pradeep; Shah, Neel H; Bhattacharyya, Moitrayee; Barton, John P; Kondo, Yasushi; Cofsky, Joshua C; Gee, Christine L; Chakraborty, Arup K; Kortemme, Tanja; Ranganathan, Rama; Kuriyan, John
2017-01-01
Ras proteins are highly conserved signaling molecules that exhibit regulated, nucleotide-dependent switching between active and inactive states. The high conservation of Ras requires mechanistic explanation, especially given the general mutational tolerance of proteins. Here, we use deep mutational scanning, biochemical analysis and molecular simulations to understand constraints on Ras sequence. Ras exhibits global sensitivity to mutation when regulated by a GTPase activating protein and a nucleotide exchange factor. Removing the regulators shifts the distribution of mutational effects to be largely neutral, and reveals hotspots of activating mutations in residues that restrain Ras dynamics and promote the inactive state. Evolutionary analysis, combined with structural and mutational data, argues that Ras has co-evolved with its regulators in the vertebrate lineage. Overall, our results show that sequence conservation in Ras depends strongly on the biochemical network in which it operates, providing a framework for understanding the origin of global selection pressures on proteins. DOI: http://dx.doi.org/10.7554/eLife.27810.001 PMID:28686159
Evolution of substrate specificity in a retained enzyme driven by gene loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juárez-Vázquez, Ana Lilia; Edirisinghe, Janaka N.; Verduzco-Castro, Ernesto A.
The connection between gene loss and the functional adaptation of retained proteins is still poorly understood. We apply phylogenomics and metabolic modeling to detect bacterial species that are evolving by gene loss, with the finding that Actinomycetaceae genomes from human cavities are undergoing sizable reductions, including loss of L-histidine and L-tryptophan biosynthesis. We observe that the dual-substrate phosphoribosyl isomerase A or priA gene, at which these pathways converge, appears to coevolve with the occurrence of trp and his genes. Characterization of a dozen PriA homologs shows that these enzymes adapt from bifunctionality in the largest genomes, to a monofunctional, yet not necessarily specialized, inefficient form in genomes undergoing reduction. These functional changes are accomplished via mutations, which result from relaxation of purifying selection, in residues structurally mapped after sequence and X-ray structural analyses. Finally, our results show how gene loss can drive the evolution of substrate specificity from retained enzymes.
Evolution of Substrate Specificity in A Retained Enzyme Driven by Gene Loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juarez-Vazquez, Ana L.; Edirisinghe, Janaka N.; Verduzco-Castro, Ernesto A.
The connection between gene loss and the functional adaptation of retained proteins is still poorly understood. Here, we apply phylogenomics and metabolic modeling to detect bacterial species that are evolving by gene loss, with the finding that Actinomycetaceae genomes from human cavities are undergoing sizable reductions, including loss of L-histidine and L-tryptophan biosynthesis. We also observe that the dual-substrate phosphoribosyl isomerase A or priA gene, at which these pathways converge, appears to coevolve with the occurrence of trp and his genes. Characterization of a dozen PriA homologs shows that these enzymes adapt from bifunctionality in the largest genomes, to a monofunctional, yet not necessarily specialized, inefficient form in genomes undergoing reduction. These functional changes are accomplished via mutations, which result from relaxation of purifying selection, in residues structurally mapped after sequence and X-ray structural analyses. These results show how gene loss can drive the evolution of substrate specificity from retained enzymes.
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-01-01
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. Ripple RNU seriously degrades imaging quality, especially for small-target detection, and is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has obvious advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320
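The core of a temporal high-pass correction scheme like the one modified in this abstract can be illustrated in a few lines. This is a generic sketch with a fixed correction threshold standing in for the paper's fuzzy scene classifier; all names and parameter values are illustrative, not taken from the paper.

```python
def temporal_highpass_nuc(frames, alpha=0.05, threshold=8.0):
    """Sketch of temporal high-pass nonuniformity correction.

    Each pixel's slowly varying component (the fixed-pattern offset) is
    tracked with an exponential moving average; subtracting it leaves the
    high-pass (scene) component.  Updates are suppressed when a pixel's
    deviation no longer looks like fixed-pattern noise, so genuine scene
    detail is not absorbed into the offset estimate.  The paper replaces
    this fixed threshold with a fuzzy scene classifier.
    """
    n_pixels = len(frames[0])
    offset = [0.0] * n_pixels          # running per-pixel offset estimate
    corrected_frames = []
    for frame in frames:
        frame_mean = sum(frame) / n_pixels
        for i, v in enumerate(frame):
            deviation = (v - frame_mean) - offset[i]
            # only update where the residual deviation is small, i.e. where
            # it plausibly is fixed-pattern noise rather than scene detail
            if abs(deviation) < threshold:
                offset[i] += alpha * deviation
        corrected_frames.append([v - offset[i] for i, v in enumerate(frame)])
    return corrected_frames
```

With a static ripple pattern superimposed on a flat scene, the offset estimate converges to the pattern and the corrected output converges to the scene level.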
Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
NASA Astrophysics Data System (ADS)
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
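Each single large object placement problem in the first stage is, in essence, a pricing subproblem of column generation: find the cutting pattern for one stock size that maximizes the summed dual values of the pieces it contains. A minimal dynamic-programming sketch (an unbounded knapsack; illustrative only, not the paper's PSG):

```python
def best_pattern(stock_length, piece_lengths, dual_values):
    """Solve one pricing subproblem of column generation for 1D cutting
    stock: choose piece counts that maximize total dual value subject to
    the stock length.  Solved as an unbounded knapsack by DP."""
    best = [0.0] * (stock_length + 1)   # best[l] = max dual value within length l
    choice = [None] * (stock_length + 1)
    for l in range(1, stock_length + 1):
        for i, (pl, dv) in enumerate(zip(piece_lengths, dual_values)):
            if pl <= l and best[l - pl] + dv > best[l]:
                best[l] = best[l - pl] + dv
                choice[l] = i
    # reconstruct the pattern (count per piece type) by backtracking
    pattern = [0] * len(piece_lengths)
    l = stock_length
    while l > 0 and choice[l] is not None:
        i = choice[l]
        pattern[i] += 1
        l -= piece_lengths[i]
    return pattern, best[stock_length]
```

For example, with a stock of length 10, pieces of lengths 3 and 5, and dual values 2.0 and 3.5, the best pattern cuts two length-5 pieces for a total dual value of 7.0.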
Campagne, F; Weinstein, H
1999-01-01
An algorithmic method for drawing residue-based schematic diagrams of proteins on a 2D page is presented and illustrated. The method allows the creation of rendering engines dedicated to a given family of sequences, or fold. The initial implementation provides an engine that can produce a 2D diagram representing secondary structure for any transmembrane protein sequence. We present the details of the strategy for automating the drawing of these diagrams. The most important part of this strategy is the development of an algorithm for laying out residues of a loop that connects to arbitrary points of a 2D plane. As implemented, this algorithm is suitable for real-time modification of the loop layout. This work is of interest for the representation and analysis of data from (1) protein databases, (2) mutagenesis results, or (3) various kinds of protein context-dependent annotations or data.
Friberg, Magne; Schwind, Christopher; Roark, Lindsey C; Raguso, Robert A; Thompson, John N
2014-09-01
Chemical defenses, repellents, and attractants are important shapers of species interactions. Chemical attractants could contribute to the divergence of coevolving plant-insect interactions, if pollinators are especially responsive to signals from the local plant species. We experimentally investigated patterns of daily floral scent production in three Lithophragma species (Saxifragaceae) that are geographically isolated and tested how scent divergence affects attraction of their major pollinator-the floral parasitic moth Greya politella (Prodoxidae). These moths oviposit through the corolla while simultaneously pollinating the flower with pollen adhering to the abdomen. The complex and species-specific floral scent profiles were emitted in higher amounts during the day, when these day-flying moths are active. There was minimal divergence found in petal color, which is another potential floral attractant. Female moths responded most strongly to scent from their local host species in olfactometer bioassays, and were more likely to oviposit in, and thereby pollinate, their local host species in no-choice trials. The results suggest that floral scent is an important attractant in this interaction. Local specialization in the pollinator response to a highly specific plant chemistry, thus, has the potential to contribute importantly to patterns of interaction specificity among coevolving plants and highly specialized pollinators.
Cooperative behavior and phase transitions in co-evolving stag hunt game
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, Y. S.; Xu, C.; Hui, P. M.
2016-02-01
Cooperative behavior and the different phases of a co-evolving network dynamics based on the stag hunt game are studied. The dynamical processes are parameterized by a payoff r that tends to promote non-cooperative behavior and a probability q for a rewiring attempt that could isolate the non-cooperators. The interplay between the parameters leads to different phases. Detailed simulations and a mean field theory are employed to reveal the properties of the different phases. For small r, the cooperators are the majority and form a connected cluster, while the non-cooperators increase with q but remain isolated over the whole range of q; this is a static phase. For sufficiently large r, cooperators disappear in an intermediate range q_L ≤ q ≤ q_U and a dynamical all-non-cooperators phase results. For q > q_U, a static phase results again. A mean field theory based on how the link densities change in time under the co-evolving dynamics is constructed. The theory gives a phase diagram in the q-r parameter space that is qualitatively in agreement with simulation results. The sources of discrepancies between theory and simulations are discussed.
Cyber Security Research Frameworks For Coevolutionary Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rush, George D.; Tauritz, Daniel Remy
Several architectures have been created for developing and testing systems used in network security, but most are meant to provide a platform for running cyber security experiments as opposed to automating experiment processes. In the first paper, we propose a framework termed Distributed Cyber Security Automation Framework for Experiments (DCAFE) that enables experiment automation and control in a distributed environment. Predictive analysis of adversaries is another thorny issue in cyber security. Game theory can be used to mathematically analyze adversary models, but its scalability limitations restrict its use. Computational game theory allows us to scale classical game theory to larger, more complex systems. In the second paper, we propose a framework termed Coevolutionary Agent-based Network Defense Lightweight Event System (CANDLES) that can coevolve attacker and defender agent strategies and capabilities and evaluate potential solutions with a custom network defense simulation. The third paper is a continuation of the CANDLES project in which we rewrote key parts of the framework. Attackers and defenders have been redesigned to evolve pure strategy, and a new network security simulation is devised which specifies network architecture and adds a temporal aspect. We also add a hill climber algorithm to evaluate the search space and justify the use of a coevolutionary algorithm.
Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1992-01-01
Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
Using In Silico Fragmentation to Improve Routine Residue Screening in Complex Matrices.
Kaufmann, Anton; Butcher, Patrick; Maden, Kathryn; Walker, Stephan; Widmer, Mirjam
2017-12-01
Targeted residue screening requires the use of reference substances in order to identify potential residues. This becomes a difficult issue when using multi-residue methods capable of analyzing several hundreds of analytes. Therefore, the capability of in silico fragmentation based on a structure database ("suspect screening") instead of physical reference substances for routine targeted residue screening was investigated. The detection of fragment ions that can be predicted or explained by in silico software was utilized to reduce the number of false positives. These "proof of principle" experiments were done with a tool that is integrated into a commercial MS vendor instrument operating software (UNIFI) as well as with a platform-independent MS tool (Mass Frontier). A total of 97 analytes belonging to different chemical families were separated by reversed phase liquid chromatography and detected in a data-independent acquisition (DIA) mode using ion mobility hyphenated with quadrupole time of flight mass spectrometry. The instrument was operated in the MSE mode with alternating low and high energy traces. The fragments observed from product ion spectra were investigated using a "chopping" bond disconnection algorithm and a rule-based algorithm. The bond disconnection algorithm clearly explained more analyte product ions and a greater percentage of the spectral abundance than the rule-based software (92 out of the 97 compounds produced ≥1 explainable fragment ions). On the other hand, tests with a complex blank matrix (bovine liver extract) indicated that the chopping algorithm reports significantly more false positive fragments than the rule-based software.
Using In Silico Fragmentation to Improve Routine Residue Screening in Complex Matrices
NASA Astrophysics Data System (ADS)
Kaufmann, Anton; Butcher, Patrick; Maden, Kathryn; Walker, Stephan; Widmer, Mirjam
2017-12-01
Targeted residue screening requires the use of reference substances in order to identify potential residues. This becomes a difficult issue when using multi-residue methods capable of analyzing several hundreds of analytes. Therefore, the capability of in silico fragmentation based on a structure database ("suspect screening") instead of physical reference substances for routine targeted residue screening was investigated. The detection of fragment ions that can be predicted or explained by in silico software was utilized to reduce the number of false positives. These "proof of principle" experiments were done with a tool that is integrated into a commercial MS vendor instrument operating software (UNIFI) as well as with a platform-independent MS tool (Mass Frontier). A total of 97 analytes belonging to different chemical families were separated by reversed phase liquid chromatography and detected in a data-independent acquisition (DIA) mode using ion mobility hyphenated with quadrupole time of flight mass spectrometry. The instrument was operated in the MSE mode with alternating low and high energy traces. The fragments observed from product ion spectra were investigated using a "chopping" bond disconnection algorithm and a rule-based algorithm. The bond disconnection algorithm clearly explained more analyte product ions and a greater percentage of the spectral abundance than the rule-based software (92 out of the 97 compounds produced ≥1 explainable fragment ions). On the other hand, tests with a complex blank matrix (bovine liver extract) indicated that the chopping algorithm reports significantly more false positive fragments than the rule-based software.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
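The "lossy plus residual coding" principle yields a specifiable maximum absolute error because the residual left by the lossy layer is uniformly quantized with an odd step 2·max_err + 1 and encoded. A minimal sketch for integer signals, with a trivial mean predictor standing in for the matrix/tensor decomposition coder (names and structure are illustrative, not the paper's):

```python
def lossy_plus_residual_encode(signal, max_err):
    """Encode an integer signal with a guaranteed maximum absolute error.

    The lossy layer here is just the rounded signal mean, standing in for
    the matrix/tensor decomposition coder; the residual layer uniformly
    quantizes the leftover error with step 2*max_err + 1, which bounds
    the per-sample reconstruction error by max_err.
    """
    step = 2 * max_err + 1
    lossy = round(sum(signal) / len(signal))          # crude lossy layer
    q_residuals = [round((s - lossy) / step) for s in signal]
    return lossy, q_residuals

def lossy_plus_residual_decode(lossy, q_residuals, max_err):
    """Reconstruct: lossy value plus dequantized residual."""
    step = 2 * max_err + 1
    return [lossy + q * step for q in q_residuals]
```

Because every integer residual is at most (step - 1)/2 = max_err away from the nearest multiple of the odd step, the reconstruction error never exceeds max_err.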
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements in five of the eight combinations of kinetic parameter and bias/noise measure used to analyse the parametric maps, and produced better results in 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results in 2/8.
2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
Swingle, Mark R; Honkanen, Richard Eric
2018-05-07
The reversible phosphorylation of proteins regulates many key functions in eukaryotic cells. Phosphorylation is catalyzed by protein kinases, with the majority of phosphorylation occurring on side chains of serine and threonine residues. The phosphomonoesters generated by protein kinases are hydrolyzed by protein phosphatases. In the absence of a phosphatase, the half-time for the hydrolysis of alkyl phosphate dianions at 25 °C is over 1 trillion years (k_non ≈ 2 × 10^-20 s^-1). Therefore, ser/thr phosphatases are critical for processes controlled by reversible phosphorylation. This review is based on a search of the literature in available databases. We compare the catalytic mechanism of PPP-family phosphatases (PPPases) and the interactions of inhibitors that target these enzymes. PPPases are metal-dependent hydrolases that enhance the rate of hydrolysis ((k_cat/K_M)/k_non) by a factor of ~10^21, placing them among the most powerful known catalysts on earth. Biochemical and structural studies indicate that the remarkable catalytic proficiencies of PPPases are achieved by 10 conserved amino acids, DXH(X)~26DXXDR(X)~20-26NH(X)~50H(X)~25-45R(X)~30-40H. Six act as metal-coordinating residues. Four position and orient the substrate phosphate. Together, two metal ions and the 10 catalytic residues position the phosphoryl group and an activated bridging water/hydroxide nucleophile for in-line attack upon the substrate phosphorus atom. The PPPases are conserved among species, and many structurally diverse natural toxins have co-evolved to target these enzymes. Although the catalytic site is conserved, opportunities exist for the development of selective inhibitors of this important group of metalloenzymes.
A structural role for the PHP domain in E. coli DNA polymerase III
2013-01-01
Background In addition to the core catalytic machinery, bacterial replicative DNA polymerases contain a Polymerase and Histidinol Phosphatase (PHP) domain whose function is not entirely understood. The PHP domains of some bacterial replicases are active metal-dependent nucleases that may play a role in proofreading. In E. coli DNA polymerase III, however, the PHP domain has lost several metal-coordinating residues and is likely to be catalytically inactive. Results Genomic searches show that the loss of metal-coordinating residues in polymerase PHP domains is likely to have coevolved with the presence of a separate proofreading exonuclease that works with the polymerase. Although the E. coli Pol III PHP domain has lost metal-coordinating residues, the structure of the domain has been conserved to a remarkable degree when compared to that of metal-binding PHP domains. This is demonstrated by our ability to restore metal binding with only three point mutations, as confirmed by the metal-bound crystal structure of this mutant determined at 2.9 Å resolution. We also show that Pol III, a large multi-domain protein, unfolds cooperatively and that mutations in the degenerate metal-binding site of the PHP domain decrease the overall stability of Pol III and reduce its activity. Conclusions While the presence of a PHP domain in replicative bacterial polymerases is strictly conserved, its ability to coordinate metals and to perform proofreading exonuclease activity is not, suggesting additional non-enzymatic roles for the domain. Our results show that the PHP domain is a major structural element in Pol III and its integrity modulates both the stability and activity of the polymerase. PMID:23672456
A Test of Genetic Algorithms in Relevance Feedback.
ERIC Educational Resources Information Center
Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de
2002-01-01
Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…
Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning
NASA Astrophysics Data System (ADS)
Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.
2017-12-01
Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g. season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002 - 2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories: < -0.4 C, -0.4 C ≤ residual ≤ 0.4 C, and > 0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance, thus we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km² area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into the < -0.4 C and -0.4 C ≤ residual ≤ 0.4 C categories.
Spatial homogeneity in BTs consistently appears as a very important variable for classification, suggesting that unidentified cloud contamination still is one of the causes leading to negative SST residuals. Precision and accuracy of error estimates from our decision tree classifier are enhanced using this knowledge.
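Random oversampling, the simplest of the rebalancing schemes mentioned, duplicates minority-class examples until every class matches the majority count. A generic sketch (the data layout is hypothetical, not the MODIS matchup format):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Balance a labelled dataset by duplicating randomly chosen
    minority-class samples until every class matches the majority-class
    count.  This is the simplest of the rebalancing schemes; more refined
    variants undersample the majority class or combine both strategies."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

After rebalancing, a classifier no longer gains accuracy simply by predicting the majority class, which is the failure mode the abstract's imbalance testing addresses.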
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method based on our General Nonlinear Minimal Residual (GNLMR) method allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method termed Sensitivity Based DMR or SBMR method that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Astrophysics Data System (ADS)
Choi, K.-Y.; Dulikravich, G. S.
1993-11-01
Reduction of total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method based on our General Nonlinear Minimal Residual (GNLMR) method allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method termed Sensitivity Based DMR or SBMR method that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.
Smith, J E
2012-01-01
Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. 
The results also show that local reward schemes outperform global reward schemes in combinatorial spaces, unlike in continuous spaces. An analysis of evolving meme behaviour is used to explain these findings.
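The mean-versus-extreme question studied above can be made concrete with a toy adaptive operator selection loop: each meme's recent improvements are aggregated either by mean or by maximum, and memes are then drawn in proportion to the aggregate. This is an illustrative sketch, not the COMA framework; all names and the floor parameter are invented for the example.

```python
import random

def select_meme(history, scheme="mean", floor=0.1, rng=random):
    """Pick a meme index in proportion to its aggregated recent reward.

    history maps meme index -> list of recent improvements.  The paper's
    question is whether to aggregate by mean or by extreme (max) value;
    a small probability floor keeps every meme selectable.  Toy sketch.
    """
    if scheme == "mean":
        scores = {m: sum(h) / len(h) if h else 0.0 for m, h in history.items()}
    else:  # "extreme"
        scores = {m: max(h) if h else 0.0 for m, h in history.items()}
    # roulette-wheel selection over non-negative scores plus the floor
    memes = sorted(history)
    weights = [max(scores[m], 0.0) + floor for m in memes]
    r = rng.random() * sum(weights)
    for m, w in zip(memes, weights):
        r -= w
        if r <= 0:
            return m
    return memes[-1]
```

A meme whose improvements are rare but large (e.g. [0, 0, 9]) is favoured much more strongly under extreme aggregation than under mean aggregation, which is exactly the behavioural difference the reward-scheme comparison measures.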
QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach, called the quasi-minimal residual (QMR) method, is presented, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
A coevolving model based on preferential triadic closure for social media networks
Li, Menghui; Zou, Hailin; Guan, Shuguang; Gong, Xiaofeng; Li, Kun; Di, Zengru; Lai, Choy-Heng
2013-01-01
The dynamical origin of complex networks, i.e., the underlying principles governing network evolution, is a crucial issue in network study. In this paper, by analyzing temporal data from Flickr and Epinions, two typical social media networks, we found that the dynamical pattern in neighborhoods, especially the formation of triadic links, plays a dominant role in the evolution of networks. We thus proposed a coevolving dynamical model for such networks, in which the evolution is driven only by local dynamics, namely preferential triadic closure. Numerical experiments verified that the model can reproduce global properties which are qualitatively consistent with the empirical observations. PMID:23979061
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.
Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul
2018-06-01
Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using a wavelet residual network, which synergistically combines the expressive power of deep learning with the performance guarantees of framelet-based denoising algorithms. The new algorithm is inspired by the recent interpretation of a deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks significantly improve performance and preserve the detail textures of the original images.
Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery
NASA Technical Reports Server (NTRS)
Xie, Hua; Klimesh, Matthew A.
2009-01-01
This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
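The predict-quantize-encode loop described above can be sketched in a few lines. This is a minimal illustration of the general scheme, not the NASA implementation; the simple previous-sample predictor is an assumption. Note how the encoder predicts from the *reconstructed* value, exactly as the abstract requires:

```python
import numpy as np

def encode_near_lossless(samples, delta):
    """Predict each sample from the previously *reconstructed* one,
    quantize the prediction residual, and return the quantized residuals.
    delta=0 gives lossless coding; larger delta tolerates more distortion."""
    step = 2 * delta + 1
    recon_prev = 0                               # decoder starts from the same known value
    residuals = []
    for s in samples:
        r = s - recon_prev                       # prediction residual
        q = int(np.round(r / step))              # quantize the residual
        residuals.append(q)
        recon_prev = recon_prev + q * step       # track the decoder's reconstruction
    return residuals

def decode_near_lossless(residuals, delta):
    step = 2 * delta + 1
    recon_prev = 0
    out = []
    for q in residuals:
        recon_prev = recon_prev + q * step
        out.append(recon_prev)
    return out

data = [10, 12, 15, 15, 14, 20]
enc = encode_near_lossless(data, delta=1)
dec = decode_near_lossless(enc, delta=1)
# Every reconstructed sample is within delta of the original.
assert all(abs(a - b) <= 1 for a, b in zip(data, dec))
```

With `delta=0` the quantizer step is 1, so the scheme degenerates to exact lossless predictive coding.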
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-04-30
Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded with a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test base to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
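For intuition, the 3-σ baseline mentioned in the abstract can be sketched as follows. This is not the paper's BACON implementation (BACON iteratively grows a clean subset of the data and is more robust to masking), and the simulated residuals are illustrative:

```python
import numpy as np

def three_sigma_outliers(residuals, k=3.0):
    """Flag residuals more than k standard deviations from the mean.
    A minimal stand-in for the 3-sigma baseline compared in the paper."""
    r = np.asarray(residuals, dtype=float)
    mu, sigma = r.mean(), r.std()
    return np.abs(r - mu) > k * sigma

# Simulated state-estimation residuals: small noise plus one gross error.
rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.01, size=200)
res[42] = 0.5                      # an injected bad SCADA measurement
flags = three_sigma_outliers(res)
```

Working on estimator residuals rather than raw measurements, as the paper proposes, makes the "normal" population tight enough that a gross error stands out clearly.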
Correlations and analytical approaches to co-evolving voter models
NASA Astrophysics Data System (ADS)
Ji, M.; Xu, C.; Choi, C. W.; Hui, P. M.
2013-11-01
The difficulty of formulating analytical treatments of co-evolving networks is studied in light of the Vazquez-Eguíluz-San Miguel voter model (VM) and a modified VM (MVM) that introduces random mutation of opinions as a noise in the VM. The density of active links, which are links that connect nodes of opposite opinions, is shown to be highly sensitive to both the degree k of a node and the number n of active links among the neighbors of a node. We test the validity of the formalism of analytical approaches and show explicitly that the assumptions behind the commonly used homogeneous pair approximation scheme in formulating a mean-field theory are the source of the theory's failure, due to the strong correlations between k, n, and n². An improved approach that incorporates spatial correlation with the nearest neighbors explicitly and a random approximation for the next-nearest neighbors is formulated for the VM and the MVM, and it gives better agreement with the simulation results. We introduce an empirical approach that quantifies the correlations more accurately and gives results in good agreement with the simulation results. The work clarifies why a simple mean-field theory fails and sheds light on how to analyze the correlations in the dynamic equations that are often generated in co-evolving processes.
Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data.
Zhang, Chao; Zheng, Yu; Ma, Xiuli; Han, Jiawei
2015-08-01
Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assembler. In the first stage, Assembler filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assembler generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assembler is effective, efficient, and scalable.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
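The differentiable trajectory being tracked can be made concrete with a toy sketch: a quadratic-loss penalty on a small equality-constrained problem, solved by plain Newton corrections with warm starts as the penalty parameter shrinks. The paper's contribution, higher-order extrapolation along this path, is not implemented here; the test problem is an arbitrary assumption:

```python
import numpy as np

def penalty_path(mu_values):
    """Track the minimizer of the quadratic-loss penalty function
        P(x; mu) = x1^2 + x2^2 + c(x)^2 / (2*mu),  c(x) = x1 + x2 - 1,
    for decreasing mu, warm-starting Newton from the previous solution."""
    x = np.zeros(2)
    for mu in mu_values:
        for _ in range(50):                      # Newton corrections
            c = x[0] + x[1] - 1.0
            grad = 2.0 * x + (c / mu) * np.ones(2)
            hess = 2.0 * np.eye(2) + np.ones((2, 2)) / mu
            x = x - np.linalg.solve(hess, grad)
            if np.linalg.norm(grad) < 1e-12:
                break
    return x

x = penalty_path([1.0, 0.1, 0.01, 1e-4, 1e-6])
# The penalty trajectory converges to the constrained minimizer (0.5, 0.5).
```

For this problem the penalized minimizer is x1 = x2 = 1/(2μ + 2), so the path approaches (0.5, 0.5) as μ → 0; extrapolating in μ (the idea credited to Fiacco & McCormick) predicts the next point on this curve before correcting.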
Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R
2016-06-01
Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.
An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection.
Xu, Cheng; Liu, Jiamei; Yang, Weifeng; Shu, Yayun; Wei, Zhipeng; Zheng, Weiwei; Feng, Xin; Zhou, Fengfeng
2018-04-01
Transcriptomic and methylomic patterns represent two major OMIC data sources impacted by both inheritable genetic information and environmental factors, and have been widely used as disease diagnosis and prognosis biomarkers. Modern transcriptomic and methylomic profiling technologies detect the status of tens of thousands or even millions of probing residues in the human genome, and introduce a major computational challenge for the existing feature selection algorithms. This study proposes a three-step feature selection algorithm, TriVote, to detect a subset of transcriptomic or methylomic residues with highly accurate binary classification performance. TriVote outperforms both filter and wrapper feature selection algorithms with both higher classification accuracy and smaller feature number on 17 transcriptomes and two methylomes. Biological functions of the methylome biomarkers detected by TriVote were discussed for their disease associations. An easy-to-use Python package is also released to facilitate the further applications.
Mallik, Saurav; Basu, Sudipto; Hait, Suman; Kundu, Sudip
2018-04-21
Do coding and regulatory segments of a gene co-evolve with each other? Seeking answers to this question, here we analyze the case of Escherichia coli ribosomal protein S15, which represses its own translation by specifically binding its messenger RNA (rpsO mRNA) and stabilizing a pseudoknot structure at the upstream untranslated region, thus trapping the ribosome in an incomplete translation initiation complex. In the absence of S15, ribosomal protein S1 recognizes rpsO and promotes translation by melting this very pseudoknot. We employ a robust statistical method to detect signatures of positive epistasis between residue site pairs and find that biophysical constraints of translational regulation (S15-rpsO and S1-rpsO recognition, S15-mediated rpsO structural rearrangement, and S1-mediated melting) are strong predictors of positive epistasis. Transforming the epistatic pairs into a network, we find that signatures of two different but interconnected regulatory cascades are imprinted in the sequence space and can be captured in terms of two dense network modules that are sparsely connected to each other. This network topology further reflects a general principle of how functionally coupled components of biological networks are interconnected. These results depict a model case where translational regulation drives characteristic residue-level epistasis, not only between a protein and its own mRNA but also between a protein and the mRNA of an entirely different protein. © 2018 Wiley Periodicals, Inc.
Choudhary, Deepanshu; Panjikar, Santosh; Anand, Ruchi
2013-01-01
Formylglycinamide ribonucleotide amidotransferase (FGAR-AT) is a 140 kDa bi-functional enzyme involved in a coupled reaction, where the glutaminase active site produces ammonia that is subsequently utilized to convert FGAR to its corresponding amidine in an ATP-assisted fashion. The structure of FGAR-AT has been previously determined in an inactive state, and the mechanism of activation remains largely unknown. In the current study, hydrophobic cavities were used as markers to identify regions involved in domain movements that facilitate catalytic coupling and subsequent activation of the enzyme. Three internal hydrophobic cavities were located by xenon trapping experiments on FGAR-AT crystals, and these cavities were then perturbed via site-directed mutagenesis. Biophysical characterization of the mutants demonstrated that two of these three voids are crucial for stability and function of the protein, despite being ∼20 Å from the active centers. Interestingly, correlation analysis corroborated the experimental findings, and revealed that amino acids lining the functionally important cavities form correlated sets (co-evolving residues) that connect these regions to the amidotransferase active center. It was further proposed that the first cavity is transient and allows for breathing motion to occur, thereby serving as an allosteric hotspot. In contrast, the third cavity, which lacks correlated residues, was found to be highly plastic and accommodated steric congestion by local adjustment of the structure without affecting either stability or activity. PMID:24223728
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
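The claim that matrix-vector operations can be performed entirely in residue notation rests on the Chinese Remainder Theorem: arithmetic proceeds channel-wise modulo each base, with no carries between channels, and the result is reconstructed at the end. A minimal sketch, with an arbitrary choice of moduli:

```python
from math import prod

MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7 * 11 * 13 = 1001

def to_residues(x):
    return tuple(x % m for m in MODULI)

def from_residues(r):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M

# Carry-free arithmetic: add and multiply channel-wise, then reconstruct.
a, b = 123, 456
s = tuple((x + y) % m for x, y, m in zip(to_residues(a), to_residues(b), MODULI))
p = tuple((x * y) % m for x, y, m in zip(to_residues(a), to_residues(b), MODULI))
```

Because each channel is small and independent, the channels map naturally onto parallel optical (or digital) hardware; the quantizer circuit mentioned in the abstract corresponds to reducing each analog channel output back to its residue range.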
Collective helping and bystander effects in coevolving helping networks.
Jo, Hang-Hyun; Lee, Hyun Keun; Park, Hyunggyu
2010-06-01
We study collective helping behavior and bystander effects in a coevolving helping network model. A node of the network represents an agent who renders or receives help, and a link represents a friendly relation between agents. A helping trial of an agent depends on relations with the other involved agents, and its result (success or failure) updates the relation between the helper and the recipient. We study the network link dynamics and its steady states analytically and numerically. The full phase diagram is presented with various kinds of active and inactive phases, and the nature of the phase transitions is explored. We find various interesting bystander effects, consistent with field study results, for which the underlying mechanism is proposed.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
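The zero-branch pruning idea can be sketched for a 1-D pyramid; the symbol set and tree layout below are simplified assumptions for illustration, not the HSP-ZT scheme itself:

```python
def zerotree_encode(levels):
    """Emit (level, index, symbol) for a 1-D dyadic pyramid, where the
    children of coefficient i at level L are 2i and 2i+1 at level L+1.
    'ZTR' marks a zerotree root: the coefficient and every descendant
    are zero, so the whole subtree is omitted from the stream."""
    def subtree_is_zero(lvl, i):
        if levels[lvl][i] != 0:
            return False
        if lvl + 1 == len(levels):
            return True
        return all(subtree_is_zero(lvl + 1, c) for c in (2 * i, 2 * i + 1))

    out, skip = [], set()
    for lvl, coeffs in enumerate(levels):
        for i, c in enumerate(coeffs):
            if (lvl, i) in skip:
                continue
            if subtree_is_zero(lvl, i):
                out.append((lvl, i, "ZTR"))
                # prune: descendants are implied zero, never transmitted
                frontier = [(lvl, i)]
                while frontier:
                    l, j = frontier.pop()
                    if l + 1 < len(levels):
                        for c2 in (2 * j, 2 * j + 1):
                            skip.add((l + 1, c2))
                            frontier.append((l + 1, c2))
            else:
                out.append((lvl, i, c))
    return out

pyramid = [[5, 0], [3, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0]]
syms = zerotree_encode(pyramid)
# 6 symbols instead of 14 coefficients: three values plus three zerotree roots.
```

Real zerotree coders (e.g. EZW) add further symbols such as "isolated zero", but the saving comes from the same parent-child pruning shown here.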
Yilmaz, Emel Maden; Güntert, Peter
2015-09-01
An algorithm, CYLIB, is presented for converting molecular topology descriptions from the PDB Chemical Component Dictionary into CYANA residue library entries. The CYANA structure calculation algorithm uses torsion angle molecular dynamics for the efficient computation of three-dimensional structures from NMR-derived restraints. For this, the molecules have to be represented in torsion angle space with rotations around covalent single bonds as the only degrees of freedom. The molecule must be given a tree structure of torsion angles connecting rigid units composed of one or several atoms with fixed relative positions. Setting up CYANA residue library entries therefore involves, besides straightforward format conversion, the non-trivial step of defining a suitable tree structure of torsion angles, and to re-order the atoms in a way that is compatible with this tree structure. This can be done manually for small numbers of ligands but the process is time-consuming and error-prone. An automated method is necessary in order to handle the large number of different potential ligand molecules to be studied in drug design projects. Here, we present an algorithm for this purpose, and show that CYANA structure calculations can be performed with almost all small molecule ligands and non-standard amino acid residues in the PDB Chemical Component Dictionary.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression-ratio benefit (measured at up to 91%).
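The flavor of per-block adaptive bit widths that RBUC-style coding exploits can be sketched as follows; the 4-bit width header and block size are illustrative assumptions (the actual RBUC method recurses over the headers as well, and the 2D variant groups spatially adjacent residuals):

```python
def bits_needed(v):
    # Non-negative integers; zero still needs one bit to represent.
    return max(1, v.bit_length())

def blockwise_cost(residuals, block=4):
    """Per-block variable bit-width coding of non-negative residuals:
    each block stores one 4-bit width header plus block * width data bits.
    Returns the total coded size in bits."""
    total = 0
    for i in range(0, len(residuals), block):
        chunk = residuals[i:i + block]
        width = max(bits_needed(v) for v in chunk)
        total += 4 + width * len(chunk)
    return total

residuals = [0, 1, 0, 2, 300, 295, 310, 301, 0, 0, 1, 0]
# Smooth regions cost ~3 bits/sample; only the busy block pays 9 bits each.
cost = blockwise_cost(residuals)
```

Because every block's width is known from its header, each block can be decoded independently, which is what makes the scheme amenable to SIMD-parallel decompression.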
Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna Roberts
Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation considers new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras.
The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
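A hedged sketch of per-pixel polynomial NUC of the kind compared in the dissertation; the simulated photoresponse model, gain spread, and calibration levels are assumptions for illustration:

```python
import numpy as np

def fit_nuc_polynomials(responses, targets, order=3):
    """Per-pixel polynomial non-uniformity correction: fit a polynomial
    of the given order mapping each pixel's raw response at several
    calibration flux levels to the known target values.
    Returns one coefficient vector per pixel (highest order first)."""
    n_levels, n_pixels = responses.shape
    coeffs = np.empty((n_pixels, order + 1))
    for p in range(n_pixels):
        coeffs[p] = np.polyfit(responses[:, p], targets, order)
    return coeffs

def correct(coeffs, raw):
    # Apply each pixel's fitted polynomial to its raw value.
    return np.array([np.polyval(c, v) for c, v in zip(coeffs, raw)])

# Simulate 4 pixels with different mildly nonlinear photoresponses,
# sampled at 8 calibration illumination levels.
rng = np.random.default_rng(1)
flux = np.linspace(0.0, 1.0, 8)
gain = rng.uniform(0.8, 1.2, 4)
responses = np.stack([g * flux + 0.1 * (g * flux) ** 2 for g in gain], axis=1)

coeffs = fit_nuc_polynomials(responses, flux, order=3)
corrected = correct(coeffs, responses[5])       # correct one raw frame
assert np.allclose(corrected, flux[5], atol=1e-3)
```

A one- or two-point (linear) correction fitted to the same data would leave a systematic residual wherever the true response curves, which is exactly the residual non-uniformity the higher-order fit removes.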
Lee, Seok-Yong; Banerjee, Anirban; MacKinnon, Roderick
2009-03-03
Voltage-dependent K(+) (Kv) channels gate open in response to the membrane voltage. To further our understanding of how cell membrane voltage regulates the opening of a Kv channel, we have studied the protein interfaces that attach the voltage-sensor domains to the pore. In the crystal structure, three physical interfaces exist. Only two of these consist of amino acids that are co-evolved across the interface between voltage sensor and pore according to statistical coupling analysis of 360 Kv channel sequences. A first co-evolved interface is formed by the S4-S5 linkers (one from each of four voltage sensors), which form a cuff surrounding the S6-lined pore opening at the intracellular surface. The crystal structure and published mutational studies support the hypothesis that the S4-S5 linkers convert voltage-sensor motions directly into gate opening and closing. A second co-evolved interface forms a small contact surface between S1 of the voltage sensor and the pore helix near the extracellular surface. We demonstrate through mutagenesis that this interface is necessary for the function and/or structure of two different Kv channels. This second interface is well positioned to act as a second anchor point between the voltage sensor and the pore, thus allowing efficient transmission of conformational changes to the pore's gate.
Supermassive black holes do not correlate with galaxy disks or pseudobulges.
Kormendy, John; Bender, R; Cornell, M E
2011-01-20
The masses of supermassive black holes are known to correlate with the properties of the bulge components of their host galaxies. In contrast, they seem not to correlate with galaxy disks. Disk-grown 'pseudobulges' are intermediate in properties between bulges and disks; it has been unclear whether they do or do not correlate with black holes in the same way that bulges do. At stake in this issue are conclusions about which parts of galaxies coevolve with black holes, possibly by being regulated by energy feedback from black holes. Here we report pseudobulge classifications for galaxies with dynamically detected black holes and combine them with recent measurements of velocity dispersions in the biggest bulgeless galaxies. These data confirm that black holes do not correlate with disks and show that they correlate little or not at all with pseudobulges. We suggest that there are two different modes of black-hole feeding. Black holes in bulges grow rapidly to high masses when mergers drive gas infall that feeds quasar-like events. In contrast, small black holes in bulgeless galaxies and in galaxies with pseudobulges grow as low-level Seyfert galaxies. Growth of the former is driven by global processes, so the biggest black holes coevolve with bulges, but growth of the latter is driven locally and stochastically, and they do not coevolve with disks and pseudobulges.
Zhang, Lin; Yin, Na; Fu, Xiong; Lin, Qiaomin; Wang, Ruchuan
2017-01-01
With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence, without considering the residual energy of the nodes and many other problems. This paper introduces a multi-attribute pheromone ant secure routing algorithm based on reputation value (MPASR). This algorithm can reduce the energy consumption of a network and improve the reliability of the nodes’ reputations by filtering nodes with higher coincidence rates and improving the method used to update the nodes’ communication behaviors. At the same time, the node reputation value, the residual node energy and the transmission delay are combined to formulate a synthetic pheromone that is used in the formula for calculating the random proportion rule in traditional ant-colony optimization to select the optimal data transmission path. Simulation results show that the improved algorithm can increase both the security of data transmission and the quality of routing service. PMID:28282894
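The core of an ant-colony routing scheme like the one described above is the random-proportion rule that selects the next hop from pheromone and heuristic terms. The sketch below is an illustration only: the weighting of reputation, residual energy, and delay into a "synthetic pheromone" is a hypothetical formula (the paper's exact expression is not given here), and all field names are invented for the example.

```python
import random

def synthetic_pheromone(reputation, energy, delay, w=(0.4, 0.4, 0.2)):
    # Hypothetical combination: higher reputation and residual energy and
    # lower delay yield a stronger pheromone. The real MPASR formula differs.
    return w[0] * reputation + w[1] * energy + w[2] / (1.0 + delay)

def choose_next_hop(neighbors, alpha=1.0, beta=2.0):
    # Random-proportion rule of classical ant-colony optimization:
    # P(j) is proportional to tau_j^alpha * eta_j^beta, with tau the
    # synthetic pheromone and eta a simple heuristic (inverse delay).
    weights = []
    for n in neighbors:
        tau = synthetic_pheromone(n["reputation"], n["energy"], n["delay"])
        eta = 1.0 / (1.0 + n["delay"])
        weights.append((tau ** alpha) * (eta ** beta))
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]

neighbors = [
    {"id": "A", "reputation": 0.9, "energy": 0.8, "delay": 0.1},
    {"id": "B", "reputation": 0.3, "energy": 0.5, "delay": 0.4},
]
print(choose_next_hop(neighbors)["id"])
```

Because the rule is probabilistic, the low-delay, high-reputation neighbor is chosen more often but not always, which preserves path exploration.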
Error floor behavior study of LDPC codes for concatenated codes design
NASA Astrophysics Data System (ADS)
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is studied statistically, with experimental results obtained on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. An LDPC code may therefore serve as the inner code in a concatenated coding system with a high-rate outer code, so that an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside a field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. An additional correction in the metal trace region is then performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with that of MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
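The sinogram-completion step described above can be sketched in a few lines: fit combination weights for the prior sinograms on the uncorrupted region, paste the combination into the metal trace, and compensate the boundary residual. This is a simplified toy (the paper interpolates residuals along the trace; a constant boundary offset stands in here), and all names are illustrative.

```python
import numpy as np

def complete_sinogram(sino, trace, priors):
    """Fill the metal-trace region of `sino` (2-D array) with a linear
    combination of forward-projected prior images (`priors`, arrays of the
    same shape). Weights are fit by least squares on the known region."""
    known = ~trace
    A = np.stack([p[known] for p in priors], axis=1)
    w, *_ = np.linalg.lstsq(A, sino[known], rcond=None)
    combo = sum(wi * p for wi, p in zip(w, priors))
    out = sino.copy()
    out[trace] = combo[trace]
    # Crude residual compensation: add the mean residual observed on the
    # known pixels bordering the trace (the paper's scheme is finer).
    boundary = known & (np.roll(trace, 1, 0) | np.roll(trace, -1, 0)
                        | np.roll(trace, 1, 1) | np.roll(trace, -1, 1))
    if boundary.any():
        out[trace] += (sino[boundary] - combo[boundary]).mean()
    return out
```

If the true sinogram really is a linear combination of the priors, the fit on the known region recovers it exactly inside the trace.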
NASA Astrophysics Data System (ADS)
Zhou, Xin; Jun, Sun; Zhang, Bing; Jun, Wu
2017-07-01
In order to improve the reliability of the spectral features extracted by wavelet transform, a method combining wavelet transform (WT) with a bacterial colony chemotaxis-support vector machine (BCC-SVM) algorithm (WT-BCC-SVM) is proposed in this paper. We aimed to identify different kinds of pesticide residues on lettuce leaves in a novel, rapid, and non-destructive way using fluorescence spectroscopy. Fluorescence spectral data were obtained with a Cary Eclipse fluorescence spectrometer from 150 lettuce leaf samples carrying five different kinds of pesticide residues on the leaf surface. Standard normal variate detrending (SNV detrending) and Savitzky-Golay smoothing coupled with SNV detrending (SG-SNV detrending) were used to preprocess the raw spectra. BCC-SVM and plain SVM classification models were established based on full spectra (FS) and on wavelet transform characteristics (WTC) selected by WT, respectively. The results showed that the accuracies of the training, calibration, and prediction sets of the best classification model (SG-SNV detrending-WT-BCC-SVM) were 100%, 98%, and 93.33%, respectively. These results indicate that it is feasible to use WT-BCC-SVM to establish diagnostic models of different kinds of pesticide residues on lettuce leaves.
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
Multiscale time-dependent density functional theory: Demonstration for plasmons.
Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J
2017-08-07
Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the wavefunction itself. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency over classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the consequently larger time steps that can be used in discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.
Goldstone, Jared V; Sundaramoorthy, Munirathinam; Zhao, Bin; Waterman, Michael R; Stegeman, John J; Lamb, David C
2016-01-01
Biosynthesis of steroid hormones in vertebrates involves three cytochrome P450 hydroxylases, CYP11A1, CYP17A1 and CYP19A1, which catalyze sequential steps in steroidogenesis. These enzymes are conserved in the vertebrates, but their origin and existence in other chordate subphyla (Tunicata and Cephalochordata) have not been clearly established. In this study, selected protein sequences of CYP11A1, CYP17A1 and CYP19A1 were compiled and analyzed using multiple sequence alignment and phylogenetic analysis. Our analyses show that cephalochordates have sequences orthologous to vertebrate CYP11A1, CYP17A1 or CYP19A1, and that echinoderms and hemichordates possess CYP11-like but not CYP19 genes. While the cephalochordate sequences have low identity with the vertebrate sequences, reflecting evolutionary distance, the data show apparent origin of CYP11 prior to the evolution of CYP19 and possibly CYP17, thus indicating a sequential origin of these functionally related steroidogenic CYPs. Co-occurrence of the three CYPs in early chordates suggests that the three genes may have coevolved thereafter, and that functional conservation should be reflected in functionally important residues in the proteins. CYP19A1 has the largest number of conserved residues while CYP11A1 sequences are less conserved. Structural analyses of human CYP11A1, CYP17A1 and CYP19A1 show that critical substrate binding site residues are highly conserved in each enzyme family. The results emphasize that the steroidogenic pathways producing glucocorticoids and reproductive steroids are several hundred million years old and that the catalytic structural elements of the enzymes have been conserved over the same period of time. Analysis of these elements may help to identify when precursor functions linked to these enzymes first arose. Copyright © 2015 Elsevier Inc. All rights reserved.
Tseng, Michelle
2017-09-01
Recent research in mosquito population genetics suggests that interpopulation hybridization has likely contributed to the rapid spread of container-breeding mosquitoes. Here, I used laboratory experiments to investigate whether interpopulation Aedes (Stegomyia) albopictus (Skuse) F1 and F2 hybrids exhibit higher fitness than parental populations, and whether hybrid mosquito performance is related to infection by the coevolved protozoan parasite Ascogregarina taiwanensis (Lien and Levine). Overall, there were significant differences in development time, wing length, and survival between the two parental mosquito populations, but no difference in per capita growth rate r. Hybrid mosquitoes were generally intermediate in phenotype to the parentals, except that F2 females were significantly larger than the midparent average. In addition, As. taiwanensis parasites produced the fewest oocysts when reared in hosts of hybrid origin. These data suggest that hybridization between previously isolated mosquito populations can result in slight increases in potential mosquito reproductive success, via increased hybrid body size and via the temporary escape from coevolved parasites. These findings are significant because studies have shown that even slight hybrid vigor can have positive fitness consequences for population persistence. Although this was a laboratory experiment extending only to the F2 generation, many other invasive insects also carry coevolved parasites, and thus the patterns seen in this mosquito system may be broadly relevant. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
MIMO system identification using frequency response data
NASA Technical Reports Server (NTRS)
Medina, Enrique A.; Irwin, R. D.; Mitchell, Jerrel R.; Bukley, Angelia P.
1992-01-01
A solution to the problem of obtaining a multi-input, multi-output state-space model of a system from its individual input/output frequency responses is presented. The Residue Identification Algorithm (RID) identifies the system poles from a transfer-function model of the determinant of the frequency response data matrix. Next, the residue matrices of the modes are computed, guaranteeing that each input/output frequency response is fitted in the least-squares sense. Finally, a realization of the system is computed. Results of the application of RID to experimental frequency responses of a large space structure ground test facility are presented and compared to those obtained via the Eigensystem Realization Algorithm.
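The residue-fitting stage lends itself to a compact sketch: once the poles are known, the partial-fraction model H(jw) = Σ_k r_k/(jw − p_k) is linear in the residues, so they follow from one complex least-squares solve. This illustrates only that middle step of a RID-style identification (the pole extraction and state-space realization stages are not shown).

```python
import numpy as np

def fit_residues(freqs, response, poles):
    """Least-squares residue fit given known poles: build the matrix
    A[m, k] = 1/(j*w_m - p_k) and solve A r = H in the least-squares
    sense. `freqs` are angular frequencies, `response` the measured
    complex frequency response."""
    jw = 1j * np.asarray(freqs)
    A = 1.0 / (jw[:, None] - np.asarray(poles)[None, :])
    r, *_ = np.linalg.lstsq(A, response, rcond=None)
    return r
```

For a lightly damped mode pair with conjugate poles, a noiseless response is reproduced exactly, so the recovered residues match the true ones to machine precision.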
Ising antiferromagnet on the Archimedean lattices.
Yu, Unjong
2015-06-01
Geometric frustration effects were studied systematically with the Ising antiferromagnet on the 11 Archimedean lattices using Monte Carlo methods. The Wang-Landau algorithm was adopted for static properties (specific heat and residual entropy) and the Metropolis algorithm for a freezing order parameter; the exact residual entropy was also found. Based on the degree of frustration and the dynamic properties, the ground states of the lattices were determined. The Shastry-Sutherland lattice and the trellis lattice are weakly frustrated and have two- and one-dimensional long-range-ordered ground states, respectively. The bounce, maple-leaf, and star lattices have the spin ice phase. The spin liquid phase appears in the triangular and kagome lattices.
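For a patch small enough to enumerate, the residual entropy of a frustrated Ising antiferromagnet can be computed exactly by counting ground states, which makes the concept concrete (the paper uses Wang-Landau sampling to reach lattices far beyond brute force). The triangular lattice is built here as a square grid with one extra diagonal under periodic boundaries; lattice size and conventions are choices for the illustration.

```python
import itertools
import math

def triangular_bonds(L):
    # Triangular lattice as a square grid with one added diagonal,
    # periodic boundaries; each site ends up with 6 neighbors
    # (3 bonds per site when each bond is counted once).
    bonds = []
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in ((1, 0), (0, 1), (1, 1)):
                j = ((x + dx) % L) * L + (y + dy) % L
                bonds.append((i, j))
    return bonds

def residual_entropy(L):
    """Exact residual entropy per spin of the antiferromagnetic Ising
    model (E = +sum_ij s_i s_j) on an L x L triangular patch, by
    enumerating all 2^(L*L) spin configurations."""
    bonds = triangular_bonds(L)
    n = L * L
    best, deg = None, 0
    for spins in itertools.product((-1, 1), repeat=n):
        e = sum(spins[i] * spins[j] for i, j in bonds)
        if best is None or e < best:
            best, deg = e, 1
        elif e == best:
            deg += 1
    return math.log(deg) / n

print(residual_entropy(3))
```

Frustration leaves the ground state macroscopically degenerate, so the entropy per spin stays strictly positive even as larger patches are enumerated.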
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2009-01-01
Given a system which can fail in 1 of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
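The simulation side of this setting is easy to sketch: under fault i the sensor residual is a fault signature plus Gaussian noise, and the isolated fault is the hypothesis that minimizes the SSR. The Monte Carlo confusion matrix below is the quantity the paper bounds analytically; the signatures and noise level are invented for the example.

```python
import numpy as np

def confusion_matrix_ssr(signatures, sigma, trials=4000, seed=0):
    """Monte Carlo confusion matrix for SSR-based fault isolation.
    signatures: (k, m) array of mean sensor residuals under each of k
    faults; sigma: Gaussian noise std per sensor. C[i, j] estimates
    P(fault j isolated | fault i occurred)."""
    rng = np.random.default_rng(seed)
    k, m = signatures.shape
    C = np.zeros((k, k))
    for i in range(k):
        obs = signatures[i] + rng.normal(0.0, sigma, size=(trials, m))
        # SSR of each observation against every fault hypothesis
        ssr = ((obs[:, None, :] - signatures[None, :, :]) ** 2).sum(axis=2)
        picks = ssr.argmin(axis=1)
        C[i] = np.bincount(picks, minlength=k) / trials
    return C
```

Each row sums to one by construction; with well-separated signatures relative to the noise, the diagonal approaches one.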
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues.
Perišić, Ognjen
2018-03-16
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self-)adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are the residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted sum using the ratio of predicted and expected numbers of target residues (contact residues and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared to that of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predictive ability was comparable, or even inferior, to that of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially those of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit opposite dynamic behaviors: while the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable.
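A minimal GNM pipeline for scoring kinetically hot residues can be sketched as follows: build the Kirchhoff (connectivity) matrix from C-alpha contacts, diagonalize it, and score residues by the weighted contribution of the fastest modes. This is a sketch under stated assumptions: the cutoff and the fixed number of fast modes are illustrative (the paper adjusts the mode count self-consistently), and the scoring weight is one simple choice.

```python
import numpy as np

def kinetically_hot(coords, cutoff=7.0, n_fast=3):
    """Score residues by the fastest GNM modes. `coords` is an (n, 3)
    array of C-alpha positions. The Kirchhoff matrix has -1 for contacts
    within `cutoff` and the contact degree on the diagonal; the fastest
    (largest-eigenvalue) modes are localized on the most constrained,
    kinetically hot residues."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    contact = (d < cutoff) & (d > 0)
    gamma = -contact.astype(float)
    np.fill_diagonal(gamma, contact.sum(axis=1))
    evals, evecs = np.linalg.eigh(gamma)          # ascending eigenvalues
    fast = evecs[:, -n_fast:]                     # fastest modes last
    return (fast ** 2 * evals[-n_fast:]).sum(axis=1)
```

Residues with the largest scores are the candidate hot residues; a real application would take `coords` from a PDB structure rather than synthetic points.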
An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues
Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis
2011-01-01
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
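The idea of GA-tuned segmentation can be illustrated with a deliberately reduced version: each individual encodes a single grey-level threshold, and fitness is pixel agreement with a hand-traced binary template. This is a toy under stated assumptions (the paper evolves the parameters of a full segmentation process, not one threshold; population size, mutation scale, and selection scheme here are arbitrary choices).

```python
import numpy as np

def ga_threshold(image, template, pop=20, gens=30, seed=0):
    """Evolve a grey-level threshold so that (image > threshold) matches
    the binary `template`. Truncation selection keeps the top half
    (elitism) and children are Gaussian mutations of the parents."""
    rng = np.random.default_rng(seed)
    thr = rng.uniform(image.min(), image.max(), pop)

    def fitness(t):
        return ((image > t) == template).mean()   # pixel agreement

    for _ in range(gens):
        f = np.array([fitness(t) for t in thr])
        order = np.argsort(f)[::-1]
        parents = thr[order[: pop // 2]]
        children = parents + rng.normal(0, 0.05 * np.ptp(image), parents.size)
        thr = np.concatenate([parents, children])
    f = np.array([fitness(t) for t in thr])
    return thr[f.argmax()], f.max()
```

On a synthetic image whose template was produced by thresholding at 0.5, the GA recovers a threshold near 0.5 with near-perfect agreement.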
Johnson, Quentin R; Lindsay, Richard J; Shen, Tongye
2018-02-21
A computational method which extracts the dominant motions from an ensemble of biomolecular conformations via a correlation analysis of residue-residue contacts is presented. The algorithm first renders the structural information into contact matrices, then constructs the collective modes based on the correlated dynamics of a selected set of dynamic contacts. Associated programs can bridge the results to graphics software for further visualization. The aim of this method is to provide an analysis of biopolymer conformations from the contact viewpoint. It may assist in systematically uncovering conformational switching mechanisms in proteins and biopolymer systems in general via statistical analysis of simulation snapshots. In contrast to conventional correlation analyses of Cartesian coordinates (such as distance covariance analysis and Cartesian principal component analysis), this program also provides an alternative way to locate essential collective motions in general. Herein, we detail the algorithm in a stepwise manner and comment on the importance of the method as applied to decoding allosteric mechanisms. © 2018 Wiley Periodicals, Inc.
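The stepwise procedure above (snapshots → contact matrices → dynamic-contact selection → collective modes) can be sketched compactly. This is an illustrative reimplementation, not the authors' program: the contact cutoff and the variance test for "dynamic" contacts are simple stand-ins for the paper's selection criteria.

```python
import numpy as np

def contact_pca(trajectory, cutoff=8.0):
    """Contact-based correlation analysis. `trajectory` is an iterable of
    (n_res, 3) coordinate snapshots. Each snapshot becomes a binary
    contact vector over residue pairs; pairs whose occupancy varies
    across the ensemble (dynamic contacts) are kept, and the collective
    modes are the eigenvectors of their covariance matrix."""
    frames = []
    for xyz in trajectory:
        d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=2)
        iu = np.triu_indices(len(xyz), k=1)
        frames.append((d[iu] < cutoff).astype(float))
    X = np.array(frames)
    dynamic = X.var(axis=0) > 0          # drop always-on/always-off pairs
    Xd = X[:, dynamic] - X[:, dynamic].mean(axis=0)
    cov = Xd.T @ Xd / max(len(X) - 1, 1)
    evals, evecs = np.linalg.eigh(cov)
    return evals[::-1], evecs[:, ::-1], dynamic   # leading modes first
```

The leading eigenvector weights the dynamic contacts that fluctuate together, which is the contact-space analogue of a Cartesian principal component.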
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary result then serves as the basic latent image, and gain-controlled residual deconvolution is used to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
Hannigan, Geoffrey D.; Duhaime, Melissa B.; Koutra, Danai
2018-01-01
Viruses and bacteria are critical components of the human microbiome and play important roles in health and disease. Most previous work has relied on studying bacteria and viruses independently, thereby reducing them to two separate communities. Such approaches are unable to capture how these microbial communities interact, such as through processes that maintain community robustness or allow phage-host populations to co-evolve. We implemented a network-based analytical approach to describe phage-bacteria network diversity throughout the human body. We built these community networks using a machine learning algorithm to predict which phages could infect which bacteria in a given microbiome. Our algorithm was applied to paired viral and bacterial metagenomic sequence sets from three previously published human cohorts. We organized the predicted interactions into networks that allowed us to evaluate phage-bacteria connectedness across the human body. We observed evidence that gut and skin network structures were person-specific and not conserved among cohabitating family members. High-fat diets appeared to be associated with less connected networks. Network structure differed between skin sites, with those exposed to the external environment being less connected and likely more susceptible to network degradation by microbial extinction events. This study quantified and contrasted the diversity of virome-microbiome networks across the human body and illustrated how environmental factors may influence phage-bacteria interactive dynamics. This work provides a baseline for future studies to better understand system perturbations, such as disease states, through ecological networks. PMID:29668682
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization term to restrict the sparsity of the residual term, an L2 regularization term to restrict the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. The results show that our algorithm is more robust than the other methods.
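The collaborative objective described above, min over (c, e) of ||x − Uc − e||² + λ1||e||₁ + λ2||c||², admits a simple alternating solver: the coefficients c have a ridge closed form given e, and e is obtained by soft-thresholding the reconstruction residual. This is one plausible iterative scheme for the stated objective, not necessarily the authors' exact method; the regularization weights are illustrative.

```python
import numpy as np

def pca_l1_represent(x, U, lam1=0.1, lam2=0.01, iters=30):
    """Alternating minimization of ||x - U c - e||^2 + lam1 ||e||_1
    + lam2 ||c||^2. U holds PCA basis vectors in its columns; the L1
    residual term e models sparse occlusions of the tracking target."""
    k = U.shape[1]
    ridge = np.linalg.inv(U.T @ U + lam2 * np.eye(k)) @ U.T
    e = np.zeros_like(x)
    for _ in range(iters):
        c = ridge @ (x - e)                       # ridge step for c
        r = x - U @ c
        e = np.sign(r) * np.maximum(np.abs(r) - lam1 / 2.0, 0.0)  # soft-threshold
    return c, e
```

With an occlusion spike added to a target that otherwise lies in the subspace, the sparse term absorbs the spike while the coefficients recover the clean appearance.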
RRCRank: a fusion method using rank strategy for residue-residue contact prediction.
Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian
2017-09-02
In structural biology, protein residue-residue contacts play a crucial role in protein structure prediction. Researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades, various methods have been developed to predict residue-residue contacts; in particular, significant performance gains have been achieved by fusion methods in recent years. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike the traditional regression or classification strategies, the contact prediction task is regarded as a ranking task: two kinds of features are extracted from correlated-mutation methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. First, we performed two benchmark tests of the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The test results show that RRCRank outperforms other well-developed methods, especially for medium- and short-range contacts. Second, in order to verify the superiority of the ranking strategy, we predicted contacts using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long-range contacts. Third, RRCRank was compared with several state-of-the-art methods in CASP11 and CASP12; it achieves comparable prediction precision and is better than three of the methods in most assessment metrics.
The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which achieves state-of-the-art performance based on the extensive assessment.
Yang, Zefeng; Gu, Shiliang; Wang, Xuefeng; Li, Wenjuan; Tang, Zaixiang; Xu, Chenwu
2008-09-01
CPP-like genes are members of a small family whose protein products feature two similar Cys-rich domains, termed CXC domains; the family is distributed widely in plants and animals but does not exist in yeast. The members of this family in plants play an important role in the development of reproductive tissue and the control of cell division. To gain insight into how CPP-like genes evolved in plants, we conducted a comparative phylogenetic and molecular evolutionary analysis of the CPP-like gene family in Arabidopsis and rice. The phylogeny revealed that both gene loss and species-specific expansion contributed to the evolution of this family in Arabidopsis and rice. Both intron gain and intron loss were observed through intron/exon structure analysis of duplicated genes. Our results also suggest that positive selection was a major force during the evolution of CPP-like genes in plants, and that most amino acid residues under positive selection are disproportionately located in the region outside the CXC domains. Further analysis revealed that the two CXC domains and the sequences connecting them might have coevolved during the long evolutionary period.
Martín-Galiano, Antonio J.; Buey, Rubén M.; Cabezas, Marta; Andreu, José M.
2010-01-01
The molecular switch for nucleotide-regulated assembly and disassembly of the main prokaryotic cell division protein FtsZ is unknown despite the numerous crystal structures that are available. We have characterized the functional motions in FtsZ with a computational consensus of essential dynamics, structural comparisons, sequence conservation, and networks of co-evolving residues. Employing this information, we have constructed 17 mutants, which alter the FtsZ functional cycle at different stages, to modify FtsZ flexibility. The mutant phenotypes ranged from benign to total inactivation and included increased GTPase, reduced assembly, and stabilized assembly. Six mutations clustering at the long cleft between the C-terminal β-sheet and core helix H7 deviated FtsZ assembly into curved filaments with inhibited GTPase, which still polymerize cooperatively. These mutations may perturb the predicted closure of the C-terminal domain onto H7 required for switching between curved and straight association modes and for GTPase activation. By mapping the FtsZ assembly switch, this work also gives insight into FtsZ druggability because the curved mutations delineate the putative binding site of the promising antibacterial FtsZ inhibitor PC190723. PMID:20472561
Mechanism of Nucleic Acid Chaperone Function of Retroviral Nucleocapsid (NC) Proteins
NASA Astrophysics Data System (ADS)
Rouzina, Ioulia; Vo, My-Nuong; Stewart, Kristen; Musier-Forsyth, Karin; Cruceanu, Margareta; Williams, Mark
2006-03-01
Recent studies have highlighted two main activities of the HIV-1 NC protein that contribute to its function as a universal nucleic acid chaperone. The first is the ability of NC to weakly destabilize all nucleic acid (NA) secondary structures, thus resolving the kinetic traps for NA refolding while leaving the annealed state stable. The second is the ability of NC to aggregate NA, facilitating the nucleation step of bi-molecular annealing by increasing the local NA concentration. In this work we use single-molecule DNA stretching and gel-based annealing assays to characterize these two chaperone activities of NC using various HIV-1 NC mutants and several other retroviral NC proteins. Our results suggest that the two NC functions are associated with its zinc fingers and cationic residues, respectively. NC proteins from other retroviruses have similar activities, although expressed to different degrees: NA aggregating ability improves, and NA duplex destabilizing activity decreases, in the sequence MLV NC, HIV NC, RSV NC. In contrast, the HTLV NC protein works very differently from the other NC proteins, and similarly to typical single-stranded NA binding proteins. These features of retroviral NCs co-evolved with the structure of their genomes.
Mechanical Network in Titin Immunoglobulin from Force Distribution Analysis
Wilmanns, Matthias; Gräter, Frauke
2009-01-01
The role of mechanical force in cellular processes is increasingly revealed by single molecule experiments and simulations of force-induced transitions in proteins. How the applied force propagates within proteins determines their mechanical behavior yet remains largely unknown. We present a new method based on molecular dynamics simulations to disclose the distribution of strain in protein structures, here for the newly determined high-resolution crystal structure of I27, a titin immunoglobulin (IG) domain. We obtain a sparse, spatially connected, and highly anisotropic mechanical network. This allows us to detect load-bearing motifs composed of interstrand hydrogen bonds and hydrophobic core interactions, including parts distal to the site to which force was applied. The role of the force distribution pattern for mechanical stability is tested by in silico unfolding of I27 mutants. We then compare the observed force pattern to the sparse network of coevolved residues found in this family. We find a remarkable overlap, suggesting the force distribution to reflect constraints for the evolutionary design of mechanical resistance in the IG family. The force distribution analysis provides a molecular interpretation of coevolution and opens the road to the study of the mechanism of signal propagation in proteins in general. PMID:19282960
Leveraging Hierarchical Population Structure in Discrete Association Studies
Carlson, Jonathan; Kadie, Carl; Mallal, Simon; Heckerman, David
2007-01-01
Population structure can confound the identification of correlations in biological data. Such confounding has been recognized in multiple biological disciplines, resulting in a disparate collection of proposed solutions. We examine several methods that correct for confounding on discrete data with hierarchical population structure and identify two distinct confounding processes, which we call coevolution and conditional influence. We describe these processes in terms of generative models and show that these generative models can be used to correct for the confounding effects. Finally, we apply the models to three applications: identification of escape mutations in HIV-1 in response to specific HLA-mediated immune pressure, prediction of coevolving residues in an HIV-1 peptide, and a search for genotypes that are associated with bacterial resistance traits in Arabidopsis thaliana. We show that coevolution is a better description of confounding in some applications and conditional influence is better in others. That is, we show that no single method is best for addressing all forms of confounding. Analysis tools based on these models are available on the internet as both web-based applications and downloadable source code at http://atom.research.microsoft.com/bio/phylod.aspx. PMID:17611623
Martín-Galiano, Antonio J; Buey, Rubén M; Cabezas, Marta; Andreu, José M
2010-07-16
The molecular switch for nucleotide-regulated assembly and disassembly of the main prokaryotic cell division protein FtsZ is unknown despite the numerous crystal structures that are available. We have characterized the functional motions in FtsZ with a computational consensus of essential dynamics, structural comparisons, sequence conservation, and networks of co-evolving residues. Employing this information, we have constructed 17 mutants, which alter the FtsZ functional cycle at different stages, to modify FtsZ flexibility. The mutant phenotypes ranged from benign to total inactivation and included increased GTPase, reduced assembly, and stabilized assembly. Six mutations clustering at the long cleft between the C-terminal beta-sheet and core helix H7 deviated FtsZ assembly into curved filaments with inhibited GTPase, which still polymerize cooperatively. These mutations may perturb the predicted closure of the C-terminal domain onto H7 required for switching between curved and straight association modes and for GTPase activation. By mapping the FtsZ assembly switch, this work also gives insight into FtsZ druggability because the curved mutations delineate the putative binding site of the promising antibacterial FtsZ inhibitor PC190723. PMID:20472561
The spatial architecture of protein function and adaptation
McLaughlin, Richard N.; Poelwijk, Frank J.; Raman, Arjun; Gosal, Walraj S.; Ranganathan, Rama
2014-01-01
Statistical analysis of protein evolution suggests a design for natural proteins in which sparse networks of coevolving amino acids (termed sectors) comprise the essence of three-dimensional structure and function [1-5]. However, proteins are also subject to pressures deriving from the dynamics of the evolutionary process itself: the ability to tolerate mutation and to be adaptive to changing selection pressures [6-10]. To understand the relationship of the sector architecture to these properties, we developed a high-throughput quantitative method for a comprehensive single-mutation study in which every position is substituted individually to every other amino acid. Using a PDZ domain (PSD95pdz3) model system, we show that sector positions are functionally sensitive to mutation, whereas non-sector positions are more tolerant to substitution. In addition, we find that adaptation to a new binding specificity initiates exclusively through variation within sector residues. A combination of just two sector mutations located near and away from the ligand-binding site suffices to switch the binding specificity of PSD95pdz3 quantitatively towards a class-switching ligand. The localization of functional constraint and adaptive variation within the sector has important implications for understanding and engineering proteins. PMID:23041932
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving the dynamical equations of motion of molecular systems employs internal-variable models that reduce the computation time of such simulations by an order of magnitude relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for the analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N^3) algorithm is demonstrated for a polypeptide molecule with 400 residues.
Cooperative network clustering and task allocation for heterogeneous small satellite network
NASA Astrophysics Data System (ADS)
Qin, Jing
The research of small satellites has emerged as a hot topic in recent years because of their economical prospects and convenience in launching and design. Due to the size and energy constraints of small satellites, forming a small satellite network (SSN) in which all the satellites cooperate with each other to finish tasks is an efficient and effective way to utilize them. In this dissertation, I designed and evaluated a weight-based dominating set clustering algorithm, which efficiently organizes the satellites into stable clusters. The traditional clustering algorithms of large monolithic satellite networks, such as formation flying and satellite swarms, are often limited in the automatic formation of clusters. Therefore, a novel Distributed Weight-based Dominating Set (DWDS) clustering algorithm is designed to address the clustering problems in stochastically deployed SSNs. Considering the unique features of small satellites, this algorithm is able to form clusters efficiently and stably. In this algorithm, satellites are separated into different groups according to their spatial characteristics. A minimum dominating set is chosen as the candidate cluster-head set based on node weights, each a weighted combination of residual energy and connection degree. The cluster heads then admit new neighbors that accept their invitations into the cluster, until the maximum cluster size is reached. Simulation results show that, in an SSN with 200 to 800 nodes, the algorithm is able to efficiently cluster more than 90% of the nodes within 3 seconds. The Deadline Based Resource Balancing (DBRB) task allocation algorithm is designed for efficient task allocation in heterogeneous LEO small satellite networks. In the task allocation process, the dispatcher needs to consider the deadlines of the tasks as well as the residual energy of different resources for best energy utilization.
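The DWDS weighting and cluster-head selection described above can be sketched as follows. This is a minimal illustration, not the dissertation's exact scheme: the coefficients `alpha`/`beta` and the greedy dominating-set strategy are assumptions standing in for the paper's weighted combination of residual energy and connection degree.

```python
# Illustrative sketch: weight-based greedy dominating-set clustering.
# Coefficients and the greedy strategy are assumptions, not the paper's.

def node_weight(residual_energy, degree, alpha=0.6, beta=0.4):
    """Weighted combination of residual energy and connection degree."""
    return alpha * residual_energy + beta * degree

def greedy_dominating_set(adjacency, energy):
    """Pick cluster heads: repeatedly take the uncovered node of highest
    weight until every node is covered (a head itself or its neighbor)."""
    degree = {n: len(nbrs) for n, nbrs in adjacency.items()}
    uncovered = set(adjacency)
    heads = []
    while uncovered:
        head = max(uncovered, key=lambda n: node_weight(energy[n], degree[n]))
        heads.append(head)
        uncovered -= {head} | set(adjacency[head])
    return heads

# Toy 5-satellite topology with per-node residual energy levels.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2], 4: []}
energy = {0: 0.9, 1: 0.5, 2: 0.8, 3: 0.4, 4: 0.7}
heads = greedy_dominating_set(adjacency, energy)
```

Node 2 wins the first round (highest combined weight, covering nodes 0-3), and the isolated node 4 must head its own cluster.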
We assume the tasks adopt a Map-Reduce framework, in which a task can consist of multiple subtasks. The DBRB algorithm is deployed on the head node of a cluster. It gathers the status of each cluster member and calculates their Node Importance Factors (NIFs) from the carried resources, residual power, and compute capacity. The algorithm calculates the number of concurrent subtasks based on the deadlines and allocates the subtasks to the nodes according to their NIF values. The simulation results show that when cluster members carry multiple resources, resources are more balanced and rare resources serve longer in DBRB than in the Earliest Deadline First algorithm. We also show that the algorithm performs well in service isolation by serving multiple tasks with different deadlines. Moreover, the average task response time with various cluster-size settings is well controlled within deadlines. In addition to non-realtime tasks, small satellites may execute realtime tasks as well. Location-dependent tasks, such as image capturing, data transmission, and remote sensing, are realtime tasks that must be started or finished at specific times. The resource energy balancing algorithm for mixed realtime and non-realtime workloads is developed to efficiently schedule the tasks for best system performance. It calculates the residual energy for each resource type and tries to preserve resources and node availability when distributing tasks. Non-realtime tasks can be preempted by realtime tasks to provide better QoS to realtime tasks. I compared the performance of the proposed algorithm with a random-priority scheduling algorithm, with realtime tasks only, non-realtime tasks only, and mixed tasks. The results show that the resource energy balancing algorithm outperforms the latter with both balanced and imbalanced workloads.
Although the resource energy balancing task allocation algorithm for mixed workloads provides a preemption mechanism for realtime tasks, realtime tasks can still fail due to resource exhaustion. Because LEO small satellites fly around the Earth on stable orbits, location-dependent realtime tasks can be considered periodical tasks. Therefore, it is possible to reserve energy for these realtime tasks. The resource energy reservation algorithm preserves energy for realtime tasks when the execution routine of the periodical realtime tasks is known. To reserve energy for tasks starting so early in each period that the node has not yet charged enough energy, an energy wrapping mechanism is also designed to carry over the residual energy from the previous period. The simulation results show that without energy reservation, the realtime task failure rate can exceed 60% when the workload is highly imbalanced. In contrast, resource energy reservation produces zero realtime task failures and leads to equal or better aggregate system throughput than the non-reservation algorithm. The proposed algorithm also preserves more energy because it avoids task preemption. (Abstract shortened by ProQuest.)
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out and revealed the superiority of this new, powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive abilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.
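The firefly algorithm mentioned above is a swarm metaheuristic in which dimmer fireflies (worse solutions) are attracted toward brighter ones. A minimal continuous 1-D sketch, under assumed parameter values and an assumed toy cost function (the abstract applies a binary variant to wavelength selection, which is not reproduced here):

```python
# Toy 1-D firefly search: each firefly is a candidate solution; dimmer
# fireflies (higher cost) move toward brighter ones with distance-damped
# attraction plus a shrinking random step. Parameters are illustrative.
import math
import random

def firefly_minimize(cost, n=15, iters=150, beta0=1.0, gamma=0.01,
                     alpha=0.2, lo=0.0, hi=6.0, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost(xs[j]) < cost(xs[i]):        # j is brighter
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
        alpha *= 0.95                                # anneal the random walk
    return min(xs, key=cost)

# Assumed toy objective: minimum at x = 3.
best = firefly_minimize(lambda x: (x - 3.0) ** 2)
```

Note the brightest firefly never moves until overtaken, so the population's best cost never worsens; a genuine variable-selection version would replace the position update with bit flips over a wavelength mask.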
Mate-choice copying: A fitness-enhancing behavior that evolves by indirect selection.
Santos, Mauro; Sapage, Manuel; Matos, Margarida; Varela, Susana A M
2017-06-01
A spatially explicit, individual-based simulation model is used to study the spread of an allele for mate-choice copying (MCC) through horizontal cultural transmission when female innate preferences do or do not coevolve with a male viability-increasing trait. Evolution of MCC is unlikely when innate female preferences coevolve with the trait, as copier females cannot express a higher preference than noncopier females for high-fitness males. However, if a genetic polymorphism for innate preference persists in the population, MCC can evolve by indirect selection through hitchhiking: the copying allele hitchhikes on the male trait. MCC can be an adaptive behavior, that is, a behavior that increases a population's average fitness relative to populations without MCC, even though the copying allele itself may be neutral or mildly deleterious. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Eco-evolutionary dynamics in a coevolving host-virus system.
Frickel, Jens; Sieber, Michael; Becks, Lutz
2016-04-01
Eco-evolutionary dynamics have been shown to be important for understanding population and community stability and their adaptive potential. However, coevolution in the framework of eco-evolutionary theory has not been addressed directly. Combining experiments on an algal host and its viral parasite with mathematical model analyses, we show eco-evolutionary dynamics in antagonistic coevolving populations. The interaction between antagonists initially resulted in arms race dynamics (ARD) with selective sweeps, causing oscillating host-virus population dynamics. However, ARD ended and populations stabilised after the evolution of a generally resistant host, after which a trade-off between host resistance and growth maintained host diversity over time (trade-off driven dynamics). Most importantly, our study shows that the interaction between ecology and evolution had important consequences for the predictability of the mode and tempo of adaptive change and for the stability and adaptive potential of populations. © 2016 John Wiley & Sons Ltd/CNRS.
Entropic determination of the phase transition in a coevolving opinion-formation model.
Burgos, E; Hernández, Laura; Ceva, H; Perazzo, R P J
2015-03-01
We study an opinion formation model by means of a coevolving complex network where the vertices represent the individuals, characterized by their evolving opinions, and the edges represent the interactions among them. The network adapts to the spreading of opinions in two ways: not only do connected agents interact and eventually change their thinking, but an agent may also rewire one of its links to a neighbor holding the same opinion as its own. The dynamics, based on a global majority rule, depends on an external parameter that controls the plasticity of the network. We show how the information entropy associated with the distribution of group sizes allows us to locate the phase transition between a phase of full consensus and another where different opinions coexist. We also determine the minimum size of the most informative sampling. At the transition, the distribution of the sizes of groups holding the same opinion is scale free.
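The entropy diagnostic described above can be sketched directly: Shannon entropy of the opinion-group size distribution is low near full consensus (one giant group) and high when many comparable groups coexist. A minimal illustration, assuming groups are simply the sets of agents sharing an opinion label (the paper's groups live on a coevolving network, which is simplified away here):

```python
# Minimal sketch: Shannon entropy of the group-size distribution.
import math
from collections import Counter

def group_size_entropy(opinions):
    """Shannon entropy (in nats) of the opinion-group size distribution."""
    sizes = Counter(opinions).values()
    total = sum(sizes)
    probs = [s / total for s in sizes]
    return -sum(p * math.log(p) for p in probs)

consensus = [1] * 100                      # full consensus: one group
fragmented = [i % 10 for i in range(100)]  # ten equal-sized groups
```

Consensus gives entropy 0; ten equal groups give log(10) ≈ 2.30, so a sharp drop in this quantity marks the transition to consensus.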
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1990-01-01
It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black-box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and linear systems. Program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are also included.
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues
Perišić, Ognjen
2018-01-01
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm that is designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self) adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted sum calculation using the ratio of predicted and expected numbers of target residues (contact and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared to the ability of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predictive ability was comparable, or even inferior, to that of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially if of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit opposite dynamic behaviors: while the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable. PMID:29547506
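The GNM step underlying the approach above can be sketched in a few lines: build a Kirchhoff (connectivity) matrix from residue contacts, extract a fastest mode (largest-eigenvalue eigenvector, here via plain power iteration), and flag the residues with the largest squared amplitude as kinetically hot. The contact map below is an invented toy chain, not data from the paper, and the paper's entropy-based weighting over several fast modes is reduced to a single mode:

```python
# Sketch: Kirchhoff matrix from contacts, fastest GNM mode by power
# iteration, hot residues = largest squared mode amplitude. Toy data.

def kirchhoff(n, contacts):
    gamma = [[0.0] * n for _ in range(n)]
    for i, j in contacts:
        gamma[i][j] = gamma[j][i] = -1.0
        gamma[i][i] += 1.0
        gamma[j][j] += 1.0
    return gamma

def fastest_mode(gamma, iters=500):
    """Dominant (largest-eigenvalue) eigenvector via power iteration."""
    n = len(gamma)
    v = [1.0 / (k + 1) for k in range(n)]  # asymmetric start vector
    for _ in range(iters):
        w = [sum(gamma[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 5-residue chain with one extra contact bridging residues 1 and 3.
contacts = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
mode = fastest_mode(kirchhoff(5, contacts))
hot = max(range(5), key=lambda i: mode[i] ** 2)
```

As expected, the fastest mode concentrates on the most constrained (highest-connectivity) residues, here 1 and 3.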
NASA Technical Reports Server (NTRS)
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng
2017-12-01
Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two portions: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that considers the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation mechanism of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability in Zeu derivation and simulation of the mean temperature within the mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied in simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data it is found that the biomass heating effects can lead to a 3.4°C maximum positive temperature difference in the upper layers but a 0.67°C maximum negative temperature difference in the deep layers.
Co-evolving prisoner's dilemma: Performance indicators and analytic approaches
NASA Astrophysics Data System (ADS)
Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.
2017-02-01
Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of the performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), the non-cooperative strategy has an edge and dominates a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), the cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge, and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to include more spatial correlation in a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
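The two performance indicators contrasted above are easy to make concrete. A minimal sketch with standard prisoner's dilemma payoffs (the specific payoff values are illustrative assumptions, not the paper's parameters): Case A scores an agent by its payoff per opponent (an average), Case B by the total payoff summed over all opponents, so under Case B a well-connected cooperator can outscore a sparsely connected defector.

```python
# Sketch of the two indicators: average payoff (Case A) vs total payoff
# (Case B) for an agent playing all its neighbors. Payoffs illustrative.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def payoffs(strategy, neighbor_strategies):
    return [PAYOFF[(strategy, s)] for s in neighbor_strategies]

def indicator_case_a(strategy, neighbors):
    """Case A: payoff per opponent."""
    p = payoffs(strategy, neighbors)
    return sum(p) / len(p)

def indicator_case_b(strategy, neighbors):
    """Case B: total payoff over all opponents."""
    return sum(payoffs(strategy, neighbors))

# Degree-4 cooperator vs degree-1 defector, all neighbors cooperating:
coop_total = indicator_case_b('C', ['C', 'C', 'C', 'C'])   # 12
defect_total = indicator_case_b('D', ['C'])                # 5
```

Per opponent (Case A) the defector still scores 5 against the cooperator's 3, which is one way to see why the two definitions favor opposite strategies.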
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems fault detection and isolation through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis, to identify the correlations among involved physical variables, defined equations and a set of faults which may occur in the system (related to both auxiliary components malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis and the maximum theoretical fault isolability, achievable with a minimal number of installed sensors, is investigated. The achieved results proved the capability of the algorithm to theoretically detect and isolate almost all faults with the only use of stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, to consider quantitative residual deviations from normal conditions and achieve univocal fault isolation.
P450 AND METABOLISM IN TOXICOLOGY
The cytochromes P450 catalyze the initial phase of detoxification of many environmental chemicals, xenobiotics, drugs, and secondary metabolic products of plants. Plant secondary chemicals can be highly toxic, and they evolved in coevolving plant-animal warfare; the plants ...
Antiviral immunity and virus vaccines
USDA-ARS?s Scientific Manuscript database
As obligate intracellular organisms, viruses have co-evolved with their respective host species, which in turn have evolved diverse and sophisticated capabilities to protect themselves against viral infections and their associated diseases. Viruses have also evolved a remarkable variety of strategie...
NASA Astrophysics Data System (ADS)
Eladj, Said; Bansir, Fateh; Ouadfeul, Sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes. The advantage of the combined HGA/SAA is the use of a global search approach over a large population of local maxima to significantly improve the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Auto Statics (HGA/SAA) job, we first evaluated, by testing in the "Steepest Ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations is equal to 40; this parameter defines the participation of the "SA" algorithm in the hybrid approach; (2) the minimum eigenvalue for SA = 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for HGA/SAA. Using the values of residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow us to achieve a more accurate structural interpretation.
Key words: Hybrid Genetic Algorithm, number of generations, model space, local maxima, number of hill-climbing iterations, minimum eigenvalue, cross-correlation table
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper focuses primarily on the theoretical aspects of the retrieval algorithm, including a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as a detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
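The consistency requirement described above can be sketched as a simple selection problem: among candidate hydrometeor profiles with precomputed brightness temperatures, pick the one whose calculated temperatures best match the observations, and keep the residual as a quality diagnostic. The channel counts, profile names, and temperature values below are invented for illustration, not from the algorithm's actual database.

```python
# Sketch: pick the candidate profile minimizing the residual between
# observed and calculated brightness temperatures. All numbers invented.

def residual(observed, calculated):
    """Root-mean-square difference over channels (kelvin)."""
    n = len(observed)
    return (sum((o - c) ** 2 for o, c in zip(observed, calculated)) / n) ** 0.5

def best_profile(observed, candidates):
    """Return (profile_name, residual) for the best-matching profile."""
    scored = {name: residual(observed, tb) for name, tb in candidates.items()}
    name = min(scored, key=scored.get)
    return name, scored[name]

candidates = {
    'convective': [250.0, 230.0, 210.0],   # 3 hypothetical channels
    'stratiform': [265.0, 245.0, 235.0],
}
name, err = best_profile([263.0, 247.0, 233.0], candidates)
```

A large minimum residual signals that no profile in the database explains the observations well, which is how the residual doubles as an accuracy estimate.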
Iterative refinement of structure-based sequence alignments by Seed Extension
Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook
2009-01-01
Background: Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results: RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion: RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment.
It can be used as a standalone procedure following a regular structure-based sequence alignment, or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithms in many structure alignment programs. PMID:19589133
Bandyopadhyay, Deepak; Huan, Jun; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander
2009-11-01
Protein function prediction is one of the central problems in computational biology. We present a novel automated protein structure-based function prediction method using libraries of local residue packing patterns that are common to most proteins in a known functional family. Critical to this approach is the representation of a protein structure as a graph where residue vertices (residue name used as a vertex label) are connected by geometrical proximity edges. The approach employs two steps. First, it uses a fast subgraph mining algorithm to find all occurrences of family-specific labeled subgraphs for all well characterized protein structural and functional families. Second, it queries a new structure for occurrences of a set of motifs characteristic of a known family, using a graph index to speed up Ullman's subgraph isomorphism algorithm. The confidence of function inference from structure depends on the number of family-specific motifs found in the query structure compared with their distribution in a large non-redundant database of proteins. This method can assign a new structure to a specific functional family in cases where sequence alignments, sequence patterns, structural superposition and active site templates fail to provide accurate annotation.
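The labeled-graph representation described above is straightforward to sketch. The following is a minimal illustration, where the 6.5 Å cutoff, the tuple layout and the function name are illustrative assumptions rather than the authors' implementation:

```python
import math

def build_residue_graph(residues, cutoff=6.5):
    """Represent a protein structure as a labeled graph.

    residues: list of (name, (x, y, z)) tuples, one per residue
    (e.g. C-alpha coordinates).  Vertices carry the residue name as
    a label; an edge joins two residues whose coordinates lie within
    `cutoff` angstroms, a common geometrical-proximity criterion.
    """
    labels = [name for name, _ in residues]
    edges = set()
    for i in range(len(residues)):
        for j in range(i + 1, len(residues)):
            (_, a), (_, b) = residues[i], residues[j]
            if math.dist(a, b) <= cutoff:
                edges.add((i, j))
    return labels, edges

# Toy example: three residues, only the first two within the cutoff.
toy = [("ALA", (0.0, 0.0, 0.0)),
       ("GLY", (5.0, 0.0, 0.0)),
       ("LYS", (20.0, 0.0, 0.0))]
labels, edges = build_residue_graph(toy)
```

Family-specific packing motifs would then be frequent subgraphs mined from many such graphs; the sketch covers only the graph construction step.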
NASA Astrophysics Data System (ADS)
Park, Keecheol; Oh, Kyungsuk
2017-09-01
To investigate the effect of leveling conditions on residual stress evolution during the leveling of hot rolled high strength steels, the in-plane residual stresses of sheet processed under controlled conditions at the skin-pass mill and levelers were measured by the cutting method. The residual stress was localized near the edge of the sheet, and the affected region expanded as the sheet thickness increased. The magnitude of the residual stress within the sheet decreased with increasing deformation during the leveling process, but the residual stress was not removed completely. The magnitude of camber in a cut plate could be predicted from the residual stress distribution. A numerical algorithm was developed for analysing the effect of leveling conditions on residual stress; it accounts for plastic deformation during leveling, tension, work roll bending, and the initial state of the sheet (residual stress and curl distribution). The validity of the simulated results was verified by comparison with the experimentally measured residual stress and curl in a sheet.
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2017-01-05
For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural networks and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out, and the discussion revealed the superiority of this new, powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive abilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration. Copyright © 2016 Elsevier B.V. All rights reserved.
Implementation and performance of shutterless uncooled micro-bolometer cameras
NASA Astrophysics Data System (ADS)
Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.
2015-06-01
A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence coding efficiency is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) of the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very interesting for thermal infrared applications where small weight and size, and continuous operation, are important.
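The calibration-set-plus-global-coefficient idea can be sketched as a per-pixel gain/offset model whose offset drifts linearly with camera temperature. This is a generic sketch under an assumed linear drift model, not the Xenics onboard algorithm; all names and values are illustrative:

```python
import numpy as np

def shutterless_nuc(raw, gain, offset_cal, t_cal, t_now, k_global):
    """Shutterless non-uniformity correction (sketch).

    gain, offset_cal: per-pixel maps from a calibration set taken at
    temperature t_cal; k_global: a single global temperature
    coefficient scaling the offset drift with camera temperature.
    """
    offset = offset_cal + k_global * (t_now - t_cal)
    return gain * (raw - offset)

rng = np.random.default_rng(0)
truth = np.full((4, 4), 100.0)                   # uniform scene
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))  # fixed-pattern gain
off = 5.0 * rng.standard_normal((4, 4))          # fixed-pattern offset
# Simulated raw frame at 30 C with calibration taken at 25 C and a
# drift coefficient of 2.0 counts per degree:
raw = truth / gain + off + 2.0 * (30.0 - 25.0)
corrected = shutterless_nuc(raw, gain, off, 25.0, 30.0, 2.0)
```

With a correct drift model the fixed pattern cancels exactly; in practice the RNU measures what is left over.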
Prisilla, A; Prathiviraj, R; Chellapandi, P
2017-04-01
Clostridium botulinum (group III) is an anaerobic bacterium producing C2 toxin along with botulinum neurotoxins. C2 toxin belongs to the binary toxin A family in the bacterial ADP-ribosylation superfamily. The structural and functional diversity of the binary toxin A family was inferred from different evolutionary constraints to determine the avirulence state of C2 toxin. Evolutionary genetic analyses revealed evidence that the C2 toxin cluster evolved through horizontal gene transfer from phage or plasmid origins, site-specific insertion by gene divergence, and homologous recombination. They also showed that residues in the conserved NAD-binding core, the family-specific domain structure, and functional motifs predetermine its virulence state; any mutational change in these residues destabilizes its structure-function relationship. Avirulent mutants of C2 toxin were screened and selected from sites crucial for the catalytic function of C2I and the pore-forming function of C2II. We found coevolved amino acid pairs that play an essential role in stabilizing its local structural environment. The avirulent toxins selected in this study were evaluated by detecting evolutionary constraints on the stability of the protein backbone structure, folding and conformational dynamic space, and antigenic peptides. We found 4 avirulent mutants of C2I and 5 mutants of C2II showing greater stability in their local structural environment and backbone structure, with rapid folding rates and low conformational flexibility at the mutated sites. Evolutionary-constraint-free mutants lacking catalytic and pore-forming function are therefore suggested as potential immunogenic candidates for treating C. botulinum-infected poultry and veterinary animals. Single amino acid substitutions in C2 toxin thus provide major insight into its structure-function link, not only for the molecule but also for its pathogenesis.
Evaluation of Aster Images for Characterization and Mapping of Amethyst Mining Residues
NASA Astrophysics Data System (ADS)
Markoski, P. R.; Rolim, S. B. A.
2012-07-01
The objective of this work was to evaluate the potential of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images, from the VNIR (Visible and Near Infrared) and SWIR (Short Wave Infrared) subsystems, for the discrimination and mapping of amethyst mining residues (basalt) in the Ametista do Sul region, Rio Grande do Sul State, Brazil. This region supplies most of the world's amethyst production. The basalt is extracted during the mining process and deposited outside the mine; as a result, mounds of residue (basalt) accumulate. These mounds are often much smaller than the ASTER pixel size (VNIR, 15 m; SWIR, 30 m), so each pixel becomes a mixture of various materials, hampering identification and mapping. To address this problem, the multispectral Maximum Likelihood (MaxVer) algorithm and the hyperspectral Spectral Angle Mapper (SAM) technique were used in this work. Images from the ASTER VNIR and SWIR subsystems were used to perform the classifications. The SAM technique produced better results than the MaxVer algorithm. The main error in both techniques was confusion between the "shadow" and "mining residues/basalt" classes. With the SAM technique the confusion decreased because it employs the basalt spectral curve as a reference, while the multispectral technique employs groups of pixels that may be spectrally mixed with other targets. The results showed that in tropical terrains such as the study area, ASTER data can be effective for the characterization of mining residues.
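The SAM classifier compares spectra by the angle between them, which is why it is less confused by shadow: scaling a spectrum by a brightness factor leaves the angle unchanged. A minimal sketch, where the six-band "basalt" curve is invented for illustration:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel
    spectrum and a reference spectrum.  Small angles mean similar
    materials, independent of brightness scaling."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

basalt_ref = np.array([0.12, 0.10, 0.09, 0.11, 0.14, 0.13])  # illustrative
shadowed_basalt = 0.4 * basalt_ref   # same spectral shape, darker
vegetation = np.array([0.05, 0.08, 0.30, 0.35, 0.20, 0.18])

a_shadow = spectral_angle(shadowed_basalt, basalt_ref)  # ~0: scale-invariant
a_other = spectral_angle(vegetation, basalt_ref)        # clearly larger
```

Classification then assigns each pixel to the reference with the smallest angle, subject to an angle threshold.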
Designing Antibacterial Peptides with Enhanced Killing Kinetics
Waghu, Faiza H.; Joseph, Shaini; Ghawali, Sanket; Martis, Elvis A.; Madan, Taruna; Venkatesh, Kareenhalli V.; Idicula-Thomas, Susan
2018-01-01
Antimicrobial peptides (AMPs) are gaining attention as substitutes for antibiotics in order to combat the risk posed by multi-drug resistant pathogens. Several research groups are engaged in design of potent anti-infective agents using natural AMPs as templates. In this study, a library of peptides with high sequence similarity to Myeloid Antimicrobial Peptide (MAP) family were screened using popular online prediction algorithms. These peptide variants were designed in a manner to retain the conserved residues within the MAP family. The prediction algorithms were found to effectively classify peptides based on their antimicrobial nature. In order to improve the activity of the identified peptides, molecular dynamics (MD) simulations, using bilayer and micellar systems could be used to design and predict effect of residue substitution on membranes of microbial and mammalian cells. The inference from MD simulation studies well corroborated with the wet-lab observations indicating that MD-guided rational design could lead to discovery of potent AMPs. The effect of the residue substitution on membrane activity was studied in greater detail using killing kinetic analysis. Killing kinetics studies on Gram-positive, negative and human erythrocytes indicated that a single residue change has a drastic effect on the potency of AMPs. An interesting outcome was a switch from monophasic to biphasic death rate constant of Staphylococcus aureus due to a single residue mutation in the peptide. PMID:29527201
NASA Astrophysics Data System (ADS)
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to parameter estimation for residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (z0 and x0, respectively) and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies were evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies were considered, and the results obtained demonstrate the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, including a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), were considered to estimate the model parameters of the ore bodies. These applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm was also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both the synthetic and field data examples, the algorithm provides reliable parameter estimates that lie within the sampling limits of the M-H sampler. Although DE is not yet a common inversion technique in geophysics, it merits wider interest for parameter estimation from potential field data, considering its good accuracy, low computational cost (in the present problem) and the fact that a well-constructed initial guess is not required to reach the global minimum.
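The DE/best/1/bin strategy named above can be sketched on a synthetic anomaly. The forward model, bounds and control parameters (F, CR, population size) below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def anomaly(x, A, x0, z0, q):
    """Illustrative residual-anomaly forward model for an idealized
    source with amplitude A, origin x0, depth z0 and shape factor q
    (q = 1.5 corresponds to a sphere-like source)."""
    return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

x = np.linspace(-50.0, 50.0, 101)
g_obs = anomaly(x, 500.0, 5.0, 10.0, 1.5)        # synthetic "observed" data

def misfit(p):
    """Error energy: sum of squared residuals."""
    return float(np.sum((anomaly(x, *p) - g_obs) ** 2))

# DE/best/1/bin: mutate around the current best individual, then
# apply binomial (uniform) crossover and greedy selection.
lo = np.array([100.0, -20.0, 2.0, 0.5])          # search bounds
hi = np.array([1000.0, 20.0, 30.0, 2.5])
NP, F, CR = 30, 0.7, 0.9
pop = lo + rng.random((NP, 4)) * (hi - lo)
cost = np.array([misfit(p) for p in pop])
for _ in range(400):
    best = pop[np.argmin(cost)]
    for i in range(NP):
        r1, r2 = rng.choice([j for j in range(NP) if j != i],
                            size=2, replace=False)
        mutant = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
        cross = rng.random(4) < CR
        cross[rng.integers(4)] = True            # force one mutant gene
        trial = np.where(cross, mutant, pop[i])
        c = misfit(trial)
        if c < cost[i]:                          # greedy selection
            pop[i], cost[i] = trial, c
best = pop[np.argmin(cost)]
```

No initial guess is needed, only box bounds on the parameters; that is the practical advantage the abstract emphasizes.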
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample using UV spectral data. More robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Tanis, F. J.; Jain, S. C.
1984-01-01
Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudooptical depth method; and (4) the residual component algorithm.
Identification of residue pairing in interacting β-strands from a predicted residue contact map.
Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng
2018-04-19
Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information about residue pairing in β strands can be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly-β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), achieving F1-scores of ~62% and ~76% at the residue and strand levels, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves impressively higher performance, with F1-scores reaching ~76% and ~86% at the residue and strand levels, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly-β proteins the average TM-score reaches 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction from RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact map, and its results can be directly applied to facilitate the practical structure prediction of mainly-β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or the GitHub address https://github.com/wzmao/RDb2C .
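The characteristic contact pattern that ridge detection exploits can be illustrated directly: an anti-parallel β ladder appears as a run of contacts along an anti-diagonal of the contact map. The score below is a toy stand-in for the ridge-detection stage, not RDb2C's actual detector or its random-forest stages:

```python
import numpy as np

def antiparallel_ridge_score(cmap, i, j, half=2):
    """Score the (i, j) contact as part of an anti-parallel beta
    ladder: anti-parallel pairing yields consecutive contacts along
    the anti-diagonal (i+1, j-1), (i-1, j+1), ... of the map."""
    n = cmap.shape[0]
    s = 0.0
    for k in range(-half, half + 1):
        a, b = i + k, j - k
        if 0 <= a < n and 0 <= b < n:
            s += cmap[a, b]
    return s / (2 * half + 1)

n = 20
cmap = 0.05 * np.ones((n, n))          # noisy background "probabilities"
for k in range(-2, 3):                 # an anti-parallel ladder around (5, 15)
    cmap[5 + k, 15 - k] = 0.9

on_ridge = antiparallel_ridge_score(cmap, 5, 15)
off_ridge = antiparallel_ridge_score(cmap, 5, 10)
```

Averaging along the anti-diagonal amplifies true ladders relative to isolated false-positive contacts; a parallel ladder would be scored along the main diagonal instead.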
Fast transform decoding of nonsystematic Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.
1989-01-01
A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.
Jin, Zhigang; Ma, Yingying; Su, Yishan; Li, Shuo; Fu, Xiaomei
2017-07-19
Underwater sensor networks (UWSNs) have become a hot research topic because of their various aquatic applications. As the underwater sensor nodes are powered by built-in batteries which are difficult to replace, extending the network lifetime is a most urgent need. Due to the low and variable transmission speed of sound, the design of reliable routing algorithms for UWSNs is challenging. In this paper, we propose a Q-learning based delay-aware routing (QDAR) algorithm to extend the lifetime of underwater sensor networks. In QDAR, a data collection phase is designed to adapt to the dynamic environment. With the application of the Q-learning technique, QDAR can determine a global optimal next hop rather than a greedy one. We define an action-utility function in which residual energy and propagation delay are both considered for adequate routing decisions. Thus, the QDAR algorithm can extend the network lifetime by uniformly distributing the residual energy and provide lower end-to-end delay. The simulation results show that our protocol can yield nearly the same network lifetime, and can reduce the end-to-end delay by 20-25% compared with a classic lifetime-extended routing protocol (QELAR). PMID:28753951
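The action-utility idea in QDAR, folding residual energy and propagation delay into the reward of a standard Q-learning update, can be sketched as follows. The weights, learning rate and two-hop toy topology are illustrative assumptions, not the paper's values:

```python
def reward(residual_energy, delay, w_e=0.6, w_d=0.4):
    """Action utility trading off the next hop's residual energy
    (normalized to [0, 1]) against its propagation delay
    (normalized); weights are illustrative."""
    return w_e * residual_energy - w_d * delay

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Standard Q-learning update: the max over the next state's
    actions is what makes the choice globally rather than greedily
    optimal."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Two candidate next hops from node "s": hop "a" has more residual
# energy and less delay, so repeated updates should favor it.
Q = {"s": {"a": 0.0, "b": 0.0}, "a": {}, "b": {}}
for _ in range(20):
    q_update(Q, "s", "a", reward(0.9, 0.2), "a")
    q_update(Q, "s", "b", reward(0.3, 0.8), "b")
best_hop = max(Q["s"], key=Q["s"].get)
```

Because the reward penalizes low-energy hops, traffic spreads away from depleted nodes, which is how the residual energy ends up uniformly distributed.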
NASA Astrophysics Data System (ADS)
Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2016-05-01
A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the resolution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is that it efficiently reduces the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. The adaptive algorithm is built on three correlated ingredients: a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased by using an a posteriori error estimator (the residue) to select the most relevant directions in which to enlarge the space. Two examples were selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a reference spectrum computed using the same potential energy surface, for an active space reduced by about 90%.
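The residue-driven expansion of the active space can be sketched on a toy matrix eigenproblem: diagonalize in the current space, form the residual H c − E c in the full space, and add the basis function carrying the largest residual component. The matrix below is a synthetic stand-in (not a vibrational Hamiltonian), and the loop uses a fixed number of expansions instead of the paper's convergence criterion:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "Hamiltonian" on a 60-function basis: diagonal energies plus a
# weak symmetric coupling.
n = 60
H = np.diag(np.arange(n, dtype=float)) + 0.1 * rng.standard_normal((n, n))
H = (H + H.T) / 2.0

active = list(range(5))                # suitable starting space
for _ in range(10):                    # fixed number of expansions
    Ha = H[np.ix_(active, active)]
    E, V = np.linalg.eigh(Ha)          # diagonalize in the active space
    c = np.zeros(n)
    c[active] = V[:, 0]                # ground-state approximation
    residual = H @ c - E[0] * c        # a posteriori error estimator
    outside = [i for i in range(n) if i not in active]
    best = max(outside, key=lambda i: abs(residual[i]))
    active.append(best)                # expand along the largest residue

E_exact = np.linalg.eigh(H)[0][0]      # full-space reference
```

The active-space eigenvalue stays variational (it bounds the exact one from above) while the residue steers the expansion toward the few basis functions that matter.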
Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis
Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...
2017-10-16
This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow-moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly-free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database, which comprises videos with abandoned objects in a cluttered industrial scenario.
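The reference/target decomposition can be illustrated with a toy subspace model: learn the reference subspace by SVD, then flag target frames whose residue outside that subspace is large. This is a simplified stand-in for the paper's union-of-subspaces plus sparse-residue formulation, with all dimensions invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference video: 40 frames (columns) of 100 pixels each, spanning a
# 3-dimensional subspace.
basis = rng.standard_normal((100, 3))
ref = basis @ rng.standard_normal((3, 40))

# Learn the reference subspace by truncated SVD.
U, s, _ = np.linalg.svd(ref, full_matrices=False)
rank = int(np.sum(s > 1e-8))
Ur = U[:, :rank]

def residue_energy(frame):
    """Energy of the frame outside the reference subspace; an
    abandoned object shows up as a large, spatially sparse residue."""
    r = frame - Ur @ (Ur.T @ frame)
    return float(np.linalg.norm(r))

normal = basis @ rng.standard_normal(3)    # frame inside the subspace
anomalous = normal.copy()
anomalous[10:15] += 5.0                    # localized anomaly
```

Thresholding the residue energy (or inspecting where the residue is nonzero) then localizes the anomaly without frame-by-frame synchronization against the reference.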
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework
Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI) enable measurement of the phase of a physical quantity additionally to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them to a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as creating a framework that allows to combine them in modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and test of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
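The tile-unwrap-then-merge idea can be sketched in one dimension: unwrap each tile with Itoh's method, then shift each tile by the integer multiple of 2π that best matches its predecessor. The 2-D tessellation, model-based tile unwrapping and quality-guided merging of the paper are omitted from this sketch:

```python
import numpy as np

def unwrap_tile(wrapped):
    """Itoh's method: rewrap the sample-to-sample differences into
    (-pi, pi] and integrate them."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])

def tile_unwrap(wrapped, tile=25):
    """Unwrap each tile independently, then merge by shifting every
    tile by the multiple of 2*pi that best matches its predecessor's
    last sample (a 1-D stand-in for the 2-D tile merger)."""
    tiles = [unwrap_tile(wrapped[i:i + tile])
             for i in range(0, len(wrapped), tile)]
    out = tiles[0]
    for t in tiles[1:]:
        k = np.round((out[-1] - t[0]) / (2 * np.pi))
        out = np.concatenate([out, t + 2 * np.pi * k])
    return out

true_phase = np.linspace(0.0, 18.0, 100)   # smooth ramp over several 2*pi
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
recovered = tile_unwrap(wrapped)
```

In 2-D the merge order matters, which is exactly where a quality-guided merger and residue handling come in; the 1-D case has a single, unambiguous order.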
Network Analysis of Earth's Co-Evolving Geosphere and Biosphere
NASA Astrophysics Data System (ADS)
Hazen, R. M.; Eleish, A.; Liu, C.; Morrison, S. M.; Meyer, M.; Consortium, K. D.
2017-12-01
A fundamental goal of Earth science is the deep understanding of Earth's dynamic, co-evolving geosphere and biosphere through deep time. Network analysis of geological and biological "big data" provides an interactive, quantitative, and predictive visualization framework to explore complex and otherwise hidden high-dimensional features of diversity, distribution, and change in the evolution of Earth's geochemistry, mineralogy, paleobiology, and biochemistry [1]. Networks also facilitate quantitative comparison of different geological time periods, tectonic settings, and geographical regions, as well as different planets and moons, through network metrics, including density, centralization, diameter, and transitivity. We render networks by employing data related to geographical, paragenetic, environmental, or structural relationships among minerals, fossils, proteins, and microbial taxa. An important recent finding is that the topography of many networks reflects parameters not explicitly incorporated in constructing the network. For example, networks for minerals, fossils, and protein structures reveal embedded qualitative time axes, with additional network geometries possibly related to extinction and/or other punctuation events (see Figure). Other axes related to chemical activities and volatile fugacities, as well as pressure and/or depth of formation, may also emerge from network analysis. These patterns provide new insights into the way planets evolve, especially Earth's co-evolving geosphere and biosphere. 1. Morrison, S.M. et al. (2017) Network analysis of mineralogical systems. American Mineralogist 102, in press. Figure Caption: A network of Phanerozoic Era fossil animals from the past 540 million years includes blue, red, and black circles (nodes) representing family-level taxa and grey lines (links) between coexisting families. Age information was not used in the construction of this network; nevertheless an intrinsic timeline is embedded in the network topology. 
In addition, two mass extinction events appear as "pinch points" in the network.
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post-processing for resolution enhancement of sequences of short-exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psfs are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
Chen, Peng; Li, Jinyan; Wong, Limsoon; Kuwahara, Hiroyuki; Huang, Jianhua Z; Gao, Xin
2013-08-01
Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built on an ensemble of these classifiers, working in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. Copyright © 2013 Wiley Periodicals, Inc.
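The core architecture described here, one k-nearest-neighbor classifier per physicochemical feature combined by majority voting, can be sketched generically. This is not the authors' IBk/AAindex pipeline; the function names, toy features, and choice of k are illustrative assumptions.

```python
import numpy as np

def knn_predict(train_x, train_y, query_x, k=5):
    """1-D k-nearest-neighbor vote for a single feature column."""
    preds = []
    for q in query_x:
        idx = np.argsort(np.abs(train_x - q))[:k]   # k closest training points
        preds.append(np.bincount(train_y[idx]).argmax())
    return np.array(preds)

def ensemble_vote(feature_cols, train_y, query_cols, k=5):
    """Majority vote over per-feature k-NN classifiers."""
    all_preds = np.array([knn_predict(tx, train_y, qx, k)
                          for tx, qx in zip(feature_cols, query_cols)])
    # column-wise majority over the individual classifiers
    return np.array([np.bincount(all_preds[:, j]).argmax()
                     for j in range(all_preds.shape[1])])
```

Each single-feature classifier is weak on its own; the vote lets informative features outvote noisy ones.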
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Developing improved durum wheat germplasm by altering the cytoplasmic genome
USDA-ARS?s Scientific Manuscript database
In eukaryotic organisms, nuclear and cytoplasmic genomes interact to drive cellular functions. These genomes have co-evolved to form specific nuclear-cytoplasmic interactions that are essential to the origin, success, and evolution of diploid and polyploid species. Hundreds of genetic diseases in h...
Ant Species Differences Determined by Epistasis between Brood and Worker Genomes
Linksvayer, Timothy A.
2007-01-01
Epistasis arising from physiological interactions between gene products often contributes to species differences, particularly those involved in reproductive isolation. In social organisms, phenotypes are influenced by the genotypes of multiple interacting individuals. In theory, social interactions can give rise to an additional type of epistasis between the genomes of social partners that can contribute to species differences. Using a full-factorial cross-fostering design with three species of closely related Temnothorax ants, I found that adult worker size was determined by an interaction between the genotypes of developing brood and care-giving workers, i.e. intergenomic epistasis. Such intergenomic social epistasis provides a strong signature of coevolution between social partners. These results demonstrate that just as physiologically interacting genes coevolve, diverge, and contribute to species differences, so do socially interacting genes. Coevolution and conflict between social partners, especially relatives such as parents and offspring, has long been recognized as having widespread evolutionary effects. This coevolutionary process may often result in coevolved socially-interacting gene complexes that contribute to species differences. PMID:17912371
Coevolving complex networks in the model of social interactions
NASA Astrophysics Data System (ADS)
Raducha, Tomasz; Gubiec, Tomasz
2017-04-01
We analyze Axelrod's model of social interactions on coevolving complex networks. We introduce four extensions with different mechanisms of edge rewiring. The models are intended to catch two kinds of interactions: preferential attachment, which can be observed in scientists' or actors' collaborations, and local rewiring, which can be observed in friendship formation in everyday relations. Numerical simulations show that the proposed dynamics can lead to a power-law distribution of nodes' degree and a high value of the clustering coefficient, while still retaining the small-world effect in three models. All models are characterized by two phase transitions of a different nature. In the case of local rewiring we obtain an order-disorder discontinuous phase transition even in the thermodynamic limit, while in the case of long-distance switching the discontinuity disappears in the thermodynamic limit, leaving one continuous phase transition. In addition, we discover a new and universal characteristic of the second transition point: an abrupt increase of the clustering coefficient, due to the formation of many small complete subgraphs inside the network.
Evolutionary prisoner's dilemma games coevolving on adaptive networks.
Lee, Hsuan-Wei; Malik, Nishant; Mucha, Peter J
2018-02-01
We study a model for switching strategies in the Prisoner's Dilemma game on adaptive networks of player pairings that coevolve as players attempt to maximize their return. We use a node-based strategy model wherein each player follows one strategy at a time (cooperate or defect) across all of its neighbors, changing that strategy and possibly changing partners in response to local changes in the network of player pairing and in the strategies used by connected partners. We compare and contrast numerical simulations with existing pair approximation differential equations for describing this system, as well as more accurate equations developed here using the framework of approximate master equations. We explore the parameter space of the model, demonstrating the relatively high accuracy of the approximate master equations for describing the system observations made from simulations. We study two variations of this partner-switching model to investigate the system evolution, predict stationary states, and compare the total utilities and other qualitative differences between these two model variants.
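A minimal node-based simulation in the spirit of this model (not the paper's approximate master equations) can be written in a few lines. The payoff values (T=b, R=1, P=S=0), the rewiring rule, the function name, and all parameter defaults are illustrative assumptions.

```python
import random

def simulate_pd(n=50, edges_per_node=2, b=1.5, steps=2000, p_rewire=0.3, seed=1):
    """Node-based Prisoner's Dilemma on an adaptive network.
    Each player is C (True) or D (False) toward all neighbors; a random
    player either imitates a better-scoring partner or rewires a link
    away from a defecting partner to a random non-neighbor."""
    rng = random.Random(seed)
    strategy = [rng.random() < 0.5 for _ in range(n)]
    edges = set()
    while len(edges) < n * edges_per_node:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            edges.add((min(i, j), max(i, j)))
    neigh = {i: set() for i in range(n)}
    for i, j in edges:
        neigh[i].add(j); neigh[j].add(i)

    def payoff(i):
        # T=b (defect vs cooperator), R=1 (mutual cooperation), P=S=0
        s = 0.0
        for j in neigh[i]:
            if strategy[j]:
                s += b if not strategy[i] else 1.0
        return s

    for _ in range(steps):
        i = rng.randrange(n)
        if not neigh[i]:
            continue
        j = rng.choice(sorted(neigh[i]))
        if rng.random() < p_rewire and not strategy[j]:
            # switch partners: drop the defector, link to a random non-neighbor
            candidates = [k for k in range(n) if k != i and k not in neigh[i]]
            if candidates:
                k = rng.choice(candidates)
                neigh[i].discard(j); neigh[j].discard(i)
                neigh[i].add(k); neigh[k].add(i)
        elif payoff(j) > payoff(i):
            strategy[i] = strategy[j]   # imitate the better-scoring partner
    return strategy, neigh
```

Note that rewiring conserves the total number of links, so the comparison with pair-approximation or master-equation predictions is at fixed mean degree.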
The coevolution of recognition and social behavior.
Smead, Rory; Forber, Patrick
2016-05-26
Recognition of behavioral types can facilitate the evolution of cooperation by enabling altruistic behavior to be directed at other cooperators and withheld from defectors. While much is known about the tendency for recognition to promote cooperation, relatively little is known about whether such a capacity can coevolve with the social behavior it supports. Here we use evolutionary game theory and multi-population dynamics to model the coevolution of social behavior and recognition. We show that conditional harming behavior enables the evolution and stability of social recognition, whereas conditional helping leads to a deterioration of recognition ability. Expanding the model to include a complex game where both helping and harming interactions are possible, we find that conditional harming behavior can stabilize recognition, and thereby lead to the evolution of conditional helping. Our model identifies a novel hypothesis for the evolution of cooperation: conditional harm may have coevolved with recognition first, thereby helping to establish the mechanisms necessary for the evolution of cooperation.
Topics in Statistical Calibration
2014-03-27
on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will...addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or...based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either
Nugent, Timothy; Jones, David T.
2010-01-01
Alpha-helical transmembrane proteins constitute roughly 30% of a typical genome and are involved in a wide variety of important biological processes including cell signalling, transport of membrane-impermeable molecules and cell recognition. Despite significant efforts to predict transmembrane protein topology, comparatively little attention has been directed toward developing a method to pack the helices together. Here, we present a novel approach to predict lipid exposure, residue contacts, helix-helix interactions and finally the optimal helical packing arrangement of transmembrane proteins. Using molecular dynamics data, we have trained and cross-validated a support vector machine (SVM) classifier to predict per residue lipid exposure with 69% accuracy. This information is combined with additional features to train a second SVM to predict residue contacts which are then used to determine helix-helix interaction with up to 65% accuracy under stringent cross-validation on a non-redundant test set. Our method is also able to discriminate native from decoy helical packing arrangements with up to 70% accuracy. Finally, we employ a force-directed algorithm to construct the optimal helical packing arrangement which demonstrates success for proteins containing up to 13 transmembrane helices. This software is freely available as source code from http://bioinf.cs.ucl.ac.uk/memsat/mempack/. PMID:20333233
NASA Astrophysics Data System (ADS)
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model demonstrates the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are described in the Caputo sense. The perturbation-iteration algorithm (PIA) and residual power series method (RPSM) are applied to solve this equation with success. The convergence analysis is also presented for both methods. Numerical results are given and then compared with the exact solutions. Comparison of the results reveals that both methods are competitive, powerful, reliable, simple to use, and ready to apply to a wide range of fractional partial differential equations.
Robust Speech Enhancement Using Two-Stage Filtered Minima Controlled Recursive Averaging
NASA Astrophysics Data System (ADS)
Ghourchian, Negar; Selouani, Sid-Ahmed; O'Shaughnessy, Douglas
In this paper we propose an algorithm for estimating noise in highly non-stationary noisy environments, which is a challenging problem in speech enhancement. This method is based on minima-controlled recursive averaging (MCRA) whereby an accurate, robust and efficient noise power spectrum estimation is demonstrated. We propose a two-stage technique to prevent the appearance of musical noise after enhancement. This algorithm filters the noisy speech to achieve a robust signal with minimum distortion in the first stage. Subsequently, it estimates the residual noise using MCRA and removes it with spectral subtraction. The proposed Filtered MCRA (FMCRA) performance is evaluated using objective tests on the Aurora database under various noisy environments. These measures indicate the higher output SNR and lower output residual noise and distortion.
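The overall two-stage shape, estimate a noise spectrum by recursive averaging and then remove it by spectral subtraction, can be sketched in a much-simplified form. This is not FMCRA: the sketch below seeds the noise estimate from the first few frames instead of tracking spectral minima, and the function name and all parameters are illustrative assumptions.

```python
import numpy as np

def spectral_subtract(sig, frame=256, alpha=0.95, noise_frames=8):
    """Magnitude spectral subtraction with a recursively averaged noise
    estimate: the noise spectrum is seeded from the first frames (assumed
    noise-only) and each frame's magnitude is reduced by it, floored at 0."""
    hop = frame // 2
    win = np.hanning(frame)
    out = np.zeros(len(sig))
    noise = None
    for start in range(0, len(sig) - frame + 1, hop):
        spec = np.fft.rfft(win * sig[start:start + frame])
        mag, phase = np.abs(spec), np.angle(spec)
        if noise is None:
            noise = mag.copy()                       # seed the estimate
        elif start < noise_frames * hop:
            noise = alpha * noise + (1 - alpha) * mag  # recursive averaging
        clean = np.maximum(mag - noise, 0.0)         # subtract, floor at zero
        out[start:start + frame] += np.fft.irfft(clean * np.exp(1j * phase))
    return out
```

On a noise-only input this removes most of the signal energy; the musical-noise artifacts that motivate the paper's pre-filtering first stage come from the isolated spectral peaks that survive the zero floor.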
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
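For context, the conventional affine projection update that the paper approximates can be sketched as follows; this sketch uses a direct regularized solve in place of the Gauss-Seidel iteration and omits the proposed weighted-error residue and learning-rate control. The function name and defaults are illustrative assumptions.

```python
import numpy as np

def affine_projection(x, d, taps=8, order=4, mu=0.5, delta=1e-4):
    """Standard affine projection (AP) adaptive filter:
      e = d_k - X^T w,  w <- w + mu * X (X^T X + delta I)^{-1} e,
    where X holds the last `order` input vectors of length `taps`."""
    n = len(x)
    w = np.zeros(taps)
    y = np.zeros(n)
    for k in range(taps + order - 1, n):
        X = np.column_stack([x[k - p - taps + 1:k - p + 1][::-1]
                             for p in range(order)])
        dk = d[k - order + 1:k + 1][::-1]
        e = dk - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e)
        y[k] = w @ x[k - taps + 1:k + 1][::-1]
    return w, y
```

The (X^T X + delta I)^{-1} solve is the step the paper's Gauss-Seidel approximation replaces to cut complexity in the hearing-aid setting.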
Assembly constraints drive co-evolution among ribosomal constituents.
Mallik, Saurav; Akashi, Hiroshi; Kundu, Sudip
2015-06-23
Ribosome biogenesis, a central and essential cellular process, occurs through sequential association and mutual co-folding of protein-RNA constituents in a well-defined assembly pathway. Here, we construct a network of co-evolving nucleotide/amino acid residues within the ribosome and demonstrate that assembly constraints are strong predictors of co-evolutionary patterns. Predictors of co-evolution include a wide spectrum of structural reconstitution events, such as cooperativity phenomenon, protein-induced rRNA reconstitutions, molecular packing of different rRNA domains, protein-rRNA recognition, etc. A correlation between folding rate of small globular proteins and their topological features is known. We have introduced an analogous topological characteristic for co-evolutionary network of ribosome, which allows us to differentiate between rRNA regions subjected to rapid reconstitutions from those hindered by kinetic traps. Furthermore, co-evolutionary patterns provide a biological basis for deleterious mutation sites and further allow prediction of potential antibiotic targeting sites. Understanding assembly pathways of multicomponent macromolecules remains a key challenge in biophysics. Our study provides a 'proof of concept' that directly relates co-evolution to biophysical interactions during multicomponent assembly and suggests predictive power to identify candidates for critical functional interactions as well as for assembly-blocking antibiotic target sites. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Technology-Enabled Crime, Policing and Security
ERIC Educational Resources Information Center
McQuade, Sam
2006-01-01
Crime, policing and security are enabled by and co-evolve with technologies that make them possible. As criminals compete with security and policing officials for technological advantage, perpetually complex crime, policing and security results in relatively confusing and therefore unmanageable threats to society. New, adaptive and ordinary crimes…
Biogeography of mutualistic fungi cultivated by leafcutter ants
USDA-ARS?s Scientific Manuscript database
Leafcutter ants propagate co-evolving fungi for food. The nearly 50 species of leafcutter ants (Atta, Acromyrmex) range from Argentina to the USA, with the greatest species diversity in southern South America. We elucidate the biogeography of fungi cultivated by leafcutter ants using DNA-sequence an...
Cache-site selection in Clark's Nutcracker (Nucifraga columbiana)
Teresa J. Lorenz; Kimberly A. Sullivan; Amanda V. Bakian; Carol A. Aubry
2011-01-01
Clark's Nutcracker (Nucifraga columbiana) is one of the most specialized scatter-hoarding birds, considered a seed disperser for four species of pines (Pinus spp.), as well as an obligate coevolved mutualist of Whitebark Pine (P. albicaulis). Cache-site selection has not been formally studied in Clark...
USDA-ARS?s Scientific Manuscript database
Bacterial mutualists can increase the biochemical capacity of animals. Highly co-evolved nutritional mutualists do this by synthesizing nutrients missing from the host's diet. Genomics tools have recently advanced the study of these partnerships. Here we examined the endosymbiont Xiphinematobacter (...
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
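The idea of folding a dwell-time-gradient penalty into the residual minimization can be illustrated with a generic projected-gradient sketch; this is not the authors' constrained model, and the influence matrix, penalty weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def dwell_time(A, e, lam=0.1, iters=5000):
    """Projected-gradient sketch of the combined optimization:
      minimize 0.5*||A t - e||^2 + (lam/2)*||D t||^2  subject to t >= 0,
    where A maps dwell time to material removal via the influence function,
    e is the surface error, and D is a first-difference operator penalizing
    dwell-time gradients (a stand-in for machine-dynamics limits)."""
    m = A.shape[1]
    D = (np.eye(m) - np.eye(m, k=1))[:-1]           # (m-1) x m differences
    t = np.zeros(m)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * lam)  # 1/Lipschitz bound
    for _ in range(iters):
        grad = A.T @ (A @ t - e) + lam * (D.T @ (D @ t))
        t = np.maximum(t - step * grad, 0.0)        # project onto t >= 0
    return t
```

Raising `lam` trades residual form error for a smoother dwell-time map, i.e. gentler commanded accelerations and decelerations.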
Lorenzo, J Ramiro; Alonso, Leonardo G; Sánchez, Ignacio E
2015-01-01
Asparagine residues in proteins undergo spontaneous deamidation, a post-translational modification that may act as a molecular clock for the regulation of protein function and turnover. Asparagine deamidation is modulated by protein local sequence, secondary structure and hydrogen bonding. We present NGOME, an algorithm able to predict non-enzymatic deamidation of internal asparagine residues in proteins in the absence of structural data, using sequence-based predictions of secondary structure and intrinsic disorder. Compared to previous algorithms, NGOME does not require three-dimensional structures yet yields better predictions than available sequence-only methods. Four case studies of specific proteins show how NGOME may help the user identify deamidation-prone asparagine residues, often related to protein gain of function, protein degradation or protein misfolding in pathological processes. A fifth case study applies NGOME at a proteomic scale and unveils a correlation between asparagine deamidation and protein degradation in yeast. NGOME is freely available as a webserver at the National EMBnet node Argentina, URL: http://www.embnet.qb.fcen.uba.ar/ in the subpage "Protein and nucleic acid structure and sequence analysis".
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-06-14
Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology of detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
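A much-simplified univariate variant of the BACON idea (grow a basic subset of clean observations outward from the median, then flag points far from the subset statistics) can be sketched as follows; the full BACON algorithm of Billor, Hadi, and Velleman is multivariate and uses Mahalanobis distances, and the parameters here are illustrative.

```python
import numpy as np

def bacon_outliers(r, m=None, t=3.0, max_iter=20):
    """Simplified univariate BACON-style outlier detection on residuals r:
    start from the m points nearest the median, then repeatedly redefine the
    basic subset as all points within t standard deviations of the subset
    mean, until it stabilizes. Returns indices flagged as outliers."""
    r = np.asarray(r, float)
    n = len(r)
    if m is None:
        m = max(4, n // 4)
    basic = np.argsort(np.abs(r - np.median(r)))[:m]   # initial clean subset
    for _ in range(max_iter):
        mu, sd = r[basic].mean(), r[basic].std(ddof=1)
        new = np.flatnonzero(np.abs(r - mu) <= t * sd)
        if len(new) == len(basic) and np.array_equal(np.sort(basic), new):
            break
        basic = new
    mask = np.ones(n, bool)
    mask[basic] = False
    return np.flatnonzero(mask)
```

Growing the subset outward keeps gross outliers from ever contaminating the mean and standard deviation, which is what distinguishes this from a one-shot 3-σ rule.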
InterProSurf: a web server for predicting interacting sites on protein surfaces
Negi, Surendra S.; Schein, Catherine H.; Oezguen, Numan; Power, Trevor D.; Braun, Werner
2009-01-01
A new web server, InterProSurf, predicts interacting amino acid residues in proteins that are most likely to interact with other proteins, given the 3D structures of subunits of a protein complex. The prediction method is based on solvent accessible surface area of residues in the isolated subunits, a propensity scale for interface residues and a clustering algorithm to identify surface regions with residues of high interface propensities. Here we illustrate the application of InterProSurf to determine which areas of Bacillus anthracis toxins and measles virus hemagglutinin protein interact with their respective cell surface receptors. The computationally predicted regions overlap with those regions previously identified as interface regions by sequence analysis and mutagenesis experiments. PMID:17933856
Phase-unwrapping algorithm by a rounding-least-squares approach
NASA Astrophysics Data System (ADS)
Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin
2014-02-01
A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and user-free, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. 
Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
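The fixed-point iteration with a constant feedback gain can be illustrated on a one-dimensional toy DVF; the alternating and spatially variant control schemes, and the clinical 3D setting, are beyond this sketch, and the function name and gain value are illustrative.

```python
import numpy as np

def invert_dvf(u, x, mu=1.0, iters=50):
    """Fixed-point DVF inversion with a constant feedback gain mu:
      r_k(x) = -u(x + v_k(x)) - v_k(x)   (inverse-consistency residual)
      v_{k+1}(x) = v_k(x) + mu * r_k(x)
    mu = 1 recovers the plain fixed-point iteration; mu < 1 damps it.
    u is the forward displacement sampled on the grid x."""
    v = np.zeros_like(x)
    for _ in range(iters):
        r = -np.interp(x + v, x, u) - v   # IC residual of current iterate
        v = v + mu * r
    return v
```

At convergence the IC residual vanishes, i.e. composing forward and inverse displacements returns (approximately) the identity map.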
A technique for accelerating the convergence of restarted GMRES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Manteuffel, T
2004-03-09
We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
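For reference, the standard restarted GMRES(m) cycle that the acceleration technique modifies can be sketched as follows (plain restarts, no acceleration between cycles); the function name and defaults are illustrative.

```python
import numpy as np

def gmres_restarted(A, b, m=10, tol=1e-8, max_restarts=100):
    """Restarted GMRES(m): build an m-dimensional Krylov basis by Arnoldi,
    solve the projected least-squares problem for the correction, then
    restart from the new residual."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:             # happy breakdown
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x
```

The cyclic alternation the authors describe shows up in the restart residuals r computed at the top of each cycle; their technique perturbs exactly this restart step.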
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals, and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following the Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suit used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and to measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets, and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting the unknown objects. We expect that the future pulsar-timing data can limit the unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
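The GCV-based selection of the smoothing level can be illustrated with a Whittaker-style penalized smoother, a discrete analogue of the cubic smoothing spline; this is not the Hutchinson-deHoog algorithm (which is O(n) and handles weights, redundant samples, and missing points), and the function name and candidate grid are illustrative.

```python
import numpy as np

def gcv_smooth(y, lambdas):
    """Whittaker-style smoother y_hat = (I + lam * D'D)^{-1} y with
    second-difference penalty matrix D, choosing lam by minimizing
      GCV(lam) = n * ||y - y_hat||^2 / (n - tr(S))^2,
    where S is the smoother matrix and tr(S) its effective d.o.f."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second differences
    best = None
    for lam in lambdas:
        S = np.linalg.inv(np.eye(n) + lam * D.T @ D)
        y_hat = S @ y
        gcv = n * np.sum((y - y_hat) ** 2) / (n - np.trace(S)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, y_hat)
    return best[1], best[2]
```

GCV picks the smoothing level without knowing the noise variance, which is what makes it attractive when channel noise characteristics differ and vary in time.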
Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.
Zachariah, S G; Sanders, J E; Turkiyyah, G M
1996-06-01
A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.
The Möbius domain wall fermion algorithm
NASA Astrophysics Data System (ADS)
Brower, Richard C.; Neff, Hartmut; Orginos, Kostas
2017-11-01
We present a review of the properties of generalized domain wall Fermions, based on a (real) Möbius transformation on the Wilson overlap kernel, discussing their algorithmic efficiency, the degree of explicit chiral violations measured by the residual mass (mres) and the Ward-Takahashi identities. The Möbius class interpolates between Shamir's domain wall operator and Boriçi's domain wall implementation of Neuberger's overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter (α) reduces chiral violations at finite fifth dimension (Ls) but yields exactly the same overlap action in the limit Ls → ∞. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling α(Ls), we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed Ls. We argue that the residual mass for a tuned Möbius algorithm with α = O(1/Ls^γ) for γ < 1 will eventually fall asymptotically as mres = O(1/Ls^(1+γ)) in the case of a 5D Hamiltonian without a spectral gap.
Least-dependent-component analysis based on mutual information
NASA Astrophysics Data System (ADS)
Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter
2004-12-01
We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k-nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
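The reliability checks above rest on estimating pairwise MI between output components. As an illustration of step (i), here is a minimal sketch that uses a crude histogram-based MI estimate in place of the paper's k-nearest-neighbor estimator; the synthetic data and the 16-bin choice are assumptions made for the demo:

```python
import numpy as np

def binned_mi(x, y, bins=16):
    """Histogram-based mutual information estimate in nats: a crude
    stand-in for the k-nearest-neighbor estimator used by MILCA."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
s = rng.normal(size=5000)                        # one "source"
independent = rng.normal(size=5000)              # unrelated component
mixed = 0.8 * s + 0.2 * rng.normal(size=5000)    # residual dependence on s
```

A separation is judged good when the pairwise MIs between outputs are close to zero, so `binned_mi(s, mixed)` flags a residual dependency that `binned_mi(s, independent)` does not.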
The NOAA-NASA CZCS Reanalysis Effort
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Conkright, Margarita E.; O'Reilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)
2001-01-01
Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
Loblolly pine (Pinus taeda L.) has co-evolved a high dependency on ectomycorrhizal (ECM) associations, most likely because its natural range includes soils of varying moisture that are P- and/or N-deficient. Because of its wide geographic distribution, we would expect its roots t...
ERIC Educational Resources Information Center
Collins, Steve; Ting, Hermia
2014-01-01
The profession of teaching is unique because of the extent to which a teacher becomes involved in the lives of their "clients". The level of care required to support students well can be intense, confusing, and overwhelming. Relationships co-evolve within an ever-changing process and care is considered an essential aspect of complex relationships…
Social Labs in Universities: Innovation and Impact in Medialab UGR
ERIC Educational Resources Information Center
Romero-Frías, Esteban; Robinson-García, Nicolás
2017-01-01
Social laboratories, defined as experimental spaces for co-creation, have recently become the main centers of innovation. Medialabs are experimental laboratories of technologies and communication media which have co-evolved along with the digital society into mediation laboratories of citizen experimentation, observing a confluence of both models.…
Fathers' and Mothers' Parenting Predicting and Responding to Adolescent Sexual Risk Behaviors
ERIC Educational Resources Information Center
Coley, Rebekah Levine; Votruba-Drzal, Elizabeth; Schindler, Holly S.
2009-01-01
Transactional models of problem behavior argue that less effective parenting and adolescent problem behaviors coevolve, exerting bidirectional influences. This article extends such models by analyzing growth trajectories of sexual risk behaviors and parenting processes among 3,206 adolescents (aged 13-18) and their residential parents. Within…
Analysis of co-evolving genes in campylobacter jejuni and C. coli
USDA-ARS?s Scientific Manuscript database
Background: The population structure of Campylobacter has been frequently studied by MLST, in which fragments of housekeeping genes are compared. We wished to determine whether the MLST genes used are representative of the complete genome. Methods: A set of 1029 core gene families (CGF) was identifie...
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in the statistical sense), depending on problem scale, so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
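The trust-region machinery above can be illustrated, under simplifying assumptions, by a single damped Gauss-Newton step. This dense sketch omits the block-angular/block-arrow sparsity exploitation that is the paper's actual contribution, and the linear test problem is invented:

```python
import numpy as np

def lm_step(J, r, mu=1e-8):
    """One damped Gauss-Newton (Levenberg-style, trust-region-like) step
    for a least-squares problem with residual vector r and Jacobian J:
    solve (J^T J + mu I) d = -J^T r. The sparse block structure the
    paper exploits is ignored in this dense sketch."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ r)

# For a linear model r(x) = J x - b, a single step from any x0 lands
# (numerically) on the least-squares solution when mu is tiny.
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x0 = np.zeros(5)
d = lm_step(J, J @ x0 - b)
```

In the nonlinear setting the step is accepted or rejected, and `mu` adjusted, according to how well the quadratic model predicted the actual residual decrease.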
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; LeMaster, Daniel A.
2012-06-01
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
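A minimal sketch of the scene-based idea follows, assuming an offset-only nonuniformity and enough scene variation that temporal statistics equalize across pixels; the published algorithm additionally handles gain drift, microgrid polarization channels, and radiometric preservation:

```python
import numpy as np

def offset_nuc(frames):
    """Offset-only scene-based nonuniformity correction sketch: with
    sufficient scene motion, each pixel's temporal mean should match the
    global level, so any deviation is attributed to fixed-pattern offset."""
    t_mean = frames.mean(axis=0)          # per-pixel temporal mean
    offset = t_mean - t_mean.mean()       # estimated fixed-pattern offset
    return frames - offset

# Synthetic demo: flat scenes whose level varies frame to frame, plus a
# fixed per-pixel offset pattern (all values are invented).
rng = np.random.default_rng(2)
fpn = rng.normal(scale=5.0, size=(8, 8))
levels = rng.uniform(50, 200, size=30)
frames = levels[:, None, None] + fpn[None, :, :]
corrected = offset_nuc(frames)
```

After correction each synthetic frame is spatially flat again, up to a global constant, which is why such methods must take care to preserve radiometry.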
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with protein size. The contour-buildup algorithm is an order of magnitude faster than the original Connolly algorithm.
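The "local" character of the contour-buildup principle starts from a pairwise test: which probe-expanded spheres intersect a given atom's sphere. Below is a sketch of just that first step; the multi-arc contour construction itself is omitted, and the three-carbon geometry is an invented example:

```python
import numpy as np

def probe_expanded_neighbors(centers, radii, i, probe=1.4):
    """Indices of atoms whose probe-expanded spheres intersect atom i's.
    Each such neighbor contributes one arc to the multi-arc contour built
    on atom i; contour construction is omitted from this sketch."""
    d = np.linalg.norm(centers - centers[i], axis=1)
    touching = d < (radii + probe) + (radii[i] + probe)
    touching[i] = False  # an atom is not its own neighbor
    return np.nonzero(touching)[0]

# Three carbon-like atoms (vdW radius 1.7 A) on a line, 3 A apart: once
# expanded by a 1.4 A water probe, even the outer pair intersects
# (6.0 < 2 * (1.7 + 1.4)), while without the probe only adjacent pairs do.
centers = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [6.0, 0.0, 0.0]])
radii = np.array([1.7, 1.7, 1.7])
```

Because only these local neighbors matter for each sphere, partial surfaces can be computed without touching the rest of the molecule.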
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Because it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking approach. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, it outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
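For reference, the TSVD baseline that the online tracker is compared against can be sketched in a few lines; the rank-1 synthetic data are an assumption made for the demo, and the online algorithm itself would instead update the subspace column by column with Grassmannian gradient steps:

```python
import numpy as np

def tsvd_denoise(D, rank):
    """Truncated-SVD low-rank approximation: the batch baseline the
    online subspace-tracking algorithm is compared against. The online
    method avoids the full SVD by processing one column at a time."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(3)
clean = np.outer(rng.normal(size=60), rng.normal(size=40))  # rank-1 "signal"
noisy = clean + 0.1 * rng.normal(size=clean.shape)          # add random noise
denoised = tsvd_denoise(noisy, rank=1)
```

Projecting onto the leading singular subspace discards most of the full-rank noise while retaining the low-rank signal, which is the property both the batch and online methods exploit.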
Prediction of interface residue based on the features of residue interaction network.
Jiao, Xiong; Ranganathan, Shoba
2017-11-07
Protein-protein interaction plays a crucial role in cellular biological processes. Interface prediction can improve our understanding of the molecular mechanisms of the related processes and functions. In this work, we propose a classification method to recognize interface residues based on the features of a weighted residue interaction network. The random forest algorithm is used for the prediction, with 16 network parameters and the B-factor acting as elements of the input feature vector. Compared with other similar work, the method is feasible and effective. The relative importance of these features is also analyzed to identify the key features for the prediction, and the biological meaning of the important features is explained. The results of this work can be used in related work on structure-function relationship analysis via a residue interaction network model. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to the system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in the computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. Comparison of the corrected system with the baseline system shows AO technology to be an effective way of correcting the dynamic residual aberrations in conformal optical design.
Beyond basins: φ,ψ preferences of a residue depend heavily on the φ,ψ values of its neighbors.
Hollingsworth, Scott A; Lewis, Matthew C; Karplus, P Andrew
2016-09-01
The Ramachandran plot distributions of nonglycine residues from experimentally determined structures are routinely described as grouping into one of six major basins: β, PII , α, αL , ξ and γ'. Recent work describing the most common conformations adopted by pairs of residues in folded proteins [i.e., (φ,ψ)2 -motifs] showed that commonly described major basins are not true single thermodynamic basins, but are composed of distinct subregions that are associated with various conformations of either the preceding or following neighbor residue. Here, as documentation of the extent to which the conformational preferences of a central residue are influenced by the conformations of its two neighbors, we present a set of φ,ψ-plots that are delimited simultaneously by the φ,ψ-angles of its neighboring residues on both sides. The level of influence seen here is typically greater than the influence associated with considering the identities of neighboring residues, implying that the use of this heretofore untapped information can improve the accuracy of structure prediction algorithms and low resolution protein structure refinement. © 2016 The Protein Society.
Cyclic coordinate descent: A robotics algorithm for protein loop closure.
Canutescu, Adrian A; Dunbrack, Roland L
2003-05-01
In protein structure prediction, it is often the case that a protein segment must be adjusted to connect two fixed segments. This occurs during loop structure prediction in homology modeling as well as in ab initio structure prediction. Several algorithms for this purpose are based on the inverse Jacobian of the distance constraints with respect to dihedral angle degrees of freedom. These algorithms are sometimes unstable and fail to converge. We present an algorithm developed originally for inverse kinematics applications in robotics. In robotics, an end effector in the form of a robot hand must reach for an object in space by altering adjustable joint angles and arm lengths. In loop prediction, dihedral angles must be adjusted to move the C-terminal residue of a segment to superimpose on a fixed anchor residue in the protein structure. The algorithm, referred to as cyclic coordinate descent or CCD, involves adjusting one dihedral angle at a time to minimize the sum of the squared distances between three backbone atoms of the moving C-terminal anchor and the corresponding atoms in the fixed C-terminal anchor. The result is an equation in one variable for the proposed change in each dihedral. The algorithm proceeds iteratively through all of the adjustable dihedral angles from the N-terminal to the C-terminal end of the loop. CCD is suitable as a component of loop prediction methods that generate large numbers of trial structures. It succeeds in closing loops in a large test set 99.79% of the time, and fails occasionally only for short, highly extended loops. It is very fast, closing loops of length 8 in 0.037 sec on average.
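The one-angle-at-a-time update is easiest to see in the planar robot-arm setting the algorithm comes from. Below is a sketch of CCD for a 2-D chain; the link lengths and target are invented, and protein-specific details (fixed bond geometry, superposition of three anchor atoms rather than a single end point) are omitted:

```python
import math

def ccd_2d(lengths, target, iters=100, tol=1e-6):
    """Cyclic coordinate descent for a planar chain: rotate one joint at
    a time so the end effector approaches the target, analogous to
    adjusting one backbone dihedral at a time in loop closure."""
    angles = [0.0] * len(lengths)

    def forward(angles):
        # positions of joint bases plus the end effector
        pts, a = [(0.0, 0.0)], 0.0
        for L, t in zip(lengths, angles):
            a += t
            x, y = pts[-1]
            pts.append((x + L * math.cos(a), y + L * math.sin(a)))
        return pts

    for _ in range(iters):
        # sweep joints from the distal end to the base, one at a time
        for j in reversed(range(len(lengths))):
            pts = forward(angles)
            jx, jy = pts[j]
            ex, ey = pts[-1]
            # rotate joint j so the joint->effector ray points at the target
            cur = math.atan2(ey - jy, ex - jx)
            want = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += want - cur
        ex, ey = forward(angles)[-1]
        if math.hypot(ex - target[0], ey - target[1]) < tol:
            break
    return angles, (ex, ey)

angles, end = ccd_2d([1.0, 1.0, 1.0], (1.5, 1.0))
```

Each joint update is a closed-form one-variable minimization, which is why the method is fast and never diverges, though it can need many sweeps for nearly fully extended chains, mirroring the short-extended-loop failures noted above.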
Emerald ash borer, black ash, and Native American basketmaking
Therese M. Poland; Marla R. Emery; Tina Ciaramitaro; Ed Pigeon; Angie Pigeon
2017-01-01
Native cultures coevolved with the forests of the Great Lakes region following the last ice age. Plentiful water, abundant game, and fertile soil supported fishing, hunting, and gathering, as well as subsistence agriculture. Lakes and tributaries facilitated transportation by canoe and trade among tribes. Native Americans developed a semi-nomadic lifestyle, moving...
Tree physiology and bark beetles
Michael G. Ryan; Gerard Sapes; Anna Sala; Sharon Hood
2015-01-01
Irruptive bark beetles usually co-occur with their co-evolved tree hosts at very low (endemic) population densities. However, recent droughts and higher temperatures have promoted widespread tree mortality with consequences for forest carbon, fire and ecosystem services (Kurz et al., 2008; Raffa et al., 2008; Jenkins et al., 2012). In this issue of New Phytologist,...
Network Search: A New Way of Seeing the Education Knowledge Domain
ERIC Educational Resources Information Center
McFarland, Daniel; Klopfer, Eric
2010-01-01
Background: The educational knowledge domain may be understood as a system composed of multiple, co-evolving networks that reflect the form and content of a cultural field. This paper describes the educational knowledge domain as having a community structure (form) based in relations of production (authoring) and consumption (referencing), and a…
ERIC Educational Resources Information Center
Kerpelman, Jennifer L.; Pittman, Joe F.; Cadely, Hans Saint-Eloi; Tuggle, Felicia J.; Harrell-Levy, Marinda K.; Adler-Baeder, Francesca M.
2012-01-01
Integration of adult attachment and psychosocial development theories suggests that adolescence is a time when capacities for romantic intimacy and identity formation are co-evolving. The current study addressed direct, indirect and moderated associations among identity and romantic attachment constructs with a diverse sample of 2178 middle…
Inflammatory phenotypes in the intestine of poultry: Not all inflammation is created equally
USDA-ARS?s Scientific Manuscript database
The intestinal tract harbors a diverse community of microbes that have co-evolved with the host immune system. Although many of these microbes execute functions that are critical for host physiology, the host immune system must control the microbial community so that the dynamics of this interdepen...
Honey bee gut dysbiosis: A novel context for disease ecology
USDA-ARS?s Scientific Manuscript database
A biofilm in the ileum of the honey bee has become a hot-spot of recent research. Highly host co-evolved, hindgut biofilm structure consists of six species that alter in relative abundance over the life of the adult worker, showing parallels with age-related senescence in Drosophila. Induced shifts ...
Ideology and Interaction: Debating Determinisms in Literacy Studies
ERIC Educational Resources Information Center
Collin, Ross; Street, Brian V.
2014-01-01
In this exchange, Street and Collin debate the merits of the interaction model of literacy that Collin outlined in a recent issue of Reading Research Quarterly. Built as a complement and a counter to Street's ideological model of literacy, Collin's interaction model defines literacies as technologies that coevolve with sociocultural…
Optimists' Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041)
ERIC Educational Resources Information Center
Burleson, Winslow; Lewis, Armanda
2016-01-01
This essay imagines the role that artificial intelligence innovations play in the integrated living, learning and research environments of 2041. Here, in 2041, in the context of increasingly complex wicked challenges, whose solutions by their very nature continue to evade even the most capable experts, society and technology have co-evolved to…
Spatial heterogeneity lowers rather than increases host-parasite specialization.
Hesse, E; Best, A; Boots, M; Hall, A R; Buckling, A
2015-09-01
Abiotic environmental heterogeneity can promote the evolution of diverse resource specialists, which in turn may increase the degree of host-parasite specialization. We coevolved Pseudomonas fluorescens and lytic phage ϕ2 in spatially structured populations, each consisting of two interconnected subpopulations evolving in the same or different nutrient media (homogeneous and heterogeneous environments, respectively). Counter to the normal expectation, host-parasite specialization was significantly lower in heterogeneous compared with homogeneous environments. This result could not be explained by dispersal homogenizing populations, as this would have resulted in the heterogeneous treatments having levels of specialization equal to or greater than that of the homogeneous environments. We argue that selection for costly generalists is greatest when the coevolving species are exposed to diverse environmental conditions and that this can provide an explanation for our results. A simple coevolutionary model of this process suggests that this can be a general mechanism by which environmental heterogeneity can reduce rather than increase host-parasite specialization. © 2015 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of the European Society for Evolutionary Biology.
Reciprocal selection causes a coevolutionary arms race between crossbills and lodgepole pine.
Benkman, Craig W; Parchman, Thomas L; Favis, Amanda; Siepielski, Adam M
2003-08-01
Few studies have shown both reciprocal selection and reciprocal adaptations for a coevolving system in the wild. The goal of our study was to determine whether the patterns of selection on Rocky Mountain lodgepole pine (Pinus contorta spp. latifolia) and red crossbills (Loxia curvirostra complex) were concordant with earlier published evidence of reciprocal adaptations in lodgepole pine and crossbills on isolated mountain ranges in the absence of red squirrels (Tamiasciurus hudsonicus). We found that selection (directional) by crossbills on lodgepole pine where Tamiasciurus are absent was divergent from the selection (directional) exerted by Tamiasciurus on lodgepole pine. This resulted in divergent selection between areas with and without Tamiasciurus that was congruent with the geographic patterns of cone variation. In the South Hills, Idaho, where Tamiasciurus are absent and red crossbills are thought to be coevolving with lodgepole pine, crossbills experienced stabilizing selection on bill size, with cone structure as the agent of selection. These results show that crossbills and lodgepole pine exhibit reciprocal adaptations in response to reciprocal selection, and they provide insight into the traits mediating and responding to selection in a coevolutionary arms race.
Plant-associated methylobacteria as co-evolved phytosymbionts: a hypothesis.
Kutschera, Ulrich
2007-03-01
Due to their wall-associated pectin metabolism, growing plant cells emit significant amounts of the one-carbon alcohol methanol. Pink-pigmented microbes of the genus Methylobacterium that colonize the surfaces of leaves (epiphytes) are capable of growth on this volatile C1-compound as sole source of carbon and energy. In this article the results of experiments with germ-free (gnotobiotic) sporophytes of angiosperms (sunflower, maize) and gametophytes of bryophytes (a moss and two liverwort species) are summarized. The data show that methylobacteria do not stimulate the growth of these angiosperms, but organ development in moss protonemata and in thalli of liverworts is considerably enhanced. Since methylobacteria produce and secrete cytokinins and auxin, a model of plant-microbe-interaction (symbiosis) is proposed in which the methanol-consuming bacteria are viewed as coevolved partners of the gametophyte that determine its growth, survival and reproduction (fitness). This symbiosis is restricted to the haploid cells of moisture-dependent "living fossil" plants; it does not apply to the diploid sporophytes of higher embryophytes, which are fully adapted to life on land and apparently produce sufficient amounts of endogenous phytohormones.
Coevolution between Male and Female Genitalia in the Drosophila melanogaster Species Subgroup
Yassin, Amir; Orgogozo, Virginie
2013-01-01
In contrast to male genitalia that typically exhibit patterns of rapid and divergent evolution among internally fertilizing animals, female genitalia have been less well studied and are generally thought to evolve slowly among closely-related species. As a result, few cases of male-female genital coevolution have been documented. In Drosophila, female copulatory structures have been claimed to be mostly invariant compared to male structures. Here, we re-examined male and female genitalia in the nine species of the D. melanogaster subgroup. We describe several new species-specific female genital structures that appear to coevolve with male genital structures, and provide evidence that the coevolving structures contact each other during copulation. Several female structures might be defensive shields against apparently harmful male structures, such as cercal teeth, phallic hooks and spines. Evidence for male-female morphological coevolution in Drosophila has previously been shown at the post-copulatory level (e.g., sperm length and sperm storage organ size), and our results provide support for male-female coevolution at the copulatory level. PMID:23451172
Supermassive black holes do not correlate with dark matter haloes of galaxies.
Kormendy, John; Bender, Ralf
2011-01-20
Supermassive black holes have been detected in all galaxies that contain bulge components when the galaxies observed were close enough that the searches were feasible. Together with the observation that bigger black holes live in bigger bulges, this has led to the belief that black-hole growth and bulge formation regulate each other. That is, black holes and bulges coevolve. Therefore, reports of a similar correlation between black holes and the dark matter haloes in which visible galaxies are embedded have profound implications. Dark matter is likely to be non-baryonic, so these reports suggest that unknown, exotic physics controls black-hole growth. Here we show, in part on the basis of recent measurements of bulgeless galaxies, that there is almost no correlation between dark matter and parameters that measure black holes unless the galaxy also contains a bulge. We conclude that black holes do not correlate directly with dark matter. They do not correlate with galaxy disks, either. Therefore, black holes coevolve only with bulges. This simplifies the puzzle of their coevolution by focusing attention on purely baryonic processes in the galaxy mergers that make bulges.
The challenges of modelling antibody repertoire dynamics in HIV infection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Shishi; Perelson, Alan S.
Antibody affinity maturation by somatic hypermutation of B-cell immunoglobulin variable region genes has been studied for decades in various model systems using well-defined antigens. While much is known about the molecular details of the process, our understanding of the selective forces that generate affinity maturation is less well developed, particularly in the case of a co-evolving pathogen such as HIV. Despite this gap in understanding, high-throughput antibody sequence data are increasingly being collected to investigate the evolutionary trajectories of antibody lineages in HIV-infected individuals. Here, we review what is known in controlled experimental systems about the mechanisms underlying antibody selection and compare this to the observed temporal patterns of antibody evolution in HIV infection. In addition, we describe how our current understanding of antibody selection mechanisms leaves questions about antibody dynamics in HIV infection unanswered. Without a mechanistic understanding of antibody selection in the context of a co-evolving viral population, modelling and analysis of antibody sequences in HIV-infected individuals will be limited in their interpretation and predictive ability.
NASA Astrophysics Data System (ADS)
Tanimoto, Jun; Sagara, Hirokji
2015-11-01
We built a new indirect reciprocity model based on binary image scores, in which an agent's strategy and norm co-evolve. The norm, meaning what behavior is evaluated as "good" or "bad," stipulates how the image scores of two agents playing a game are altered, and has been presumed to be fixed in most previous studies. Also, unlike former studies, our model allows an agent to play with an agent who has a different norm. Relaxing this restriction lowers the cooperation level relative to the case where an agent always plays with another holding the same norm. However, we observe that a rather large dilemma establishes robust cooperation compared with a smaller dilemma, since a norm that punishes so-called second-order free-riders is promoted. To encourage the evolution of norms able to punish second-order free-riders, a society needs a small number of defectors. This is elucidated by the fact that cases with action error are more cooperative than those without.
Plant-Associated Methylobacteria as Co-Evolved Phytosymbionts
2007-01-01
Due to their wall-associated pectin metabolism, growing plant cells emit significant amounts of the one-carbon alcohol methanol. Pink-pigmented microbes of the genus Methylobacterium that colonize the surfaces of leaves (epiphytes) are capable of growth on this volatile C1-compound as sole source of carbon and energy. In this article the results of experiments with germ-free (gnotobiotic) sporophytes of angiosperms (sunflower, maize) and gametophytes of bryophytes (a moss and two liverwort species) are summarized. The data show that methylobacteria do not stimulate the growth of these angiosperms, but organ development in moss protonemata and in thalli of liverworts is considerably enhanced. Since methylobacteria produce and secrete cytokinins and auxin, a model of plant-microbe-interaction (symbiosis) is proposed in which the methanol-consuming bacteria are viewed as coevolved partners of the gametophyte that determine its growth, survival and reproduction (fitness). This symbiosis is restricted to the haploid cells of moisture-dependent “living fossil” plants; it does not apply to the diploid sporophytes of higher embryophytes, which are fully adapted to life on land and apparently produce sufficient amounts of endogenous phytohormones. PMID:19516971
The challenges of modelling antibody repertoire dynamics in HIV infection
Luo, Shishi; Perelson, Alan S.
2015-07-20
Antibody affinity maturation by somatic hypermutation of B-cell immunoglobulin variable region genes has been studied for decades in various model systems using well-defined antigens. While much is known about the molecular details of the process, our understanding of the selective forces that generate affinity maturation is less well developed, particularly in the case of a co-evolving pathogen such as HIV. Despite this gap in understanding, high-throughput antibody sequence data are increasingly being collected to investigate the evolutionary trajectories of antibody lineages in HIV-infected individuals. Here, we review what is known in controlled experimental systems about the mechanisms underlying antibody selection and compare this to the observed temporal patterns of antibody evolution in HIV infection. In addition, we describe how our current understanding of antibody selection mechanisms leaves questions about antibody dynamics in HIV infection unanswered. Without a mechanistic understanding of antibody selection in the context of a co-evolving viral population, modelling and analysis of antibody sequences in HIV-infected individuals will be limited in their interpretation and predictive ability.
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is determined dynamically from the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it achieves the best balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is then coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
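The key property of such a scheme, that adding an entropy-coded residual back onto a lossy reconstruction makes the overall codec exactly reversible, can be sketched in a few lines. This toy version substitutes coarse quantization for the wavelet-fractal stage and zlib for the Huffman coder; those stand-ins are our own and not from the paper.

```python
import zlib
import numpy as np

def lossless_residual_encode(img, lossy_step=8):
    """Toy stand-in for the lossy (wavelet-fractal) stage: coarse quantization."""
    lossy = (img // lossy_step) * lossy_step                 # lossy approximation
    residual = img.astype(np.int16) - lossy.astype(np.int16)
    packed = zlib.compress(residual.tobytes())               # entropy-code residual
    return lossy, packed

def lossless_residual_decode(lossy, packed):
    residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16)
    return (lossy.astype(np.int16) + residual.reshape(lossy.shape)).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
lossy, packed = lossless_residual_encode(img)
recon = lossless_residual_decode(lossy, packed)
assert np.array_equal(recon, img)    # exact reconstruction -> "infinite PSNR"
```

The assertion holds for any input: the residual records exactly what the lossy stage discarded, so the sum reconstructs the original bit-for-bit.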
NOAA-NASA Coastal Zone Color Scanner reanalysis effort.
Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W
2002-03-20
Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms and (2) blending where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
Quadeer, Ahmed A.; Louie, Raymond H. Y.; Shekhar, Karthik; Chakraborty, Arup K.; Hsing, I-Ming
2014-01-01
ABSTRACT Chronic hepatitis C virus (HCV) infection is one of the leading causes of liver failure and liver cancer, affecting around 3% of the world's population. The extreme sequence variability of the virus resulting from error-prone replication has thwarted the discovery of a universal prophylactic vaccine. It is known that vigorous and multispecific cellular immune responses, involving both helper CD4+ and cytotoxic CD8+ T cells, are associated with the spontaneous clearance of acute HCV infection. Escape mutations in viral epitopes can, however, abrogate protective T-cell responses, leading to viral persistence and associated pathologies. Despite the propensity of the virus to mutate, there might still exist substitutions that incur a fitness cost. In this paper, we identify groups of coevolving residues within HCV nonstructural protein 3 (NS3) by analyzing diverse sequences of this protein using ideas from random matrix theory and associated methods. Our analyses indicate that one of these groups comprises a large percentage of residues for which HCV appears to resist multiple simultaneous substitutions. Targeting multiple residues in this group through vaccine-induced immune responses should either lead to viral recognition or elicit escape substitutions that compromise viral fitness. Our predictions are supported by published clinical data, which suggested that immune genotypes associated with spontaneous clearance of HCV preferentially recognized and targeted this vulnerable group of residues. Moreover, mapping the sites of this group onto the available protein structure provided insight into its functional significance. An epitope-based immunogen is proposed as an alternative to the NS3 epitopes in the peptide-based vaccine IC41. IMPORTANCE Despite much experimental work on HCV, a thorough statistical study of the HCV sequences for the purpose of immunogen design was missing in the literature. 
Such a study is vital to identify epistatic couplings among residues that can provide useful insights for designing a potent vaccine. In this work, ideas from random matrix theory were applied to characterize the statistics of substitutions within the diverse publicly available sequences of the genotype 1a HCV NS3 protein, leading to a group of sites for which HCV appears to resist simultaneous substitutions, possibly due to a deleterious effect on viral fitness. Our analysis leads to completely novel immunogen designs for HCV. In addition, the NS3 epitopes used in the recently proposed peptide-based vaccine IC41 were analyzed in the context of our framework. Our analysis predicts that alternative NS3 epitopes may be worth exploring as they might be more efficacious. PMID:24760894
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
RNA Bricks—a database of RNA 3D motifs and their interactions
Chojnowski, Grzegorz; Waleń, Tomasz; Bujnicki, Janusz M.
2014-01-01
The RNA Bricks database (http://iimcb.genesilico.pl/rnabricks), stores information about recurrent RNA 3D motifs and their interactions, found in experimentally determined RNA structures and in RNA–protein complexes. In contrast to other similar tools (RNA 3D Motif Atlas, RNA Frabase, Rloom) RNA motifs, i.e. ‘RNA bricks’ are presented in the molecular environment, in which they were determined, including RNA, protein, metal ions, water molecules and ligands. All nucleotide residues in RNA bricks are annotated with structural quality scores that describe real-space correlation coefficients with the electron density data (if available), backbone geometry and possible steric conflicts, which can be used to identify poorly modeled residues. The database is also equipped with an algorithm for 3D motif search and comparison. The algorithm compares spatial positions of backbone atoms of the user-provided query structure and of stored RNA motifs, without relying on sequence or secondary structure information. This enables the identification of local structural similarities among evolutionarily related and unrelated RNA molecules. Besides, the search utility enables searching ‘RNA bricks’ according to sequence similarity, and makes it possible to identify motifs with modified ribonucleotide residues at specific positions. PMID:24220091
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
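The core algebraic operation the abstract relies on, dividing a polynomial modulo a Gröbner basis to obtain quotients and a canonical remainder, can be illustrated with SymPy. The polynomials below are illustrative placeholders, not actual integrand numerators or cut equations from the paper.

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')

# Ideal generated by two polynomials (illustrative, not actual cut conditions)
g1 = x**2 + y - 1
g2 = x*y - 1
G = groebner([g1, g2], x, y, order='lex')

# Multivariate polynomial division of f modulo the Groebner basis:
f = x**3 + x**2*y + y**2
quotients, remainder = reduced(f, G.exprs, x, y, order='lex')

# The division identity: f = sum_i q_i * G_i + remainder
recon = expand(sum(q*b for q, b in zip(quotients, G.exprs)) + remainder)
assert recon == expand(f)
```

The remainder is the canonical representative of `f` modulo the ideal; in the integrand-reduction setting it plays the role of the residue parametrization at the corresponding cut.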
Cooperativity among Short Amyloid Stretches in Long Amyloidogenic Sequences
He, Zhisong; Shi, Xiaohe; Feng, Kaiyan; Ma, Buyong; Cai, Yu-Dong
2012-01-01
Amyloid fibrillar aggregates of polypeptides are associated with many neurodegenerative diseases. Short peptide segments in protein sequences may trigger aggregation. Identifying these stretches and examining their behavior in longer protein segments is critical for understanding these diseases and obtaining potential therapies. In this study, we combined machine learning and structure-based energy evaluation to examine and predict amyloidogenic segments. Our feature selection method discovered that windows consisting of long amino acid segments of ∼30 residues, instead of the commonly used short hexapeptides, provided the highest accuracy. Weighted contributions of amino acids at each position in a 27-residue window revealed three cooperative regions of short stretches, resembling the β-strand-turn-β-strand motif in Aβ-peptide amyloid and the β-solenoid structure of the HET-s(218–289) prion. Using an in-house energy evaluation algorithm, the interaction energy between two short stretches in a long segment is computed and incorporated as an additional feature. The algorithm successfully predicted and classified amyloid segments with an overall accuracy of 75%. Our study revealed that genome-wide amyloid segments are dependent not only on short high-propensity stretches, but also on nearby residues. PMID:22761773
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no more ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark data, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen’s kappa coefficient of 86.27%, performs better than all the other filtering methods.
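The residual-threshold idea behind such filters can be sketched in one dimension. This is only a sketch of the general approach, not the paper's method: linear interpolation stands in for thin plate splines, only the threshold changes between levels, and the seed cell size and thresholds are invented values.

```python
import numpy as np

def ground_filter_1d(x, z, seed_cell=10.0, thresholds=(0.3, 0.6, 1.0)):
    """Toy 1-D residual-based ground filter (x must be sorted ascending).

    Seed: the lowest point in each coarse cell is assumed to be ground.
    Then, per level, a surface is interpolated from the current ground
    points and points with a small residual above it are added to ground."""
    cells = np.floor(x / seed_cell).astype(int)
    ground = np.zeros(len(x), dtype=bool)
    for c in np.unique(cells):
        idx = np.where(cells == c)[0]
        ground[idx[np.argmin(z[idx])]] = True          # seed ground points
    for thresh in thresholds:
        surface = np.interp(x, x[ground], z[ground])   # surface from ground pts
        ground |= (z - surface) < thresh               # small residual -> ground
    return ground

x = np.linspace(0.0, 100.0, 101)
z = 0.01 * x                        # gently sloping true ground
z[[20, 50, 80]] += 5.0              # three elevated (non-ground) returns
ground = ground_filter_1d(x, z)
assert not ground[20] and not ground[50] and not ground[80]
```

Points far above the interpolated surface keep a large residual at every level and are never absorbed into the ground class.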
Simulation Analysis of Computer-Controlled Pressurization for Mixture Ratio Control
NASA Technical Reports Server (NTRS)
Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.
2005-01-01
A procedural code (C++) simulation was developed to investigate potentials for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady state simulation (pressure and flow transients in the pressurization systems resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte-Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
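A minimal discrete PID controller of the kind the abstract describes can be sketched as follows; the gains, setpoint, and measurement below are illustrative values, not from the study, and the inner pressure loop is only indicated in a comment.

```python
class PID:
    """Minimal discrete proportional-integral-differential controller."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Cascade structure as in the abstract: the outer loop turns mixture-ratio
# error into a target tank-pressure adjustment; an inner PID loop (not shown)
# would then drive the throttling valve to track that tank pressure.
mr_loop = PID(kp=2.0, ki=0.5, kd=0.0, setpoint=6.0)     # target O/F ratio (invented)
adjust = mr_loop.update(measured=5.8, dt=0.1)            # positive: raise oxidizer pressure
```

With the ratio reading below the setpoint, the controller output is positive, nudging the target tank pressure up.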
2013-08-16
…a blind assessment of this approach in the context of a novel, immunologically relevant antigen. The limited accuracy of the tested algorithms to predict the in vivo immune responses…overlapping peptides spanning the entire sequence are individually tested for antibody interacting residues. Conformational B cell epitopes, in contrast…
A New Secondary Structure Assignment Algorithm Using Cα Backbone Fragments
Cao, Chen; Wang, Guishen; Liu, An; Xu, Shutan; Wang, Lincong; Zou, Shuxue
2016-01-01
The assignment of secondary structure elements in proteins is a key step in the analysis of their structures and functions. We have developed an algorithm, SACF (secondary structure assignment based on Cα fragments), for secondary structure element (SSE) assignment based on the alignment of Cα backbone fragments with central poses derived by clustering known SSE fragments. The assignment algorithm consists of three steps: First, the outlier fragments on known SSEs are detected. Next, the remaining fragments are clustered to obtain the central fragments for each cluster. Finally, the central fragments are used as templates to make assignments. Following a large-scale comparison of 11 secondary structure assignment methods, SACF, KAKSI and PROSS are found to have similar agreement with DSSP, while PCASSO agrees with DSSP best. SACF and PCASSO show a preference for trimming residues in N- and C-cap regions, whereas KAKSI, P-SEA and SEGNO tend to add residues at the termini when the DSSP assignment is taken as the standard. Moreover, our algorithm is able to assign subtle helices (3₁₀-helix, π-helix and left-handed helix) and make uniform assignments, as well as to detect rare SSEs in β-sheets or long helices as outlier fragments from other programs. The structural uniformity should be useful for protein structure classification and prediction, while the outlier fragments underlie the structure–function relationship. PMID:26978354
Quality assessment of MEG-to-MRI coregistrations
NASA Astrophysics Data System (ADS)
Sonntag, Hermann; Haueisen, Jens; Maess, Burkhard
2018-04-01
For high precision in source reconstruction of magnetoencephalography (MEG) or electroencephalography data, high accuracy of the coregistration of sources and sensors is mandatory. Usually, the source space is derived from magnetic resonance imaging (MRI). In most cases, however, no quality assessment is reported for sensor-to-MRI coregistrations; when one is given, it is typically the root mean square (RMS) of point residuals. It has been shown, however, that the RMS of residuals does not correlate with coregistration errors. We suggest using the target registration error (TRE) as the criterion for the quality of sensor-to-MRI coregistrations. TRE measures the effect of uncertainty in coregistrations at all points of interest. In total, 5544 data sets with sensor-to-head and 128 head-to-MRI coregistrations, from a single MEG laboratory, were analyzed. An adaptive Metropolis algorithm was used to estimate the optimal coregistration and to sample the coregistration parameters (rotation and translation). We found an average TRE between 1.3 and 2.3 mm at the head surface. Further, we observed a mean absolute difference in coregistration parameters between the Metropolis and iterative closest point algorithms of (1.9 ± 15)° and (1.1 ± 9) mm. A paired-sample t-test indicated a significant improvement in goal function minimization by using the Metropolis algorithm. The sampled parameters allowed computation of the TRE on the entire grid of the MRI volume. Hence, we recommend the Metropolis algorithm for head-to-MRI coregistrations.
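TRE is simply the distance, at each point of interest, between where two rigid transforms (the true and the estimated coregistration) map that point. A small sketch with invented transforms and points makes the contrast with point-residual RMS concrete.

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z axis by `deg` degrees."""
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def tre(points, R_true, t_true, R_est, t_est):
    """Target registration error at each point of interest: the distance
    between the images of the point under the true and estimated transforms."""
    return np.linalg.norm((points @ R_true.T + t_true)
                          - (points @ R_est.T + t_est), axis=1)

# Points of interest, e.g. source-space locations (metres; values invented)
pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.0, 0.08, 0.0]])
R_true, t_true = rot_z(10.0), np.array([0.001, 0.0, 0.0])
R_est, t_est = rot_z(10.5), np.array([0.0012, 0.0, 0.0])
errors = tre(pts, R_true, t_true, R_est, t_est)
# a small rotation error contributes nothing at the origin but grows with
# distance from the rotation centre -- which point-residual RMS cannot show
```

This is why a fit with small point residuals can still mislocalize sources far from the fitted landmarks.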
Full waveform inversion in the frequency domain using classified time-domain residual wavefields
NASA Astrophysics Data System (ADS)
Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan
2017-04-01
We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain according to the order of their absolute amplitudes and then separate them into several groups. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy at the lower frequency bands, whereas the amplitude spectrum obtained with our strategy is regularized by the separation process, meaning that the low-frequency components are emphasized. Our method therefore helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm performs better than conventional gradient-based waveform inversion in the frequency domain, especially for the deeper parts of the velocity model.
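The separation step described above, ranking time-domain residual samples by absolute amplitude, splitting them into groups, and Fourier-transforming each group, can be sketched as follows. The group count and the random test trace are our own illustrative choices, not the paper's.

```python
import numpy as np

def split_residual_by_amplitude(residual, n_groups):
    """Partition a time-domain residual trace into groups by absolute-amplitude
    rank, then Fourier-transform each group separately."""
    order = np.argsort(np.abs(residual))[::-1]       # largest amplitudes first
    groups = np.zeros((n_groups, len(residual)))
    for g, idx in enumerate(np.array_split(order, n_groups)):
        groups[g, idx] = residual[idx]               # keep only this rank band
    spectra = np.fft.rfft(groups, axis=1)            # frequency-domain residuals
    return groups, spectra

rng = np.random.default_rng(2)
res = rng.standard_normal(256)                       # stand-in residual trace
groups, spectra = split_residual_by_amplitude(res, n_groups=4)
assert np.allclose(groups.sum(axis=0), res)          # the groups partition the trace
```

Because each sample lands in exactly one amplitude band, the groups sum back to the original residual; each band then contributes its own spectrum to the gradient computation.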
Karaboga, D; Aslan, S
2016-04-27
The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques.
Ando, S; Sekine, S; Mita, M; Katsuo, S
1989-12-15
An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed based on residue arithmetic. The proposed system is capable of processing all elements of the matrices in parallel by utilizing the information retrieving ability of optical Fourier processors. The employment of OFFs enables bidirectional data flow, leading to a simpler architecture, and the contribution of residue-to-decimal (or residue-to-binary) conversion to the operation time can be largely reduced by processing all elements in parallel. The calculated characteristics of the operation time suggest a promising use of the system in real-time 2-D linear transforms.
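The arithmetic idea, carrying out the matrix product independently in each modulus channel and converting back to decimal via the Chinese remainder theorem, can be shown in ordinary software; the moduli below are illustrative, and this electronic sketch of course says nothing about the optical implementation itself.

```python
from math import prod

MODULI = (239, 241, 251)            # pairwise coprime moduli (illustrative)

def to_residues(a):
    """Decimal-to-residue conversion."""
    return tuple(a % m for m in MODULI)

def crt(residues):
    """Residue-to-decimal conversion via the Chinese remainder theorem --
    the step whose cost the parallel design amortizes."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse (Python 3.8+)
    return x % M

def matmul_residue(A, B):
    """Matrix product computed channel-by-channel in residue arithmetic.
    Every element of every modulus channel is independent of the others,
    which is what permits fully parallel evaluation."""
    n, k, p = len(A), len(B), len(B[0])
    out = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = tuple(sum((A[i][t] % m) * (B[t][j] % m) for t in range(k)) % m
                        for m in MODULI)
            out[i][j] = crt(acc)
    return out

assert matmul_residue([[3, 5], [2, 7]], [[1, 4], [6, 8]]) == [[33, 52], [44, 64]]
```

Results are exact as long as every entry of the true product stays below the product of the moduli (about 1.4 × 10⁷ here).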
Scheduled Relaxation Jacobi method: Improvements and applications
NASA Astrophysics Data System (ADS)
Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.
2016-09-01
Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever-growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2^15 points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high-resolution cases, close to 1000. Most of the success in finding SRJ optimal schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
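The mechanism behind SRJ is weighted-Jacobi sweeps whose relaxation factor follows a prescribed schedule, with over-relaxed sweeps attacking smooth error modes and under-relaxed sweeps damping the oscillatory modes they amplify. The sketch below applies a hand-picked (not optimal) schedule to a 1-D Poisson problem; the schedule values are ours, not an SRJ scheme from the paper.

```python
import numpy as np

def srj_sweeps(u, f, h, omegas):
    """Weighted-Jacobi sweeps for u'' = f with a schedule of relaxation
    factors (the omegas here are illustrative, not an optimal SRJ scheme)."""
    for omega in omegas:
        u_new = u.copy()
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])  # Jacobi update
        u = u + omega * (u_new - u)                             # relaxed step
    return u

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)               # smooth right-hand side
u = np.zeros(n)                     # zero Dirichlet boundary values

def residual_norm(u):
    r = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return np.max(np.abs(r))

# over-relaxed sweeps first, then progressively stronger damping, repeated
schedule = [1.9, 1.9, 1.3, 0.9, 0.7, 0.5] * 10
r0 = residual_norm(u)
u = srj_sweeps(u, f, h, schedule)
assert residual_norm(u) < r0        # the cycle drives the residual down
```

Individually, the ω = 1.9 sweeps are unstable for the highest-frequency modes; it is only the product of amplification factors over the whole schedule that is contractive, which is exactly the property the SRJ parameter optimization targets.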
Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U
2014-02-06
In this work the Successive Projection Algorithm (SPA) is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified in order to select intervals for use in trilinear PLS. The ability of the proposed algorithm, namely iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, the determination of four fluorophores (l-phenylalanine; l-3,4-dihydroxyphenylalanine; 1,4-dihydroxybenzene and l-tryptophan) was conducted with excitation-emission data matrices. In the second experimental system, quantitation of ofloxacin was performed in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high performance liquid chromatography with a UV-vis diode array detector. For comparison purposes, a GA algorithm coupled with N-PLS/RBL was also used in this work. In most of the studied cases iSPA-N-PLS proved to be a promising tool for selection of variables in second-order calibration, generating models with smaller RMSEP when compared to both the global model using all of the sensors in two dimensions and GA-N-PLS/RBL. Copyright © 2013 Elsevier B.V. All rights reserved.
Parsons, T.; Blakely, R.J.; Brocher, T.M.
2001-01-01
The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
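The velocity-to-density step the abstract mentions is the standard Gardner relation, ρ = α·Vp^β with the usual coefficients α = 0.31 and β = 0.25 for Vp in m/s and ρ in g/cm³; how the resulting gravity residual is weighted back into the velocity model is specific to the paper and only indicated in a comment here.

```python
def gardner_density(vp_ms, alpha=0.31, beta=0.25):
    """Gardner's rule: bulk density (g/cm^3) from P-wave velocity (m/s)."""
    return alpha * vp_ms ** beta

# In the iteration described above, each cell of the velocity model is mapped
# to density this way, the model's gravity anomaly is computed from those
# densities, and the residual against observed gravity then drives weighted
# adjustments to the velocity-depth gradient before the next traveltime pass.
rho = gardner_density(3000.0)       # roughly 2.29 g/cm^3 for a 3 km/s rock
```

Because the relation is monotonic, gravity residuals translate directly into a signed velocity adjustment, which is what lets the two data types be satisfied sequentially.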
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
The Mobius domain wall fermion algorithm
Brower, Richard C.; Neff, Harmut; Orginos, Kostas
2017-07-22
We present a review of the properties of generalized domain wall fermions, based on a (real) Möbius transformation on the Wilson overlap kernel, discussing their algorithmic efficiency, the degree of explicit chiral violations measured by the residual mass (m_res) and the Ward–Takahashi identities. The Möbius class interpolates between Shamir’s domain wall operator and Boriçi’s domain wall implementation of Neuberger’s overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter (α) reduces chiral violations at finite fifth dimension (L_s) but yields exactly the same overlap action in the limit L_s → ∞. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling α(L_s), we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed L_s. Here, we argue that the residual mass for a tuned Möbius algorithm with α = O(1/L_s^γ) for γ < 1 will eventually fall asymptotically as m_res = O(1/L_s^(1+γ)) in the case of a 5D Hamiltonian without a spectral gap.
Jeon, Namju; Lee, Hyeongcheol
2016-12-12
An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent-drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, the correlation between the sensor signals and the residuals of the vehicle dynamics model is analyzed to detect and isolate faults in the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose faults of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
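The residual-plus-threshold logic common to both diagnosis levels can be sketched generically; the signals, fault injection, and threshold below are synthetic illustrations, not the paper's motor model or parity equations.

```python
import numpy as np

def parity_residual(y_sensor, y_model, threshold):
    """Parity-style check: the residual between a measurement and the model
    prediction, with a fault flag raised wherever it leaves the expected band."""
    r = y_sensor - y_model
    return r, np.abs(r) > threshold

t = np.linspace(0.0, 1.0, 100)
y_model = np.sin(2.0 * np.pi * t)              # model-predicted current (synthetic)
y_sensor = y_model + 0.01 * np.cos(5.0 * t)    # healthy sensor: small residual
y_sensor[60:] += 0.5                           # inject an offset fault at t = 0.6
r, flag = parity_residual(y_sensor, y_model, threshold=0.1)
assert not flag[:60].any() and flag[60:].all()
```

The threshold must sit above the healthy residual band but below the smallest fault to be detected; combining such flags across the dynamics-level and motor-level residuals is what the integrated strategy does.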
Elasto-limited plastic analysis of structures for probabilistic conditions
NASA Astrophysics Data System (ADS)
Movahedi Rad, M.
2018-06-01
By applying plastic analysis and design methods, significant savings in material can be obtained. As a result of this benefit, however, excessive plastic deformations and large residual displacements might develop, which in turn might lead to unserviceability and collapse of the structure. In this study, for the deterministic problem the residual deformation of structures is limited by imposing a constraint on the complementary strain energy of the residual forces. For the probabilistic problem, the constraint on the complementary strain energy of the residual forces is given randomly and the critical stresses are updated during the iterations. Limit curves are presented for the plastic limit load factors. The results show that these constraints have significant effects on the load factors. The formulations of the deterministic and probabilistic problems lead to mathematical programming problems, which are solved using a nonlinear algorithm.
Hybrid genetic algorithm-neural network: feature extraction for unpreprocessed microarray data.
Tong, Dong Ling; Schierz, Amanda C
2011-09-01
Suitable techniques for microarray analysis have been widely researched, particularly for the study of marker genes expressed in a specific type of cancer. Most of the machine learning methods that have been applied to significant gene selection focus on the classification ability rather than the selection ability of the method. These methods also require the microarray data to be preprocessed before analysis takes place. The objective of this study is to develop a hybrid genetic algorithm-neural network (GANN) model that emphasises feature selection and can operate on unpreprocessed microarray data. The GANN is a hybrid model in which the fitness value of the genetic algorithm (GA) is based upon the number of samples correctly labelled by a standard feedforward artificial neural network (ANN). The model is evaluated using two benchmark microarray datasets with different array platforms and differing numbers of classes (a 2-class oligonucleotide microarray dataset for acute leukaemia and a 4-class complementary DNA (cDNA) microarray dataset for SRBCTs (small round blue cell tumours)). The underlying concept of the GANN algorithm is to select highly informative genes by co-evolving both the GA fitness function and the ANN weights at the same time. The novel GANN selected approximately 50% of the same genes as the original studies, which may indicate that these common genes are more biologically significant than other genes in the datasets. The remaining 50% of the significant genes identified were used to build predictive models, and for both datasets the models based on the set of genes extracted by the GANN method produced more accurate results. The results also suggest that the GANN method not only can detect genes that are exclusively associated with a single cancer type but can also explore genes that are differentially expressed in multiple cancer types.
The results show that the GANN model has successfully extracted statistically significant genes from the unpreprocessed microarray data as well as extracting known biologically significant genes. We also show that assessing the biological significance of genes based on classification accuracy may be misleading, and though the GANN's set of extra genes proves to be more statistically significant than those selected by other methods, a biological assessment of these genes is highly recommended to confirm their functionality. Copyright © 2011 Elsevier B.V. All rights reserved.
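The core GANN idea, a genetic algorithm whose fitness is classification accuracy on the selected genes, can be sketched in a few lines. This is an illustration only, not the paper's method: a nearest-centroid classifier stands in for the feedforward ANN, the data are synthetic, and every number (population size, mutation rate, the five "informative" genes) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 40 samples x 100 genes, two classes.
# Genes 0-4 are made informative by a class-dependent shift.
X = rng.normal(size=(40, 100))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0

def fitness(mask):
    """Fraction of samples correctly labelled by a nearest-centroid
    classifier restricted to the genes selected by `mask`."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Tiny GA over binary gene masks: tournament selection + bit-flip mutation.
pop = rng.random((30, 100)) < 0.1
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), size=2)
        parent = pop[i] if scores[i] >= scores[j] else pop[j]
        child = parent.copy()
        flip = rng.random(100) < 0.02   # mutation
        child[flip] = ~child[flip]
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected genes:", np.flatnonzero(best)[:10], "accuracy:", fitness(best))
```

In the actual GANN the ANN weights co-evolve with the selection, which this sketch deliberately omits.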
Dysphagia in Duchenne muscular dystrophy: practical recommendations to guide management.
Toussaint, Michel; Davidson, Zoe; Bouvoie, Veronique; Evenepoel, Nathalie; Haan, Jurn; Soudon, Philippe
2016-10-01
Duchenne muscular dystrophy (DMD) is a rapidly progressive neuromuscular disorder causing weakness of the skeletal, respiratory, cardiac and oropharyngeal muscles, with up to one third of young men reporting difficulty swallowing (dysphagia). Recent studies on dysphagia in DMD clarify the pathophysiology of swallowing disorders and offer new tools for its assessment, but little guidance is available for its management. This paper aims to provide a step-by-step algorithm to facilitate clinical decisions regarding dysphagia management in this patient population. The algorithm is based on 30 years of clinical experience with DMD in a specialised Centre for Neuromuscular Disorders (Inkendaal Rehabilitation Hospital, Belgium) and is supported by literature where available. Dysphagia can worsen the condition of ageing patients with DMD. Apart from the difficulties of chewing and oral fragmentation of the food bolus, dysphagia is chiefly a consequence of an impairment in the pharyngeal phase of swallowing. In contrast with central neurologic disorders, dysphagia in DMD accompanies solid rather than liquid intake. Symptoms of dysphagia may not be clinically evident; however, laryngeal food penetration, accumulation of food residue in the pharynx and/or true laryngeal food aspiration may occur. The prevalence of these issues in DMD is likely underestimated. There is little guidance available for clinicians to manage dysphagia and improve feeding for young men with DMD. This report aims to provide a clinical algorithm to facilitate the diagnosis of dysphagia, to identify the symptoms and to propose practical recommendations to treat dysphagia in the adult DMD population. Implications for Rehabilitation: Little guidance is available for the management of dysphagia in Duchenne dystrophy. Food can penetrate the vestibule, accumulate as residue or cause aspiration. We propose recommendations and an algorithm to guide management of dysphagia.
Penetration/residue accumulation: prohibit solid food and promote intake of fluids. Aspiration: if cough augmentation techniques are ineffective, consider tracheostomy.
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. 
Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Das, Shyama; Idicula, Sumam Mary
2011-01-01
The goal of biclustering a gene expression data matrix is to find a submatrix such that the genes in the submatrix show highly correlated activities across all conditions in the submatrix. A measure called the mean squared residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within the submatrix. The MSR difference is the incremental increase in MSR when a gene or condition is added to the bicluster. In this chapter, three biclustering algorithms using an MSR threshold (MSRT) and an MSR difference threshold (MSRDT) are implemented and compared. All these methods use seeds generated by the K-Means clustering algorithm. These seeds are then enlarged by adding more genes and conditions. The first algorithm makes use of the MSRT alone. The second and third algorithms make use of both the MSRT and the newly introduced concept of the MSRDT, yielding highly coherent biclusters. In the third algorithm, a different method is used to calculate the MSRDT. The results obtained on benchmark datasets show that these algorithms outperform many metaheuristic algorithms.
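The MSR measure itself (due to Cheng and Church) is short enough to state in code. A minimal sketch, with a synthetic bicluster rather than the chapter's benchmark data: a perfectly additive submatrix (row effect plus column effect) has MSR zero, and noise raises it.

```python
import numpy as np

def mean_squared_residue(sub):
    """Cheng-Church mean squared residue of a submatrix:
    low MSR means rows and columns fluctuate coherently."""
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    all_mean = sub.mean()
    residue = sub - row_mean - col_mean + all_mean
    return (residue ** 2).mean()

# A perfectly additive bicluster (row effect + column effect) has MSR == 0.
rows = np.array([[0.0], [1.0], [5.0]])
cols = np.array([[0.0, 2.0, 3.0, 7.0]])
coherent = rows + cols
noisy = coherent + np.random.default_rng(1).normal(scale=0.5, size=coherent.shape)

print(mean_squared_residue(coherent))  # 0.0 (up to float error)
print(mean_squared_residue(noisy))     # > 0
```

The MSR difference of the chapter is then just `mean_squared_residue` of the enlarged submatrix minus that of the current one.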
TEACH: An Ethogram-Based Method to Observe and Record Teaching Behavior
ERIC Educational Resources Information Center
Kline, Michelle Ann
2017-01-01
Teaching has attracted growing research attention in studies of human and animal behavior as a crucial behavior that coevolved with human cultural capacities. However, the synthesis of data on teaching across species and across human populations has proven elusive because researchers use a variety of definitions and methods to approach the topic.…
The Lecture as a Transmedial Pedagogical Form: A Historical Analysis
ERIC Educational Resources Information Center
Friesen, Norm
2011-01-01
The lecture has been much maligned as a pedagogical form, yet it persists and even flourishes today in the form of the podcast, the TED talk, and the "smart" lecture hall. This article examines the lecture as a pedagogical genre, as "a site where differences between media are negotiated" (Franzel) as these media coevolve. This examination shows…
Douglas, Angela E
2017-12-14
In this issue of Cell, Salem et al. demonstrate a remarkable instance of herbivory dependent on a co-evolved mutualism with specialized bacteria. Despite having a tiny genome and limited metabolic repertoire, the bacteria in Cassida beetles produce pectinases predicted to mediate degradation of plant cell walls in the insect diet. Copyright © 2017 Elsevier Inc. All rights reserved.
Vanessa Lopez; Paul F. Rugman-Jones; Tom W. Coleman; Richard Stouthamer; Mark Hoddle
2015-01-01
California's oak woodlands are threatened by the recent introduction of goldspotted oak borer (Agrilus auroguttatus). This invasive wood-borer is indigenous to mountain ranges in southern Arizona, where its low population densities may be due to the presence of co-evolved, host-specific natural enemies. Reuniting A. auroguttatus...
The political ecology of forest health in the redwood region
Chris Lee; Yana Valachovic; Dan Stark
2017-01-01
Imported forest pests have changed North American forests and caused staggering monetary losses in the centuries since the country was founded. Since most problem-causing non-native pests are innocuous in their home ranges, where they have coevolved with their host trees, experts cannot predict which pathogens or insects will have lethal effect on other continents....
Root disease and exotic ecosystems: implications for long-term site productivity
W.J. Otrosina; M. Garbelotto
1998-01-01
Root disease fungi, particularly root-rotting Basidiomycetes, are key drivers of forest ecosystems. These fungi have co-evolved with their hosts in various forest ecosystems and are in various states of equilibrium with them. Management activities and various land uses have taken place in recent times that have dramatically altered edaphic and environmental conditions...
USDA-ARS?s Scientific Manuscript database
The rust virulence gene is co-evolving with the resistance gene in sunflower, leading to the emergence of new physiologic pathotypes. This presents a continuous threat to the sunflower crop necessitating the development of resistant sunflower hybrids providing a more efficient, durable, and environm...
2012-09-30
...different livelihood strategies? How have community resilience and household vulnerability changed in response to engineered structures, shifting... impacts of environmental change by strengthening the factors that contribute to community resilience. IMPACT/APPLICATIONS: Our research will...
Rajathei, David Mary; Preethi, Jayakumar; Singh, Hemant K; Rajan, Koilmani Emmanuvel
2014-08-01
Tryptophan hydroxylase (TPH) catalyses the conversion of l-tryptophan into 5-hydroxy-l-tryptophan, the first and rate-limiting step of serotonin (5-HT) biosynthesis. Earlier, we found that TPH2 was up-regulated in the hippocampus of postnatal rats after oral treatment with Bacopa monniera leaf extract containing the active compound bacosides. However, knowledge about the interaction between bacosides and TPH is limited. In this study, we take advantage of an in silico approach to understand the bacoside-TPH complex using three different docking algorithms: HexDock, PatchDock and AutoDock. All three algorithms showed that bacosides A and A3 fit well into the cavity containing the active site. Further, our analysis revealed that the major active compounds bacoside A3 and A interact with different residues of TPH through hydrogen bonds. Interestingly, Tyr235, Thr265 and Glu317 are the key residues among them, although none of them lie in either the tryptophan or the BH4 binding region. It is nevertheless noteworthy that Tyr235 is a catalytically sensitive residue, Thr265 is present in the flexible loop region, and Glu317 is known to interact with Fe. Interactions with these residues may critically regulate TPH function and thus serotonin synthesis. Our study suggests that the interaction of bacosides (A3/A) with TPH might up-regulate its activity and elevate 5-HT biosynthesis, thereby enhancing learning and memory formation.
Peptide docking of HIV-1 p24 with single chain fragment variable (scFv) by CDOCKER algorithm
NASA Astrophysics Data System (ADS)
Karim, Hana Atiqah Abdul; Tayapiwatana, Chatchai; Nimmanpipug, Piyarat; Zain, Sharifuddin M.; Rahman, Noorsaadah Abdul; Lee, Vannajan Sanghiran
2014-10-01
In search of the residues that might be involved in the binding interaction between the HIV-1 p24 capsid protein fragment (MET68 - PRO90) and the single chain fragment variable (scFv) of FAB23.5, a modern computational chemistry approach has been applied. The p24 fragment was initially extracted from the 1AFV structure, which consists of both the light (VL) and heavy (VH) chains of FAB23.5 as well as the HIV-1 capsid protein. The p24 (antigen) fragment was then docked back into the pocket of the protein receptor (antibody) using the CDOCKER algorithm. CDOCKER scoring produced 15 possible docked poses with various ligand positions, interaction energies and binding energies. The best docked pose, the one that best reproduces the original antigen position, was determined and subjected to in situ minimization to obtain per-residue interaction energies and to observe the hydrogen-bond interactions in the protein-peptide complex. Based on the results, the residues in the complex with markedly lower interaction energies within the 5 Å vicinity of the peptide are from the heavy chain (VH: TYR105) and the light chain (VL: ASN31, TYR32 and GLU97). These residues play vital roles in the binding mechanism of the antibody-antigen (Ab-Ag) complex of p24 with FAB23.5.
A biconjugate gradient type algorithm on massively parallel architectures
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Hochbruck, Marlis
1991-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation of QMR is presented, based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products except for one can be computed in parallel at the end of each block, unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
Spectral multigrid methods for the solution of homogeneous turbulence problems
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.
1987-01-01
New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.
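The smoother at the heart of such multigrid schemes can be illustrated independently of the spectral setting. A minimal sketch, assuming nothing from the paper itself: stationary damped Richardson iteration on the 1D Poisson model problem, which damps high-frequency error quickly while low-frequency error decays slowly, the very gap that coarse-grid correction closes.

```python
import numpy as np

# 1D Poisson model problem: A = tridiag(-1, 2, -1) / h^2.
n = 64
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.ones(n)

lam_max = 4.0 / h**2            # upper bound on the largest eigenvalue of A
omega = 1.0 / lam_max           # damping keeps every factor |1 - omega*lam| < 1
x = np.zeros(n)
for k in range(2000):
    r = b - A @ x               # residual
    x += omega * r              # Richardson update x <- x + omega * r

print(np.linalg.norm(b - A @ x))
```

After 2000 sweeps the residual has dropped but is far from machine precision: the smooth error component barely decays, which is precisely why multigrid pairs such relaxation with coarse grids.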
Implementation details of the coupled QMR algorithm
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1992-01-01
The original quasi-minimal residual method (QMR) relies on the three-term look-ahead Lanczos process to generate basis vectors for the underlying Krylov subspaces. However, empirical observations indicate that, in finite precision arithmetic, three-term vector recurrences are less robust than mathematically equivalent coupled two-term recurrences. Therefore, we recently proposed a new implementation of the QMR method based on a coupled two-term look-ahead Lanczos procedure. In this paper, we describe implementation details of this coupled QMR algorithm, and we present results of numerical experiments.
Characteristic-based algorithms for flows in thermo-chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David
1990-01-01
A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. Using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
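PGADP itself targets general nonlinear systems and is beyond an abstract-sized sketch, but the underlying idea, improving a policy from a Q-function learned purely from data with no system model, can be illustrated with plain tabular Q-learning on a toy chain MDP. Everything below (the 5-state chain, rewards, learning rates) is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(s, a):
    """Toy dynamics: action 1 moves right, action 0 moves left;
    reward 1 for reaching/staying at the rightmost state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for episode in range(500):
    s = 0
    for t in range(20):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward the Bellman target,
        # using only the observed transition (s, a, r, s2).
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = Q.argmax(axis=1)
print(greedy)   # the learned greedy policy should move right in every state
```

The actor-critic structure of the paper replaces this lookup table with function approximators fitted by the method of weighted residuals.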
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for an unknown discrete-time linear system. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The approach consists of four main components: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve the optimised tracking control policy; robust H∞ theory is employed to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to explain the guaranteed FTOTC performance of the proposed conclusions. Finally, a case simulation is provided to verify its effectiveness.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight order of magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
NASA Astrophysics Data System (ADS)
Singh, Ranjan Kumar; Rinawa, Moti Lal
2018-04-01
The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One such dimensional change of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence to maximize flexural stiffness and minimize the springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swapping. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups, and a computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
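The flavor of a stacking-sequence search can be sketched with brute-force permutation search over a toy objective: plies far from the mid-plane dominate bending stiffness because D11 sums stiffness times (z_k^3 - z_{k-1}^3)/3. This is not the paper's LSA/PSA; the `cos^4` stiffness model, ply thickness and modulus below are illustrative assumptions, not full classical lamination theory.

```python
import itertools
import numpy as np

def d11(sequence, ply_t=0.125, q11=150e3):
    """Toy bending stiffness of a laminate for a given ply-angle sequence.
    Qbar11(theta) ~ q11 * cos(theta)^4 is a simplifying assumption."""
    n = len(sequence)
    z = np.linspace(-n * ply_t / 2, n * ply_t / 2, n + 1)  # ply interfaces
    qbar = q11 * np.cos(np.radians(sequence)) ** 4
    return float(np.sum(qbar * (z[1:] ** 3 - z[:-1] ** 3)) / 3.0)

plies = (0, 0, 45, -45, 90, 90)          # fixed set of ply angles to order
best = max(set(itertools.permutations(plies)), key=d11)
print(best, d11(best))
```

As expected, the search puts the stiff 0-degree plies on the outside and the 90-degree plies at the mid-plane; the paper's search algorithms reach such sequences without exhaustive enumeration, which becomes infeasible for realistic ply counts.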
A fast parallel clustering algorithm for molecular simulation trajectories.
Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui
2013-01-15
We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. © 2012 Wiley Periodicals, Inc.
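A serial sketch of the greedy k-centers scheme (Gonzalez's farthest-point heuristic) conveys the linear cost per added center mentioned above: keeping a running distance-to-nearest-center array means each new center requires only one pass over the data. The GPU parallelization and triangle-inequality pruning of the paper are omitted, and the random points below merely stand in for MD conformations.

```python
import numpy as np

def k_centers(X, k, seed=0):
    """Greedy k-centers: repeatedly add the point farthest from the
    current centers. Returns center indices and the covering radius."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[centers[0]], axis=1)   # dist to nearest center
    for _ in range(k - 1):
        nxt = int(d.argmax())                       # farthest point so far
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return centers, float(d.max())

X = np.random.default_rng(2).normal(size=(5000, 10))  # stand-in "conformations"
centers4, radius4 = k_centers(X, 4)
_, radius40 = k_centers(X, 40)
print(radius4, radius40)      # the covering radius shrinks as centers are added
```

The greedy choice guarantees a covering radius within a factor of two of the optimal k-centers solution, which is why it is a popular conformation-clustering baseline.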
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction becomes ever more important as machines become faster and lighter. An efficient sensitivity analysis of the residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm that reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly via the Lyapunov equation. Several sensitivity analysis approaches for this performance index were previously developed under the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from an impact load often do depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
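The Lyapunov-equation shortcut for the integrated quadratic index can be sketched as follows: for a stable free response x' = Ax with x(0) = x0, the index J = ∫ xᵀQx dt equals x0ᵀPx0, where P solves AᵀP + PA = -Q, so no time integration of the residual vibration is needed. The 2-DOF-style system matrix and weights below are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])   # lightly damped oscillator (stable)
Q = np.eye(2)
x0 = np.array([1.0, 0.0])      # initial excitation of the residual vibration

# Closed form via the Lyapunov equation A' P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
J_lyap = float(x0 @ P @ x0)

# Brute-force cross-check: propagate x(t) and accumulate x' Q x dt.
dt, n_steps = 1e-3, 60000
Phi = expm(A * dt)             # exact one-step propagator, computed once
x, J_num = x0.copy(), 0.0
for _ in range(n_steps):
    J_num += float(x @ Q @ x) * dt
    x = Phi @ x

print(J_lyap, J_num)
```

Sensitivities of J with respect to design parameters then reduce to derivatives of P (and, per the paper's point, of x0 when the impact-induced excitation itself depends on the design).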
NASA Astrophysics Data System (ADS)
Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie
2011-05-01
Laser induced breakdown spectroscopy (LIBS) can provide rapid, minimally destructive, chemical analysis of substances with the benefit of little to no sample preparation. Therefore, LIBS is a viable technology for the detection of substances of interest in near real-time fielded remote sensing scenarios. Of particular interest to military and security operations is the detection of explosive residues on various surfaces. It has been demonstrated that LIBS is capable of detecting such residues; however, the surface or substrate on which the residue is present can alter the observed spectra. Standard chemometric techniques such as principal components analysis and partial least squares discriminant analysis have previously been applied to explosive residue detection; however, the classification techniques developed on such data perform best against residue/substrate pairs that were included in model training and do not perform well when the residue/substrate pairs are not in the training set. Specifically, residues in the training set may not be correctly detected if they are presented on a previously unseen substrate. In this work, we explicitly model LIBS spectra resulting from the residue and substrate to attempt to separate the response from each of the two components. This separation process is performed jointly with classifier design to ensure that the classifier that is developed is able to detect residues of interest without being confused by variations in the substrates. We demonstrate that the proposed classification algorithm provides improved robustness to variations in substrate compared to standard chemometric techniques for residue detection.
Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.
Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay
2013-07-01
In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in terms of R-D performance due to the side-information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. Experimental results demonstrate that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Condition.
Online Sensor Fault Detection Based on an Improved Strong Tracking Filter
Wang, Lijuan; Wu, Lifeng; Guan, Yong; Wang, Guohui
2015-01-01
We propose a method for online sensor fault detection that is based on an improved strong tracking cubature Kalman filter (STCKF). The cubature rule is used to estimate states, improving estimation accuracy in the nonlinear case. A residual, the difference between an estimated value and the true value, is regarded as a signal that carries fault information. A threshold is set at a reasonable level and compared with the residuals to determine whether or not the sensor is faulty. The proposed method requires only a nominal plant model and uses the STCKF to estimate the original state vector. The effectiveness of the algorithm is verified by simulation on a drum-boiler model. PMID:25690553
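The residual-thresholding logic can be sketched independently of the filter. In the toy below a plain exponential smoother stands in for the STCKF (an assumption for illustration, not the paper's filter), the "drum level" is held constant, and the noise level, bias fault and threshold are all made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(3)
level = 1.0                                      # constant "drum level"
meas = level + rng.normal(scale=0.02, size=400)  # healthy sensor noise
meas[250:] += 0.5                                # injected sensor bias fault

est = meas[0]                  # crude state estimate (EWMA stands in for STCKF)
alpha, threshold = 0.9, 0.15
flags = []
for z in meas:
    residual = z - est         # the residual carries the fault information
    flags.append(abs(residual) > threshold)
    if not flags[-1]:
        est = alpha * est + (1 - alpha) * z   # update only while healthy

print("first flagged sample:", flags.index(True))
```

A real implementation would propagate the estimate through the plant model between measurements and choose the threshold from the residual's statistics rather than by hand.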
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.
NASA Astrophysics Data System (ADS)
Mahalov, M. S.; Blumenstein, V. Yu
2017-10-01
The relevance of research into mechanical condition and residual stresses (RS), and of creating computational algorithms for complex types of loading across product lifecycle stages, is shown. A finite element model of mechanical state and RS formation during surface plastic deformation strengthening machining, including the effect of technological inheritance, is presented. A feature of the model is that it accounts for the transformation of properties acquired in previous production stages, as well as the evolution of these properties as metal particles move through the deformation space during the present loading step.
NASA Astrophysics Data System (ADS)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
NASA Astrophysics Data System (ADS)
Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin
2016-01-01
Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), lose robustness when strong motion errors are present in the coarse-focused image: to capture the complete motion blurring function within each image block, both the block size and the overlap must be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed method.
Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations
NASA Astrophysics Data System (ADS)
Merckelbach, Lucas
2016-12-01
Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
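A first-order low-pass of the kind used for the residual component can be sketched as a one-pole IIR filter (the cutoff, sampling interval and tidal parameters below are illustrative, not the paper's operational values):

```python
import math

def first_order_lowpass(x, cutoff_hz, fs_hz):
    """First-order Butterworth low-pass (one-pole IIR via the bilinear
    transform), used here to isolate the slowly varying residual
    current from the tidal oscillation."""
    k = math.tan(math.pi * cutoff_hz / fs_hz)   # prewarped analogue cutoff
    a0 = k / (1.0 + k)                          # gain on x[n] + x[n-1]
    b1 = (1.0 - k) / (1.0 + k)                  # feedback coefficient
    y, prev_x, prev_y = [], 0.0, 0.0
    for xi in x:
        yi = a0 * (xi + prev_x) + b1 * prev_y
        y.append(yi)
        prev_x, prev_y = xi, yi
    return y

# Depth-averaged current = slow residual drift + fast tidal oscillation
fs = 1.0 / 600.0                 # one sample every 10 minutes (assumed)
tidal_period = 12.42 * 3600.0    # M2 tidal period in seconds
t = [i * 600.0 for i in range(1000)]
signal = [0.1 + 0.5 * math.sin(2 * math.pi * ti / tidal_period) for ti in t]
residual = first_order_lowpass(signal, cutoff_hz=1.0 / (5 * 86400.0), fs_hz=fs)
```

With a cutoff of a few days the tidal band is attenuated roughly tenfold while the mean (residual) current passes through unchanged; running the same filter forward and backward, as in the paper's delayed mode, removes the phase lag as well.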
USDA-ARS's Scientific Manuscript database
The newly invasive pest stink bug, Bagrada hilaris, threatens the cole crop industry and certain ornamentals in the U.S. Without its co-evolved natural enemies, it is likely to spread from the Southwest U.S. to the east coast, requiring millions more dollars to control it. If key biological control ...
Variables, Decisions, and Scripting in Construct
2009-09-01
grounded in sociology and cognitive science which seeks to model the processes and situations by which humans interact and share information...Construct is an embodiment of constructuralism (Carley 1986), a theory which posits that human social structures and cognitive structures co-evolve so that...human cognition reflects human social behavior, and that human social behavior simultaneously influences cognitive processes. Recent work with
Douglas-fir Progeny Testing for Resistance to Western Spruce Budworm
Geral I. McDonald
1983-01-01
Ample evidence exists that inland populations of Douglas-fir suffer varying amounts of defoliation by western spruce budworm (Johnson and Denton 1975; Williams 1967; McDonald 1981). Such variation in plant insect association can be the result of the plant escaping attack in time and place to actual confrontation between plant and insect (Harris 1980). Co-evolved...
CD Nelson; TL Kubisiak; HV Amerson
2010-01-01
Fusiform rust disease remains the most destructive disease in pine plantations in the southern United States. Our ongoing research is designed to identify, map, and clone the interacting genes of the host and pathogen. Several resistance (R) genes have been identified and genetically mapped using informative pine families and single-spore isolate inoculations. In...
Evolution, human-microbe interactions, and life history plasticity.
Rook, Graham; Bäckhed, Fredrik; Levin, Bruce R; McFall-Ngai, Margaret J; McLean, Angela R
2017-07-29
A bacterium was once a component of the ancestor of all eukaryotic cells, and much of the human genome originated in microorganisms. Today, all vertebrates harbour large communities of microorganisms (microbiota), particularly in the gut, and at least 20% of the small molecules in human blood are products of the microbiota. Changing human lifestyles and medical practices are disturbing the content and diversity of the microbiota, while simultaneously reducing our exposures to the so-called old infections and to organisms from the natural environment with which human beings co-evolved. Meanwhile, population growth is increasing the exposure of human beings to novel pathogens, particularly the crowd infections that were not part of our evolutionary history. Thus some microbes have co-evolved with human beings and play crucial roles in our physiology and metabolism, whereas others are entirely intrusive. Human metabolism is therefore a tug-of-war between managing beneficial microbes, excluding detrimental ones, and channelling as much energy as is available into other essential functions (eg, growth, maintenance, reproduction). This tug-of-war shapes the passage of each individual through life history decision nodes (eg, how fast to grow, when to mature, and how long to live). Copyright © 2017 Elsevier Ltd. All rights reserved.
The Role of the Environment in the Evolution of Tolerance and Resistance to a Pathogen.
Zeller, Michael; Koella, Jacob C
2017-09-01
Defense against parasites can be divided into resistance, which limits parasite burden, and tolerance, which reduces pathogenesis at a given parasite burden. Distinguishing between the two and understanding which defense is favored by evolution in different ecological settings are important, as they lead to fundamentally different evolutionary trajectories of host-parasite interactions. We let the mosquito Aedes aegypti evolve under different food levels and with either no parasite, a constant parasite, or a coevolving parasite (the microsporidian Vavraia culicis). We then tested tolerance and resistance of the evolved lines on a population level at the two food levels. Exposure to parasites during evolution increased resistance and tolerance, but there were no differences between the lines evolved with coevolving or constant parasites. Mosquitoes that had evolved with food restriction had higher resistance than those evolved with high food but similar tolerance. The mosquitoes that had restricted food when being tested had lower tolerance than those with normal food, but there was no difference in resistance. Our results emphasize the complexity and dependence on environmental conditions of the evolution and expression of resistance and tolerance and help to evaluate some of the predictions about the evolution of host defense against parasites.
Rickaby, R E M
2015-03-13
Life and the chemical environment are united in an inescapable feedback cycle. The periodic table of the elements essential for life has transformed over Earth's history, but, as today, evolved in tune with the elements available in abundance in the environment. The most revolutionary time in life's history was the advent and proliferation of oxygenic photosynthesis which forced the environment towards a greater degree of oxidation. Consideration of three inorganic chemical equilibria throughout this gradual oxygenation prescribes a phased release of trace metals to the environment, which appear to have coevolved with employment of these new chemicals by life. Evolution towards complexity was chemically constrained, and changes in availability of notably Fe, Zn and Cu paced the systematic development of complex organisms. Evolving life repeatedly catalysed its own chemical challenges via the unwitting release of new and initially toxic chemicals. Ultimately, the harnessing of these allowed life to advance to greater complexity, though the mechanism responsible for translating novel chemistry to heritable use remains elusive. Whether a chemical acts as a poison or a nutrient lies both in the dose and in its environmental history. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Richards, C H; Ventham, N T; Mansouri, D; Wilson, M; Ramsay, G; Mackay, C D; Parnaby, C N; Smith, D; On, J; Speake, D; McFarlane, G; Neo, Y N; Aitken, E; Forrest, C; Knight, K; McKay, A; Nair, H; Mulholland, C; Robertson, J H; Carey, F A; Steele, Rjc
2018-02-01
Colorectal polyp cancers present clinicians with a treatment dilemma. Decisions regarding whether to offer segmental resection or endoscopic surveillance are often taken without reference to good quality evidence. The aim of this study was to develop a treatment algorithm for patients with screen-detected polyp cancers. This national cohort study included all patients with a polyp cancer identified through the Scottish Bowel Screening Programme between 2000 and 2012. Multivariate regression analysis was used to assess the impact of clinical, endoscopic and pathological variables on the rate of adverse events (residual tumour in patients undergoing segmental resection or cancer-related death or disease recurrence in any patient). These data were used to develop a clinically relevant treatment algorithm. 485 patients with polyp cancers were included. 186/485 (38%) underwent segmental resection and residual tumour was identified in 41/186 (22%). The only factor associated with an increased risk of residual tumour in the bowel wall was incomplete excision of the original polyp (OR 5.61, p=0.001), while only lymphovascular invasion was associated with an increased risk of lymph node metastases (OR 5.95, p=0.002). When patients undergoing segmental resection or endoscopic surveillance were considered together, the risk of adverse events was significantly higher in patients with incomplete excision (OR 10.23, p<0.001) or lymphovascular invasion (OR 2.65, p=0.023). A policy of surveillance is adequate for the majority of patients with screen-detected colorectal polyp cancers. Consideration of segmental resection should be reserved for those with incomplete excision or evidence of lymphovascular invasion. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
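The resulting treatment algorithm reduces to a simple decision rule; as a sketch of the reported logic only (not clinical guidance):

```python
def recommend_management(incomplete_excision, lymphovascular_invasion):
    """Sketch of the study's decision rule: surveillance is adequate
    unless the original polyp was incompletely excised or shows
    lymphovascular invasion (the two factors linked to adverse events)."""
    if incomplete_excision or lymphovascular_invasion:
        return "consider segmental resection"
    return "endoscopic surveillance"

plan = recommend_management(incomplete_excision=False,
                            lymphovascular_invasion=False)
```

The rule mirrors the regression findings: incomplete excision (OR 10.23) and lymphovascular invasion (OR 2.65) were the only variables that significantly raised the risk of adverse events.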
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and shows great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, when learning the kernel weights, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
Dwell time method based on Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Jiang, Bo; Ma, Zhen
2017-10-01
When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL deconvolution algorithm from image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edge and denoising the surface error data given by the interferometer make the result more reliable. The simulation results show a final residual error of 10.7912 nm PV and 0.4305 nm RMS, when the initial surface error is 107.2414 nm PV and 15.1331 nm RMS. The convergence rates of the PV and RMS values reach up to 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.
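The RL update applied to the CCOS dwell-time problem can be sketched in one dimension (the removal function and error map below are toy values, not interferometer data):

```python
def conv_same(x, h):
    """'Same'-size convolution with a centred, odd-length kernel."""
    n, half = len(x), len(h) // 2
    out = []
    for i in range(n):
        s = 0.0
        for k in range(len(h)):
            j = i + k - half
            if 0 <= j < n:
                s += x[j] * h[k]
        out.append(s)
    return out

def richardson_lucy(error, psf, iters=1000):
    """RL iteration: dwell <- dwell * correlate(error / (dwell * psf), psf).
    Error map and PSF must be non-negative; the PSF should sum to 1."""
    d = [1.0] * len(error)
    psf_flip = psf[::-1]   # correlation = convolution with the flipped psf
    for _ in range(iters):
        est = conv_same(d, psf)
        ratio = [e / max(v, 1e-12) for e, v in zip(error, est)]
        corr = conv_same(ratio, psf_flip)
        d = [di * c for di, c in zip(d, corr)]
    return d

# Toy example: recover a dwell-time profile from a blurred error map
psf = [0.25, 0.5, 0.25]                          # removal (influence) function
true_dwell = [0.0, 0.0, 2.0, 4.0, 2.0, 0.0, 0.0]
error = conv_same(true_dwell, psf)               # simulated surface error
dwell = richardson_lucy(error, psf)
```

The multiplicative update keeps the dwell time non-negative by construction, which is exactly the physical constraint in CCOS polishing and the reason RL suits this problem.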
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, in practice they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual ΦTy, where y is the compressive measurement vector. We show that the filtered-based algorithm converges to better-quality results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
Jeon, Namju; Lee, Hyeongcheol
2016-01-01
An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431
Zhang, Jian; Zhao, Xiaowei; Sun, Pingping; Gao, Bo; Ma, Zhiqiang
2014-01-01
B-cell epitopes are regions of the antigen surface which can be recognized by certain antibodies and elicit the immune response. Identification of epitopes for a given antigen chain finds vital applications in vaccine and drug research. Experimental identification of B-cell epitopes is time-consuming and resource intensive, so computational approaches to identify B-cell epitopes are of value. In this paper, a novel cost-sensitive ensemble algorithm is proposed for predicting the antigenic determinant residues, and a spatial clustering algorithm is then adopted to identify the potential epitopes. Firstly, we explore various discriminative features from primary sequences. Secondly, a cost-sensitive ensemble scheme is introduced to deal with the imbalanced learning problem. Thirdly, we adopt a spatial clustering algorithm to determine which residues may potentially form the epitopes. Based on the strategies mentioned above, a new predictor, called CBEP (conformational B-cell epitopes prediction), is proposed in this study. CBEP achieves good prediction performance, with mean AUC scores (AUCs) of 0.721 and 0.703 on two benchmark datasets (bound and unbound) using leave-one-out cross-validation (LOOCV). When compared with previous prediction tools, CBEP produces higher sensitivity and comparable specificity values. A web server named CBEP which implements the proposed method is available for academic use.
Landmine detection using two-tapped joint orthogonal matching pursuits
NASA Astrophysics Data System (ADS)
Goldberg, Sean; Glenn, Taylor; Wilson, Joseph N.; Gader, Paul D.
2012-06-01
Joint Orthogonal Matching Pursuits (JOMP) is used here in the context of landmine detection using data obtained from an electromagnetic induction (EMI) sensor. The response from an object containing metal can be decomposed into a discrete spectrum of relaxation frequencies (DSRF), from which we construct a dictionary. A greedy iterative algorithm is proposed for computing successive residuals of a signal by subtracting away the highest-matching dictionary element at each step. The final confidence for a particular signal is a combination of the reciprocal of this residual and the mean of the complex component. A two-tap approach comparing signals on opposite sides of the geometric location of the sensor is examined and found to produce better classification. It is found that using only a single pursuit does a comparable job, reducing complexity and allowing for real-time implementation in automated target recognition systems. JOMP is particularly highlighted in comparison with a previous EMI detection algorithm known as String Match.
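The greedy residual-shrinking step and the reciprocal-residual confidence can be sketched as follows (the dictionary below is a stand-in for the DSRF dictionary, and the complex-component term of the confidence is omitted):

```python
def greedy_pursuit(signal, dictionary, n_pursuits=1):
    """At each step, subtract the projection of the residual onto the
    best-matching (unit-normalised) dictionary atom; confidence is the
    reciprocal of the final residual norm."""
    r = list(signal)
    for _ in range(n_pursuits):
        best_atom, best_coef = None, 0.0
        for atom in dictionary:
            norm = sum(a * a for a in atom) ** 0.5
            unit = [a / norm for a in atom]
            coef = sum(u * v for u, v in zip(unit, r))
            if abs(coef) > abs(best_coef):
                best_atom, best_coef = unit, coef
        if best_atom is None:
            break
        r = [v - best_coef * u for v, u in zip(r, best_atom)]
    res_norm = sum(v * v for v in r) ** 0.5
    confidence = 1.0 / (res_norm + 1e-9)   # small residual -> high confidence
    return confidence, r

dictionary = [[1.0, 0.0, 0.0], [0.6, 0.8, 0.0]]
conf_match, _ = greedy_pursuit([0.6, 0.8, 0.0], dictionary)     # in dictionary
conf_mismatch, _ = greedy_pursuit([0.0, 0.0, 1.0], dictionary)  # orthogonal
```

A metal target whose response matches a dictionary atom leaves almost no residual and scores a high confidence, while clutter orthogonal to the dictionary leaves its full energy as residual; this mirrors the paper's finding that a single pursuit is often enough.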
Conservation of hot regions in protein-protein interaction in evolution.
Hu, Jing; Li, Jiarui; Chen, Nansheng; Zhang, Xiaolong
2016-11-01
The hot regions of protein-protein interactions are the active areas formed by the residues most important to the binding process. As research on protein interactions has developed, many hot regions can be predicted efficiently by intelligent computing methods, but verifying every prediction by biological experiment is impractical because of the time and complexity involved. Building on research into hot-spot residue conservation, the proposed method verifies the authenticity of predicted hot regions using a machine learning algorithm combined with the proteins' biological features and sequence conservation: through multiple sequence alignment, a module substitution matrix, and sequence similarity, it creates a conservation scoring algorithm, and then uses a threshold module to verify the conservation tendency of hot regions in evolution. This work provides an effective method for verifying predicted hot regions in protein-protein interactions, as well as a useful way to investigate the functional activities of protein hot regions in depth. Copyright © 2016. Published by Elsevier Inc.
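The paper's conservation scoring combines multiple sequence alignment, a module substitution matrix, and sequence similarity; a minimal stand-in using per-column Shannon entropy with a threshold module looks like this (the entropy formulation and the 0.8 threshold are illustrative assumptions, not the paper's exact scheme):

```python
import math

def conservation_score(column):
    """Entropy-based conservation for one alignment column:
    1.0 = fully conserved, 0.0 = maximally variable over 20 residues."""
    counts = {}
    for aa in column:
        counts[aa] = counts.get(aa, 0) + 1
    n = len(column)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1.0 - h / math.log2(20)

def is_conserved(column, threshold=0.8):
    """Threshold module: flag a residue position as conserved."""
    return conservation_score(column) >= threshold

# Columns taken down a hypothetical multiple sequence alignment
fully_conserved = conservation_score("AAAAA")   # same residue in all sequences
variable = conservation_score("ACDEF")          # different residue everywhere
```

A predicted hot region whose positions mostly pass the threshold is consistent with the conservation tendency expected of functionally important residues.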
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiencies greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
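The inter-grid transfer by linear interpolation can be sketched in one dimension (an illustration of the general idea only; in the unstructured tetrahedral setting the interpolation addresses and weights come from the graph-traversal preprocessing):

```python
def prolongate(coarse):
    """Transfer corrections from a coarse grid (n points) to a fine
    grid (2n - 1 points) by linear interpolation at midpoints."""
    fine = []
    for i in range(len(coarse) - 1):
        fine.append(coarse[i])
        fine.append(0.5 * (coarse[i] + coarse[i + 1]))
    fine.append(coarse[-1])
    return fine

def restrict(fine):
    """Transfer residuals from the fine grid back to the coarse grid
    by injection at coincident points."""
    return fine[::2]

correction = prolongate([0.0, 2.0, 4.0])   # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

In the multigrid cycle, `restrict` carries residuals down to coarser grids where errors are cheap to smooth, and `prolongate` carries the resulting corrections back up to the fine grid.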
A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment
Freschi, Valerio; Bogliolo, Alessandro
2012-01-01
In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since they are not compliant with the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered to be an open problem and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations do not contain tandem duplications, while retaining enough information for making their comparison even more significant than the edit distance between the original sequences. This allows us to exploit traditional alignment algorithms directly on the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment. PMID:22518086
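The iterative collapse of repeats of increasing unit length can be sketched as follows (a simplified scheme that discards the repeat counts; the paper's representation is built to retain information for alignment):

```python
def collapse_repeats(seq, max_unit=3):
    """Lossy compression: for unit lengths 1..max_unit in turn, reduce
    any tandem run of a repeated unit to a single copy of that unit."""
    for unit in range(1, max_unit + 1):
        out = []
        i = 0
        while i < len(seq):
            out.append(seq[i:i + unit])
            j = i
            # skip over consecutive copies of the current unit
            while (len(seq[j:j + unit]) == unit
                   and seq[j + unit:j + 2 * unit] == seq[j:j + unit]):
                j += unit
            i = j + unit
        seq = "".join(out)
    return seq

compressed = collapse_repeats("TTAGGGTTAGGGTTAGGG")  # telomere-like repeat -> "TAG"
```

Because every tandem run collapses to one copy, two sequences that differ only in repeat copy number map to the same compressed form, so a standard aligner applied to the compressed sequences is no longer penalised by the duplications.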
Sim, Jaehyun; Sim, Jun; Park, Eunsung; Lee, Julian
2015-06-01
Many proteins undergo large-scale motions where relatively rigid domains move against each other. The identification of rigid domains, as well as the hinge residues important for their relative movements, is important for various applications including flexible docking simulations. In this work, we develop a method for protein rigid domain identification based on an exhaustive enumeration of maximal rigid domains, the rigid domains not fully contained within other domains. The computation is performed by mapping the problem to that of finding maximal cliques in a graph. A minimal set of rigid domains is then selected, which covers most of the protein with minimal overlap. In contrast to the results of existing methods that partition a protein into non-overlapping domains using approximate algorithms, the rigid domains obtained from exact enumeration naturally contain overlapping regions, which correspond to the hinges of the inter-domain bending motion. The performance of the algorithm is demonstrated on several proteins. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, E; Lasio, G; Yi, B
2014-06-01
Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-de-blurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; reconstructing the difference between the measured projections and these forward projections yields CTSub1, which diminishes the desired phase component. 3) Adding CTSub1 back to CTM gives a no-motion CBCT, CTS1. 4) CTS1 still contains a residual motion component, which can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion de-blurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institute of Health through R01CA133539.
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of numerically calculating them is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum-traveltime tree ray-tracing algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Lanyi, G. E.
1994-01-01
To compare the validity of current algorithms that map zenith tropospheric delay to arbitrary elevation angles, 10 different tropospheric mapping functions are used to analyze the current database of Deep Space Network Mark 3 intercontinental very long baseline interferometric (VLBI) data. This analysis serves as a stringent test because of the high proportion of low-elevation observations necessitated by the extremely long baselines. Postfit delay and delay-rate residuals are examined, as well as the scatter of baseline lengths about the time-linear model that characterizes tectonic motion. Among the functions that utilize surface meteorological data as input parameters, the Lanyi 1984 mapping shows the best performance both for residuals and baselines, though the 1985 Davis function is statistically nearly identical. The next best performance is shown by the recent function of Niell, which is based on an examination of global atmospheric characteristics as a function of season and uses no weather data at the time of the measurements. The Niell function shows a slight improvement in residuals relative to Lanyi, but also an increase in baseline scatter that is significant for the California-Spain baseline. Two variants of the Chao mapping function, as well as the Chao tables used with the interpolation algorithm employed in the Orbit Determination Program software, show substandard behavior for both VLBI residuals and baseline scatter. The length of the California-Australia baseline (10,600 km) in the VLBI solution can vary by as much as 5 to 10 cm for the 10 mapping functions.
New spatial diversity equalizer based on PLL
NASA Astrophysics Data System (ADS)
Rao, Wei
2011-10-01
A new Spatial Diversity Equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE consists of an equal-gain combining technique based on the well-known constant modulus algorithm (CMA) for blind equalization, together with a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
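A minimal sketch of the CMA tap update at the core of such an equalizer follows. The equal-gain combining and PLL stages are omitted, and the tap count, step size, and center-spike initialization are assumptions:

```python
def cma_equalize(received, num_taps=5, mu=0.01, radius=1.0):
    """Constant modulus algorithm (CMA) blind equalizer sketch.

    Adapts FIR taps w so the equalizer output tends toward constant
    modulus `radius`; returns the output sequence and final taps.
    """
    w = [0j] * num_taps
    w[num_taps // 2] = 1 + 0j              # center-spike initialization
    buf = [0j] * num_taps                  # tap-delay line
    out = []
    for x in received:
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = y * (abs(y) ** 2 - radius ** 2)        # CMA error term
        w = [wi - mu * e * xi.conjugate() for wi, xi in zip(w, buf)]
        out.append(y)
    return out, w
```

A PLL would then de-rotate `y` before slicing, since CMA alone is blind to a constant carrier phase offset.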
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. 
We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Statistical discovery of site inter-dependencies in sub-molecular hierarchical protein structuring
2012-01-01
Background: Much progress has been made in understanding the 3D structure of proteins using methods such as NMR and X-ray crystallography. The resulting 3D structures are extremely informative, but do not always reveal which sites and residues within the structure are of special importance. Recently, there have been indications that multiple-residue, sub-domain structural relationships within the larger 3D consensus structure of a protein can be inferred from the analysis of the multiple sequence alignment data of a protein family. These intra-dependent clusters of associated sites are used to indicate hierarchical inter-residue relationships within the 3D structure. To reveal the patterns of associations among individual amino acids or sub-domain components within the structure, we apply a k-modes attribute (aligned site) clustering algorithm to the ubiquitin and transthyretin families in order to discover associations among groups of sites within the multiple sequence alignment. We then observe what these associations imply within the 3D structure of these two protein families. Results: The k-modes site clustering algorithm we developed maximizes the intra-group interdependencies based on a normalized mutual information measure. The clusters formed correspond to sub-structural components or binding and interface locations. Applying this data-directed method to the ubiquitin and transthyretin protein family multiple sequence alignments as a test bed, we located numerous interesting associations of interdependent sites. These clusters were then arranged into cluster tree diagrams which revealed four structural sub-domains within the single domain structure of ubiquitin and a single large sub-domain within transthyretin associated with the interface among transthyretin monomers. In addition, several clusters of mutually interdependent sites were discovered for each protein family, each of which appears to play an important role in the molecular structure and/or function. 
Conclusions: Our results demonstrate that the method we present here, using a k-modes site clustering algorithm based on interdependency evaluation among sites obtained from a sequence alignment of homologous proteins, can provide significant insight into the complex, hierarchical inter-residue structural relationships within the 3D structure of a protein family. PMID:22793672
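The normalized mutual information between two aligned columns, the interdependency measure this family of methods builds on, can be computed as below. The joint-entropy normalization shown is one common choice and is an assumption here, not necessarily the paper's exact measure:

```python
from math import log
from collections import Counter

def nmi(col_a, col_b):
    """Normalized mutual information between two aligned-site columns.

    Returns I(A;B) / H(A,B); fully conserved columns give 0 since they
    carry no mutual information.  Columns must have equal length.
    """
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    # mutual information: sum p(a,b) * log[p(a,b) / (p(a) p(b))]
    mi = sum((c / n) * log(c * n / (pa[a] * pb[b]))
             for (a, b), c in pab.items())
    # joint entropy used as the normalizer
    hj = -sum((c / n) * log(c / n) for c in pab.values())
    return mi / hj if hj > 0 else 0.0
```

Perfectly co-varying columns score 1, independent columns score 0; a site-clustering scheme would use pairwise scores like this to group interdependent columns.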
Statistical discovery of site inter-dependencies in sub-molecular hierarchical protein structuring.
Durston, Kirk K; Chiu, David Ky; Wong, Andrew Kc; Li, Gary Cl
2012-07-13
A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors
Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca
2012-01-01
Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
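A toy single-exponential version of the LTI lag model and its recursive deconvolution is sketched below. The paper uses a four-exponential IRF and a nonlinear (exposure-dependent) extension; the constants `a` and `f` here are purely illustrative:

```python
def add_lag(frames, a=0.8, f=0.05):
    """Simulate single-exponential detector lag (toy IRF model).

    Each frame reads out the true signal plus a residual that decays
    geometrically (factor a) from all earlier frames.
    """
    s, out = 0.0, []
    for x in frames:
        out.append(x + f * a * s)   # true signal + carried-over lag
        s = x + a * s               # update stored-charge state
    return out

def correct_lag(measured, a=0.8, f=0.05):
    """Invert the toy lag model recursively (LTI deconvolution)."""
    s, out = 0.0, []
    for m in measured:
        x = m - f * a * s           # remove predicted residual signal
        out.append(x)
        s = x + a * s               # track estimated stored charge
    return out
```

The nonlinear correction in the paper follows the same consistent stored-charge idea but makes the IRF coefficients functions of exposure intensity.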
Embedding strategies for effective use of information from multiple sequence alignments.
Henikoff, S.; Henikoff, J. G.
1997-01-01
We describe a new strategy for utilizing multiple sequence alignment information to detect distant relationships in searches of sequence databases. A single sequence representing a protein family is enriched by replacing conserved regions with position-specific scoring matrices (PSSMs) or consensus residues derived from multiple alignments of family members. In comprehensive tests of these and other family representations, PSSM-embedded queries produced the best results overall when used with a special version of the Smith-Waterman searching algorithm. Moreover, embedding consensus residues instead of PSSMs improved performance with readily available single sequence query searching programs, such as BLAST and FASTA. Embedding PSSMs or consensus residues into a representative sequence improves searching performance by extracting multiple alignment information from motif regions while retaining single sequence information where alignment is uncertain. PMID:9070452
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks
Penumalli, Chakradhar; Palanichamy, Yogesh
2015-01-01
A new energy-efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem (BSP) while also considering each node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks (MANETs) and Wireless Sensor Networks (WSNs). The CDS of a graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here the CDS is constructed by a distributed algorithm with activity scheduling based on the unit disk graph (UDG) model. Node mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. Theoretical analysis and simulation results of the algorithm are also presented and show improved performance. PMID:26221627
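The coverage-greedy core of CDS construction can be sketched as below. The paper's algorithm additionally weighs residual energy and mobility and runs distributed; this hypothetical centralized version shows only the dominating-set growth:

```python
def greedy_cds(adj):
    """Greedy connected dominating set sketch.

    adj: dict node -> set of neighbours (connected graph assumed).
    Grows the CDS from the highest-degree node, always adding the
    frontier node that dominates the most still-uncovered nodes;
    drawing candidates from the CDS frontier keeps the set connected.
    """
    start = max(adj, key=lambda v: len(adj[v]))
    cds = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        frontier = {v for u in cds for v in adj[u]} - cds
        best = max(frontier, key=lambda v: len(adj[v] - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds
```

An energy-aware variant would replace the `max` key with a score mixing uncovered-neighbour count, residual energy, and node stability.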
Mary E. Mason; Daniel A. Herms; David W. Carey; Kathleen S. Knight; Nurul I. Faridi; Jennifer. Koch
2011-01-01
Contrary to the high levels of devastation observed on North American ash species infested with emerald ash borer (EAB) (Agrilus planipennis Fairmaire), reports from Asia indicate that EAB-induced destruction of Asian ash species is limited to stressed trees. This indicates that Asian ash species have co-evolved resistance, or at least a high degree...
ERIC Educational Resources Information Center
Schultheis, Patrick J.; Bowling, Bethany V.
2011-01-01
Recent experimental evidence indicates that the ability of adults to tolerate milk, cheese, and other lactose-containing dairy products is an autosomal dominant trait that co-evolved with dairy farming in Central Europe about 7,500 years ago. Among persons of European descent, this trait is strongly associated with a C to T substitution at a…
Parchman, Thomas L; Benkman, Craig W; Mezquida, Eduardo T
2007-09-01
Crossbills (Aves: Loxia) and several conifers have coevolved in predator-prey arms races over the last 10,000 years. However, the extent to which coevolutionary arms races have contributed to the adaptive radiation of crossbills or to any other adaptive radiation is largely unknown. Here we extend our previous studies of geographically structured coevolution by considering a crossbill-conifer interaction that has persisted for a much longer time period and involves a conifer with more variable annual seed production. We examined geographic variation in the cone and seed traits of two sister species of pines, Pinus occidentalis and P. cubensis, on the islands of Hispaniola and Cuba, respectively. We also compared the Hispaniolan crossbill (Loxia megaplaga) to its sister taxon, the North American white-winged crossbill (Loxia leucoptera leucoptera). The Hispaniolan crossbill is endemic to Hispaniola whereas Cuba lacks crossbills. In addition, and in contrast to previous studies, the variation in selection experienced by these pines due to crossbills is not confounded by the occurrence of selection by tree squirrels (Tamiasciurus and Sciurus). As predicted if P. occidentalis has evolved defenses in response to selection exerted by crossbills, cones of P. occidentalis have scales that are 53% thicker than those of P. cubensis. Cones of P. occidentalis, but not P. cubensis, also have well-developed spines, a known defense against vertebrate seed predators. Consistent with patterns of divergence seen in crossbills coevolving locally with other conifers, the Hispaniolan crossbill has evolved a bill that is 25% deeper than that of the white-winged crossbill. Together with phylogenetic analyses, our results suggest that predator-prey coevolution between Hispaniolan crossbills and P. occidentalis over approximately 600,000 years has caused substantial morphological evolution in both the crossbill and pine. 
This also indicates that cone crop fluctuations do not prevent crossbills and conifers from coevolving. Furthermore, because the traits at the phenotypic interface of the interaction apparently remain the same over at least several hundred thousand years, divergence as a result of coevolution is greater at lower latitude where crossbill-conifer interactions have been less interrupted by Pleistocene events.
False colors removal on the YCr-Cb color space
NASA Astrophysics Data System (ADS)
Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe
2009-01-01
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose. This is mainly because the input of post-processing algorithms is a fully restored RGB color image. Moreover, post-processing can be applied more than once, in order to meet some quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms, in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
NASA Astrophysics Data System (ADS)
Swaraj Pati, Mythili N.; Korde, Pranav; Dey, Pallav
2017-11-01
The purpose of this paper is to introduce an optimised variant of the round robin scheduling algorithm. Every algorithm works in its own way and has its own merits and demerits. The proposed algorithm overcomes the shortfalls of existing scheduling algorithms in terms of waiting time, turnaround time, throughput and number of context switches. The algorithm is pre-emptive and works on the priority of the associated processes. The priority is decided on the basis of the remaining burst time of a process: the lower the burst time, the higher the priority, and the higher the burst time, the lower the priority. A time quantum is specified initially. If the remaining burst time of a process is more than 1x but less than 2x the specified time quantum, the process is given high priority and is allowed to run to completion. Such processes do not have to wait for their next burst cycle.
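A minimal simulation of the described rule, assuming the ready queue is re-prioritized by remaining burst time at every dispatch. The function name, the completion-time bookkeeping, and the tie-breaking are assumptions:

```python
def schedule(bursts, quantum):
    """Sketch of the burst-time-prioritized round robin variant.

    bursts: dict pid -> burst time.  At each dispatch the process with
    the smallest remaining burst runs (highest priority); if its
    remaining burst is between 1x and 2x the quantum it runs to
    completion, otherwise it runs one quantum (or less, if shorter).
    Returns completion times keyed by pid.
    """
    remaining = dict(bursts)
    t, done = 0, {}
    while remaining:
        pid = min(remaining, key=remaining.get)     # highest priority
        r = remaining[pid]
        run = r if quantum < r < 2 * quantum else min(r, quantum)
        t += run
        if run == r:
            done[pid] = t                           # finished entirely
            del remaining[pid]
        else:
            remaining[pid] = r - run                # re-queued
    return done
```

The 1x-2x rule trades one slightly longer slice for the context switch and extra wait the process would otherwise incur.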
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather patient’s vital signs and deliver them to doctors. One of the most significant challenges faced, is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed in multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and the link reliability of the neighboring nodes, which is used to balance the energy consumption and to satisfy QoS requirements in terms of end to end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end to end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
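The joint hop-count/link-cost selection might be sketched as follows. The weights and the exact cost form are assumptions for illustration, not the paper's calibrated function:

```python
def next_hop(neighbors, w=(0.4, 0.3, 0.3)):
    """Hedged sketch of next-hop selection via hop count + link cost.

    neighbors: list of dicts with keys 'id', 'hops' (hop count to the
    sink), 'energy' (residual fraction), 'buffer' (free fraction) and
    'reliability' (link delivery ratio), all assumed in (0, 1].
    Among minimum-hop neighbours, picks the one with the lowest cost;
    cost grows as energy, free buffer, or reliability shrink.
    """
    min_hops = min(n["hops"] for n in neighbors)
    candidates = [n for n in neighbors if n["hops"] == min_hops]
    we, wb, wr = w
    def cost(n):
        return we / n["energy"] + wb / n["buffer"] + wr / n["reliability"]
    return min(candidates, key=cost)["id"]
```

Filtering by hop count first bounds end-to-end delay, while the cost term balances energy drain and reliability across the remaining candidates.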
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight order of magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
Improved Boundary Layer Depth Retrievals from MPLNET
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette
2013-01-01
Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data is not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers, and in general, produces lower PBL depths. A 2010 comparison shows the operational algorithm overestimates the daily mean PBL depth when compared to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons, which suggests the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall, and they differed most in the winter.
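The wavelet core of such a PBL-depth retrieval can be illustrated with a Haar covariance transform on a toy backscatter profile. The image-processing stage of the improved algorithm is omitted, and the dilation value is an assumption:

```python
def haar_transform(profile, dilation):
    """Haar wavelet covariance transform of a vertical lidar profile.

    For each center index b, compares the mean signal just below b with
    the mean just above it; large positive values mark sharp decreases
    in backscatter, the classic signature of the PBL top.
    """
    half = dilation // 2
    w = [0.0] * len(profile)
    for b in range(half, len(profile) - half):
        lower = sum(profile[b - half:b])        # within the PBL
        upper = sum(profile[b:b + half])        # free troposphere
        w[b] = (lower - upper) / dilation
    return w

def pbl_top(profile, dilation=10):
    """Index of the strongest backscatter decrease (candidate PBL top)."""
    w = haar_transform(profile, dilation)
    return max(range(len(w)), key=w.__getitem__)
```

Clouds and residual layers produce competing maxima in the transform, which is why the operational retrieval needs the additional image-processing step.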
Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm
NASA Astrophysics Data System (ADS)
Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong
2018-06-01
The abnormal frequencies of an atomic clock mainly comprise frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. To obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor, and the predicted state covariance matrix is corrected in real time by this factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by a frequency jump simulation, a frequency drift jump simulation and measured atomic clock data, using the chi-square test.
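A scalar sketch of the residual-driven adaptive correction follows. The random-walk state model, the noise levels, and the specific inflation rule are assumptions for illustration, not the authors' exact formulation:

```python
def adaptive_kf(measurements, q=1e-4, r=1e-2, chi_thresh=9.0):
    """Scalar adaptive Kalman filter sketch for frequency tracking.

    State model: random-walk frequency (process noise q, measurement
    noise r).  The normalized innovation squared is tested against
    chi_thresh (a chi-square-style gate); when exceeded, the predicted
    covariance is inflated by an adaptive factor so the filter
    re-weights toward the new measurements instead of diverging slowly.
    Returns (estimates, anomaly_flags).
    """
    x, p = measurements[0], 1.0
    estimates, flags = [], []
    for z in measurements:
        p_pred = p + q                      # predict covariance
        s = p_pred + r                      # innovation covariance
        resid = z - x                       # prediction residual
        ratio = resid * resid / s
        anomaly = ratio > chi_thresh
        if anomaly:
            p_pred *= ratio / chi_thresh    # adaptive inflation
            s = p_pred + r
        k = p_pred / s                      # Kalman gain
        x = x + k * resid                   # update state
        p = (1 - k) * p_pred
        estimates.append(x)
        flags.append(anomaly)
    return estimates, flags
```

On a simulated frequency jump, the gate fires at the jump and the inflated gain lets the estimate re-converge within a few samples.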
NASA Astrophysics Data System (ADS)
Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran
2016-05-01
Push-broom sensors, in bands meant to study oceans, generally suffer from residual non-uniformity even after radiometric correction. The in-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts and different approaches to solve the problem using the image data itself; the success of each algorithm depends on the quality of the uniform regions identified. In this paper, an image-based destriping algorithm is presented with constraints derived from a ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity from uniform segments identified and collected from a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is qualitatively evaluated by visual inspection and quantitatively measured by two parameters.
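The pixel-to-pixel gain estimation from pooled uniform segments might look like this toy version. Real data would pool many segments across the sensor's dynamic range, and the gain-only (no offset) model is an assumption:

```python
def estimate_gains(segments):
    """Estimate per-pixel relative gains from uniform image segments.

    segments: list of rows (one value per across-track pixel) taken
    from regions assumed radiometrically uniform.  The gain of each
    pixel column is its mean response relative to the overall mean.
    """
    ncols = len(segments[0])
    col_means = [sum(row[c] for row in segments) / len(segments)
                 for c in range(ncols)]
    overall = sum(col_means) / ncols
    return [m / overall for m in col_means]

def destripe(image, gains):
    """Divide each across-track pixel column by its estimated gain."""
    return [[v / g for v, g in zip(row, gains)] for row in image]
```

Since every image row shares the same detector array in a push-broom sensor, a single gain vector corrects the whole scene.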
Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.
Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus
2008-08-05
A physically based algorithm is used for automatic processing of MERIS level 1B full resolution data. The algorithm is originally used with input variables for optimization with different sensors (i.e. channel recalibration and weighting), aquatic regions (i.e. specific inherent optical properties) or atmospheric conditions (i.e. aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydrooptic conditions, and accounting for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, and a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the Lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics to define boundaries between different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi-degree-of-freedom system are generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of the proposed algorithms.
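One AR-residual damage feature can be sketched as the ratio of residual variances under a baseline AR model; the order p and the variance-ratio indicator are illustrative choices, not the paper's full feature set (which also uses model spectra and residual autocorrelation).

```python
import numpy as np

def fit_ar(x, p=4):
    """Least-squares AR(p) coefficients from a baseline signal."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def ar_residuals(x, coef):
    """One-step prediction residuals of x under the given AR model."""
    p = len(coef)
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ coef

def damage_indicator(baseline, test, p=4):
    """Ratio >> 1 suggests the test signal no longer fits the baseline model."""
    coef = fit_ar(baseline, p)
    r0 = ar_residuals(baseline, coef)
    r1 = ar_residuals(test, coef)
    return np.var(r1) / np.var(r0)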
CFA-aware features for steganalysis of color images
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica
2015-03-01
Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, namely AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.
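As a rough illustration of splitting residual statistics by Bayer position, the sketch below histograms horizontal noise-residual pairs separately for the four 2x2 CFA positions of a single channel. The first-order difference residual and the truncation threshold T are simplifying assumptions, far simpler than the paper's 3D colour co-occurrence model.

```python
import numpy as np

def cfa_cooccurrence(channel, T=2):
    """Split a (H x W) channel's horizontal noise residuals by the 2x2 Bayer
    position and histogram adjacent residual pairs per position."""
    res = np.clip(np.diff(channel.astype(int), axis=1), -T, T)
    feats = {}
    for dy in (0, 1):
        for dx in (0, 1):
            sub = res[dy::2, dx::2]               # one Bayer position
            pairs = np.stack([sub[:, :-1], sub[:, 1:]], axis=-1).reshape(-1, 2)
            hist = np.zeros((2 * T + 1, 2 * T + 1))
            for a, b in pairs + T:                # shift into index range
                hist[a, b] += 1
            feats[(dy, dx)] = hist / max(len(pairs), 1)
    return feats
```

The point of the split is that interpolated and directly sensed pixels have different residual statistics, so pooling them (as in grayscale rich models) would blur exactly the constraint the detector exploits.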
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
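For orientation, a sketch of plain Orthogonal Matching Pursuit is given below; the stabilization step and the convergence criterion that SOMP adds to select the optimal sparsity level are not reproduced here.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Greedy sparse solve of y ≈ A x with at most k nonzeros."""
    r = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        if np.linalg.norm(r) < tol:
            break
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs            # residual after re-fitting support
    if support:
        x[support] = xs
    return x
```

Each iteration grows the support by the atom best aligned with the current residual, then re-solves the least-squares problem on the whole support, so the residual stays orthogonal to all selected atoms.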
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud
2012-11-01
We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.
An Extension to the Kalman Filter for an Improved Detection of Unknown Behavior
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Narasimhan, Sriram
2005-01-01
The use of a Kalman filter (KF) interferes with fault detection algorithms based on the residual between estimated and measured variables, since the measured values are used to update the estimates. This feedback pulls the estimates closer to the measured values, influencing the residuals in the process. Here we present a fault detection scheme for systems that are being tracked by a KF. Our approach combines an open-loop prediction over an adaptive window with an information-based measure of the deviation of the Kalman estimate from the prediction to improve fault detection.
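The scheme can be caricatured for a one-dimensional constant state with a fixed window (the paper uses an adaptive window and an information-based deviation measure); the model, noise levels, and gate below are illustrative assumptions.

```python
import numpy as np

def detect(z, q=1e-5, r=1e-2, W=10, gate=3.0):
    """Flag epochs where the measurement-driven KF estimate has drifted
    away from an open-loop prediction started W steps in the past."""
    x, P = 0.0, 1.0
    hist = []                          # (state, covariance) history
    flags = []
    for zk in z:
        P += q                         # predict (F = 1)
        K = P / (P + r)                # closed-loop measurement update
        x = x + K * (zk - x)
        P = (1 - K) * P
        hist.append((x, P))
        if len(hist) > W:
            x0, P0 = hist[-W - 1]      # open-loop: state W steps ago,
            Pol = P0 + W * q           # covariance grown without updates
            d = abs(x - x0) / np.sqrt(Pol + P)
            flags.append(d > gate)
        else:
            flags.append(False)
    return np.array(flags)
```

Because the open-loop prediction never sees the measurements, a fault that the closed-loop filter would silently absorb shows up as a growing normalized deviation.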
Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.
2012-01-01
Target tracking is an important application of wireless sensor networks. A network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments; therefore, density control algorithms are used to increase network lifetime while maintaining sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, among the evaluated density control algorithms, OGDC is the best option. Although the choice of the density control algorithm has little impact on tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329
The role of receptor topology in the vitamin D3 uptake and Ca2+ response systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrill, Gene A., E-mail: gene.morrill@einstein.yu.edu; Kostellow, Adele B.; Gupta, Raj K.
The steroid hormone vitamin D3 regulates gene transcription via at least two receptors and initiates putative rapid response systems at the plasma membrane. The vitamin D receptor (VDR) binds vitamin D3, and a second receptor, importin-4, imports the VDR–vitamin D3 complex into the nucleus via nuclear pores. Here we present evidence that the Homo sapiens VDR homodimer contains two transmembrane (TM) helices (E327–D342), two TM "half-helices" (K264–N276), one or more large channels, and 16 cholesterol-binding (CRAC/CARC) domains. The importin-4 monomer exhibits three pore-lining regions (E226–L251; V768–G783; S876–A891) and 16 CRAC/CARC domains. The MEMSAT algorithm indicates that VDR and importin-4 may not be restricted to the cytoplasm and nucleus. The TM helix topology of the VDR homodimer predicts insertion into the plasma membrane, with two 84-residue C-terminal regions being extracellular. Similarly, MEMSAT predicts importin-4 insertion into the plasma membrane, with 226-residue extracellular N-terminal regions and 96-residue C-terminal extracellular loops, the pore-lining regions contributing gated Ca2+ channels. The PoreWalker algorithm indicates that, of the 427 residues in each VDR monomer, 91 line the largest channel, including two vitamin D3 binding sites and residues from both the TM helix and "half-helix". Cholesterol-binding domains also extend into the channel within the ligand-binding region. Programmed changes in bound cholesterol may regulate both membrane Ca2+ response systems and vitamin D3 uptake, as well as receptor internalization by the endomembrane system, culminating in uptake of the vitamin D3–VDR–importin-4 complex into the nucleus.
Residual motion compensation in ECG-gated interventional cardiac vasculature reconstruction
NASA Astrophysics Data System (ADS)
Schwemmer, C.; Rohkohl, C.; Lauritsch, G.; Müller, K.; Hornegger, J.
2013-06-01
Three-dimensional reconstruction of cardiac vasculature from angiographic C-arm CT (rotational angiography) data is a major challenge. Motion artefacts corrupt image quality, reducing usability for diagnosis and guidance. Many state-of-the-art approaches depend on retrospective ECG-gating of projection data for image reconstruction. A trade-off has to be made regarding the size of the ECG-gating window. A large temporal window is desirable to avoid undersampling. However, residual motion will occur in a large window, causing motion artefacts. We present an algorithm to correct for residual motion. Our approach is based on a deformable 2D-2D registration between the forward projection of an initial, ECG-gated reconstruction, and the original projection data. The approach is fully automatic and does not require any complex segmentation of vasculature, or landmarks. The estimated motion is compensated for during the backprojection step of a subsequent reconstruction. We evaluated the method using the publicly available CAVAREV platform and on six human clinical datasets. We found a better visibility of structure, reduced motion artefacts, and increased sharpness of the vessels in the compensated reconstructions compared to the initial reconstructions. At the time of writing, our algorithm outperforms the leading result of the CAVAREV ranking list. For the clinical datasets, we found an average reduction of motion artefacts by 13 ± 6%. Vessel sharpness was improved by 25 ± 12% on average.
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique uses a weighting scheme that incorporates two factors: the distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: it requires less intervention and is applicable to a dense network of any spatial extent. Our method can also be used to detect the CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
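A minimal sketch of the weighted common-mode stack, assuming weights of the form correlation/distance with hard distance and correlation thresholds; the paper grid-searches such thresholds to minimize the RMS of the filtered residuals, which is not reproduced here.

```python
import numpy as np

def common_mode(res, dist, d_max=500.0, c_min=0.5):
    """res: (stations x epochs) detrended residual series; dist: (stations x
    stations) separations in km. Returns the CMC estimated at each station."""
    n = res.shape[0]
    corr = np.corrcoef(res)
    cmc = np.zeros_like(res)
    for i in range(n):
        mask = (dist[i] < d_max) & (corr[i] > c_min)
        mask[i] = False                      # exclude the station itself
        if not mask.any():
            continue                         # isolated station: no CMC estimate
        w = corr[i, mask] / np.maximum(dist[i, mask], 1.0)
        cmc[i] = w @ res[mask] / w.sum()     # weighted stack of neighbours
    return cmc
```

Subtracting the returned CMC from each station's series leaves the filtered residuals whose RMS the threshold search would minimize; shrinking `d_max` is what confines the stack to short-range, potentially tectonic, common signals.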
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer Kalman identification, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals; the residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
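The flavor of residual-based tuning can be sketched for a scalar system with innovation covariance matching, using E[v^2] = HPH' + R to correct R over a sliding window. This is an illustrative stand-in running in parallel with the filter, not the paper's sequential tuning equations.

```python
import numpy as np

def kf_estimate_r(z, q=1e-4, r0=1.0, window=50):
    """Scalar random-walk KF that adapts its measurement noise R online
    from the sample second moment of the innovations."""
    x, P, r = 0.0, 1.0, r0
    innov2 = []
    for zk in z:
        P += q                         # predict (F = H = 1)
        v = zk - x                     # innovation
        innov2.append(v * v)
        if len(innov2) >= window:
            # E[v^2] = P + R  =>  R ≈ mean(v^2) - P, floored at a tiny value
            r = max(np.mean(innov2[-window:]) - P, 1e-12)
        K = P / (P + r)                # update with the current R estimate
        x += K * v
        P *= (1 - K)
    return r
```

A badly chosen r0 only affects the first window of innovations; after that the matched R keeps the residual sequence statistically consistent with the filter's own covariance.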
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
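The idea can be sketched for ordinary least squares (the paper treats maximum likelihood estimates, where X would be the output sensitivity matrix): replace the white-noise parameter covariance with a "sandwich" estimate built from the residual autocovariance, truncated at a maximum lag. The truncation lag is an illustrative practical choice, not taken from the paper.

```python
import numpy as np

def corrected_param_cov(X, resid, max_lag=50):
    """Sandwich covariance (X'X)^-1 X' R X (X'X)^-1, where R is a banded
    Toeplitz matrix built from the sample autocovariance of the residuals."""
    n = len(resid)
    acov = np.zeros(n)
    for k in range(min(max_lag, n - 1) + 1):
        acov[k] = np.dot(resid[:n - k], resid[k:]) / n
    lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    R = acov[lag]                         # zero beyond max_lag
    XtXi = np.linalg.inv(X.T @ X)
    return XtXi @ X.T @ R @ X @ XtXi
```

With positively correlated (colored) residuals, this estimate is substantially larger than the naive white-noise covariance, matching the paper's observation that ignoring residual color makes parameter accuracies look optimistic.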
In-orbit offline estimation of the residual magnetic dipole biases of the POPSAT-HIP1 nanosatellite
NASA Astrophysics Data System (ADS)
Seriani, S.; Brama, Y. L.; Gallina, P.; Manzoni, G.
2016-05-01
The nanosatellite POPSAT-HIP1 is a Cubesat-class spacecraft launched on the 19th of June 2014 to test cold-gas based micro-thrusters; it is, as of April 2015, in a low Earth orbit at around 600 km of altitude and is equipped, notably, with a magnetometer. In order to improve the performance of the attitude control of nanosatellites like POPSAT, it is extremely useful to determine the main biases that act on the magnetometer while in orbit, for example those generated by the residual magnetic moment of the satellite itself and those originating from the transmitter. Thus, we present a methodology to perform an in-orbit offline estimation of the magnetometer bias caused by the residual magnetic moment of the satellite (we refer to this as the residual magnetic dipole bias, or RMDB). The method is based on a genetic algorithm coupled with a simplex algorithm, and provides the RMDB vector as output, requiring solely the magnetometer readings. This is exploited to compute the transmitter magnetic dipole bias (TMDB), by comparing the RMDB computed with the transmitter operating and idling. An experimental investigation is carried out by acquiring the magnetometer outputs in different phases of the spacecraft life (stabilized, maneuvering, free tumble). Results show remarkable accuracy, with an RMDB orientation error between 3.6° and 6.2°, and a module error around 7%. TMDB values show similar coherence. Finally, we note some drawbacks of the methodology, as well as some possible improvements, e.g. precise logging of transmitter operations. In general, however, the methodology proves to be quite effective even with sparse and noisy data, and promises to be incisive in the improvement of attitude control systems.
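A rough sketch of the two-stage global-plus-simplex search, assuming a sphere-fitting objective (the bias should make the corrected field magnitude constant) and using SciPy's differential evolution as a stand-in for the paper's genetic stage; the bounds, tolerances, and objective are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def estimate_bias(readings, bounds=((-1, 1),) * 3):
    """Estimate a constant magnetometer bias from (N x 3) readings by
    minimizing the variance of the corrected field magnitude."""
    def cost(b):
        mags = np.linalg.norm(readings - b, axis=1)
        return np.var(mags)               # constant |B| => zero variance

    # Stage 1: population-based global search (GA-like).
    coarse = differential_evolution(cost, bounds, seed=0, maxiter=300,
                                    tol=1e-10)
    # Stage 2: Nelder-Mead simplex refinement from the global candidate.
    fine = minimize(cost, coarse.x, method="Nelder-Mead",
                    options={"xatol": 1e-10, "fatol": 1e-12})
    return fine.x
```

The global stage keeps the simplex from being trapped far from the true bias when the readings sample attitudes unevenly, which is the same division of labor as the paper's genetic-plus-simplex coupling.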
USDA-ARS?s Scientific Manuscript database
Beneficial bacteria have been found in honey stomachs of the honey bee, Apis mellifera; a unique flora that appears to have coevolved with the honey bees. The health of our most important pollinators has come into focus during the last few years, because of yet unexplained conditions and diseases t...
James A. Stevens; Claire A. Montgomery
2002-01-01
In this report, multiresource research is described as it has coevolved with forest policy objectives: from managing for single or dominant uses, to managing for compatible multiple forest uses, to sustaining ecosystem health on the forest. The evolution of analytical methods for multiresource research is traced from impact analysis to multiresource modeling, and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis
2015-11-01
The purpose of this work is to create a generalizable data- and theory-supported capability to better understand and anticipate (with quantifiable uncertainty): 1) how the dynamics of allegiance formations between various groups and society are impacted by active conflict and by third-party interventions and 2) how/why extremist allegiances co-evolve over time due to changing geopolitical, sociocultural, and military conditions.
A Complex Network Analysis of Granular Fabric Evolution in Three-Dimensions
2011-01-01
organized pattern formation (e.g., strain localization), and co-evolution of emergent internal structures (e.g., force cycles and force chains) [15...these networks, particularly recurring patterns or motifs, and understanding how these co-evolve are crucial to the robust characterization and...the lead up to and during failure. Since failure patterns and boundaries of flow in three-dimensional specimens can be quite complicated and difficult
The `I see you' prey-predator signal of Apis cerana is innate
NASA Astrophysics Data System (ADS)
Tan, Ken; Wang, Zhenwei; Chen, Weiweng; Hu, Zongwen; Oldroyd, Benjamin P.
2013-03-01
An `I see you' (ISY) prey-predator signal can co-evolve when such a signal benefits both prey and predator. The prey benefits if, by producing the signal, the predator is likely to break off an attack. The predator benefits if it is informed by the signal that the prey is aware of its presence and can break off what is likely to be an unsuccessful and potentially costly hunt. Because the signal and response co-evolve in two species, the behaviour underlying an ISY signal is expected to have a strong genetic component and cannot be entirely learned. An example of an ISY signal is the `shimmering' behaviour performed by Asian hive bee workers in the presence of their predator Vespa velutina. To test the prediction that bee-hornet signalling is heritable, we let honey bee workers of two species emerge in an incubator so that they had never been exposed to V. velutina. In Apis cerana, the shimmering response developed 48 h post-emergence, was strong after 72 h and increased further over 2 weeks. In contrast, A. mellifera, which has evolved in the absence of Asian hornets, did not produce the shimmering signal. In control tests, A. cerana workers exposed to a non-threatening butterfly did not respond with the shimmering signal.
Snijders, Tom A.B.; Lomi, Alessandro; Torló, Vanina Jasmine
2012-01-01
We propose a new stochastic actor-oriented model for the co-evolution of two-mode and one-mode networks. The model posits that activities of a set of actors, represented in the two-mode network, co-evolve with exchanges and interactions between the actors, as represented in the one-mode network. The model assumes that the actors, not the activities, have agency. The empirical value of the model is demonstrated by examining how employment preferences co-evolve with friendship and advice relations in a group of seventy-five MBA students. The analysis shows that activity in the two-mode network, as expressed by number of employment preferences, is related to activity in the friendship network, as expressed by outdegrees. Further, advice ties between students lead to agreement with respect to employment preferences. In addition, considering the multiplexity of advice and friendship ties yields a better understanding of the dynamics of the advice relation: tendencies to reciprocation and homophily in advice relations are mediated to an important extent by friendship relations. The discussion pays attention to the implications of this study in the broader context of current efforts to model the co-evolutionary dynamics of social networks and individual behavior. PMID:23690653
White, Thomas E; Zeil, Jochen; Kemp, Darrell J; Rodriguez, R; Lenormand, T
2015-01-01
Sensory drive theory contends that signaling systems should evolve to optimize transmission between senders and intended receivers, while minimizing visibility to eavesdroppers where possible. In visual communication systems, the high directionality afforded by iridescent coloration presents underappreciated avenues for mediating this trade-off. This hypothesis predicts functional links between signal design and presentation such that visual conspicuousness is maximized only under ecologically relevant settings and/or to select audiences. We addressed this prediction using Hypolimnas bolina, a butterfly in which males possess ultraviolet markings on their dorsal wing surfaces with a narrow angular reflectance function. Males bearing brighter dorsal markings are increasingly attractive to females, but also likely more conspicuous to predators. Our data indicate that, during courtship (and given the ritualized wingbeat dynamics at these times), males position themselves relative to females in such a way as to simultaneously maximize three components of known or putative signal conspicuousness: brightness, area, and iridescent flash. This suggests that male signal design and display have coevolved for the delivery of an optimally conspicuous signal to courted females. More broadly, these findings imply a potential signaling role for iridescence itself, and pose a novel example for how signal design may coevolve with the behavioral context of display. PMID:25328995
Himler, Anna G; Machado, Carlos A
2009-12-01
Coevolutionary interactions between plants and their associated pollinators and seed dispersers are thought to have promoted the diversification of flowering plants (Raven 1977; Regal 1977; Stebbins 1981). The actual mechanisms by which pollinators could drive species diversification in plants are not fully understood. However, it is thought that pollinator host specialization can influence the evolution of reproductive isolation among plant populations because the pollinator's choice of host is what determines patterns of gene flow in its host plant, and host choice may also have important consequences on pollinator and host fitness (Grant 1949; Bawa 1992). In this issue of Molecular Ecology, Smith et al. (2009) present a very interesting study that addresses how host specialization affects pollinator fitness and patterns of gene flow in a plant host. Several aspects of this study match elements of a seminal mathematical model of plant-pollinator codivergence (Kiester et al. 1984) suggesting that reciprocal selection for matched plant and pollinator reproductive traits may lead to speciation in the host and its pollinator when there is strong host specialization and a pattern of geographic subdivision. Smith et al.'s study represents an important step to fill the gap in our understanding of how reciprocal selection may lead to speciation in coevolved plant-pollinator mutualisms.
Diversity Driven Coexistence: Collective Stability in the Cyclic Competition of Three Species
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Frey, Erwin; Zia, R. K. P.
2015-03-01
The basic physics of collective behavior are often difficult to quantify and understand, particularly when the system is driven out of equilibrium. Many complex systems are usefully described as complex networks, consisting of nodes and links. The nodes specify individual components of the system and the links describe their interactions. When both nodes and links change dynamically, or `co-evolve', as happens in many realistic systems, complex mathematical structures are encountered, posing challenges to our understanding. In this context, we introduce a minimal system of node and link degrees of freedom, co-evolving with stochastic rules. Specifically, we show that diversity of social temperament (intro- or extroversion) can produce collective stable coexistence when three species compete cyclically. It is well-known that when only extroverts exist in a stochastic rock-paper-scissors game, or in a conserved predator-prey, Lotka-Volterra system, extinction occurs at times of O(N), where N is the number of nodes. We find that when both introverts and extroverts exist, where introverts sever social interactions and extroverts create them, collective coexistence prevails in long-living, quasi-stationary states. Work supported by the NSF through Grants DMR-1206839 (KEB) and DMR-1244666 (RKPZ), and by the AFOSR and DARPA through Grant FA9550-12-1-0405 (KEB).
Within-population covariation between sexual reproduction and susceptibility to local parasites.
Gibson, Amanda K; Xu, Julie Y; Lively, Curtis M
2016-09-01
Evolutionary biology has yet to reconcile the ubiquity of sex with its costs relative to asexual reproduction. Here, we test the hypothesis that coevolving parasites maintain sex in their hosts. Specifically, we examined the distributions of sexual reproduction and susceptibility to local parasites within a single population of freshwater snails (Potamopyrgus antipodarum). Susceptibility to local trematode parasites (Microphallus sp.) is a relative measure of the strength of coevolutionary selection in this system. Thus, if coevolving parasites maintain sex, sexual snails should be common where susceptibility is high. We tested this prediction in a mixed population of sexual and asexual snails by measuring the susceptibility of snails from multiple sites in a lake. Consistent with the prediction, the frequency of sexual snails was tightly and positively correlated with susceptibility to local parasites. Strikingly, in just two years, asexual females increased in frequency at sites where susceptibility declined. We also found that the frequency of sexual females covaries more strongly with susceptibility than with the prevalence of Microphallus infection in the field. In linking susceptibility to the frequency of sexual hosts, our results directly implicate spatial variation in coevolutionary selection in driving the geographic mosaic of sex. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
A versatile multi-objective FLUKA optimization using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vlachoudis, Vasilis; Antoniucci, Guido Arnau; Mathot, Serge; Kozlowska, Wioletta Sandra; Vretenar, Maurizio
2017-09-01
Quite often Monte Carlo simulation studies require a multi-phase-space optimization, a complicated task that relies heavily on the operator's experience and judgment. Examples of such calculations are shielding calculations with stringent conditions on cost, residual dose, material properties and available space, or, in the medical field, optimizing the dose delivered to a patient under hadron treatment. The present paper describes our implementation inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective Genetic Algorithm to facilitate the search for the optimum solution.
Query3d: a new method for high-throughput analysis of functional residues in protein structures.
Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela
2005-12-01
The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface.
Query3d: a new method for high-throughput analysis of functional residues in protein structures
Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela
2005-01-01
Background The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Results Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. Conclusion With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface. PMID:16351754
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes; in particular, iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.
Comparison of multihardware parallel implementations for a phase unwrapping algorithm
NASA Astrophysics Data System (ADS)
Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo
2018-04-01
Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are growing in size and, in particular, the availability of and need for processing SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonance scanners in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type scheme. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternated minimizations. This strategy is known as the chessboard (red-black) scheme: red pixels can be updated in parallel within the same iteration because they are mutually independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different multicore architectures, namely CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
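The chessboard scheme described above can be sketched in a few lines. The toy below applies a red-black update to a simple Poisson/Laplace-type relaxation; NumPy's vectorized masking stands in for the paper's GPU and Xeon Phi kernels, and the illustrative cost function here is not the accumulation-of-residual-maps functional:

```python
import numpy as np

def redblack_jacobi_step(phi, rhs, color):
    """One half-iteration: update only pixels of the given checkerboard
    color (0 = 'red', 1 = 'black'). Same-color pixels are mutually
    independent, so each half-step is embarrassingly parallel."""
    new = phi.copy()
    # 4-neighbour average over interior pixels (Dirichlet boundary kept fixed)
    avg = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:])
    ii, jj = np.meshgrid(np.arange(1, phi.shape[0] - 1),
                         np.arange(1, phi.shape[1] - 1), indexing="ij")
    mask = ((ii + jj) % 2) == color
    interior = new[1:-1, 1:-1]
    # rhs holds the scaled source term (h^2 * f / 4); zero gives Laplace
    interior[mask] = (avg - rhs[1:-1, 1:-1])[mask]
    return new

# Toy Laplace problem: relax toward the harmonic interpolant of the boundary.
phi = np.zeros((17, 17))
phi[0, :] = 1.0                       # fixed boundary values on one edge
rhs = np.zeros_like(phi)
for _ in range(500):
    phi = redblack_jacobi_step(phi, rhs, 0)   # red half-step
    phi = redblack_jacobi_step(phi, rhs, 1)   # black half-step (sees new reds)
residual = np.abs(4 * phi[1:-1, 1:-1] - phi[:-2, 1:-1] - phi[2:, 1:-1]
                  - phi[1:-1, :-2] - phi[1:-1, 2:]).max()
```

Because the black half-step reads the freshly updated red values, the scheme keeps Gauss-Seidel-like convergence while every update within a half-step is parallelizable.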
Predictions for Proteins, RNAs and DNAs with the Gaussian Dielectric Function Using DelPhiPKa
Wang, Lin; Li, Lin; Alexov, Emil
2015-01-01
We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and the water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins in the PPD dataset and achieved an overall RMSD of 0.77; in particular, an RMSD of 0.55 was achieved for surface residues and 1.1 for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa's. Investigations showed that predictions for most buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. PMID:26408449
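For orientation, the electrostatic energies computed above connect to pKa through a standard thermodynamic relation: a change in the deprotonation free energy shifts the pKa by ΔΔG/(2.303 RT). The sketch below is that textbook conversion, not DelPhiPKa code:

```python
def pka_shift(ddG_kcal, T=298.15):
    """pKa shift caused by a change in deprotonation free energy
    (kcal/mol): delta_pKa = ddG / (2.303 * R * T), with R expressed
    in kcal/(mol*K). Positive ddG (deprotonation made harder) raises pKa."""
    R = 1.987204e-3           # gas constant, kcal/(mol*K)
    return ddG_kcal / (2.303 * R * T)

# Near room temperature, ~1.36 kcal/mol corresponds to about one pKa unit.
shift = pka_shift(1.36)
```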
Parallel Splash Belief Propagation
2010-08-01
The Parallel Splash Belief Propagation algorithm outperforms synchronous, round-robin, wild-fire (Ranganathan et al., 2007), and residual (Elidan et al., 2006) belief propagation.
Adaptive color demosaicing and false color removal
NASA Astrophysics Data System (ADS)
Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria
2010-04-01
Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides which interpolation technique to apply to each pixel according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are identified and low-pass filtered to eliminate residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved while undesired zipper effects are reduced, improving edge resolution and yielding superior image quality.
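The edge-directed interpolation step can be sketched as a simple gradient test: interpolate the missing green sample along the axis with the smaller gradient, so edges are interpolated along rather than across. This illustrates the idea only and is not the authors' filter bank:

```python
import numpy as np

def interp_green(bayer, i, j):
    """Directional green interpolation at a red/blue site of a Bayer
    mosaic: average the green neighbours along the axis with the
    smaller gradient (a sketch of edge-directed demosaicing)."""
    gh = abs(bayer[i, j - 1] - bayer[i, j + 1])   # horizontal gradient
    gv = abs(bayer[i - 1, j] - bayer[i + 1, j])   # vertical gradient
    if gh < gv:
        return 0.5 * (bayer[i, j - 1] + bayer[i, j + 1])
    elif gv < gh:
        return 0.5 * (bayer[i - 1, j] + bayer[i + 1, j])
    # ambiguous (flat) region: fall back to the 4-neighbour average
    return 0.25 * (bayer[i, j - 1] + bayer[i, j + 1]
                   + bayer[i - 1, j] + bayer[i + 1, j])

# Vertical edge: left columns dark, right columns bright.
bayer = np.array([[10., 10., 90., 90.],
                  [10., 10., 90., 90.],
                  [10., 10., 90., 90.]])
g = interp_green(bayer, 1, 1)   # interpolates along the edge, giving 10
```

A naive 4-neighbour average here would blend in the bright 90-valued pixel and produce the zipper artifact the abstract describes.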
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, incur computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number.
We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
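The core idea, minimizing the residual norm over a small, continuously updated basis wrapped around a black-box fixed-point iteration, can be sketched with an Anderson-type mixing loop. This is a simplified stand-in for Boostconv, not the authors' implementation, and the function and parameter names are hypothetical:

```python
import numpy as np

def accelerated_fixed_point(G, x0, n_iter=100, depth=5):
    """Fixed-point iteration x <- G(x) accelerated by minimizing the
    residual norm over the span of the last `depth` residuals
    (Anderson-type mixing; a sketch of the idea, not Boostconv itself)."""
    x = x0
    X, R = [], []                        # stored iterates and residuals
    for _ in range(n_iter):
        r = G(x) - x                     # residual of the black-box step
        X.append(x); R.append(r)
        X, R = X[-depth:], R[-depth:]    # keep a small rolling basis
        if len(R) > 1:
            # coefficients summing to 1 that minimize || sum_i c_i r_i ||
            Rm = np.column_stack(R)
            A = np.vstack([Rm, 1e3 * np.ones((1, Rm.shape[1]))])
            b = np.concatenate([np.zeros(Rm.shape[0]), [1e3]])
            c, *_ = np.linalg.lstsq(A, b, rcond=None)
            # for linear G, x_i + r_i = G(x_i), so this mixes past images
            x = sum(ci * (xi + ri) for ci, xi, ri in zip(c, X, R))
        else:
            x = x + r                    # plain Picard step to start
    return x, np.linalg.norm(G(x) - x)

# Toy linear fixed point x = M x + f with spectral radius < 1.
rng = np.random.default_rng(0)
M = 0.9 * np.eye(20) + 0.005 * rng.standard_normal((20, 20))
f = rng.standard_normal(20)
G = lambda x: M @ x + f
x, res = accelerated_fixed_point(G, np.zeros(20))
```

As in the abstract, the underlying iteration G is called once per step as a black box; only the cheap least-squares mixing is added on top.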
NASA Astrophysics Data System (ADS)
Xu, Yu-Lin
The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and they are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first order, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. More recently, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges quickly if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. 
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
Algorithm for computing descriptive statistics for very large data sets and the exa-scale era
NASA Astrophysics Data System (ADS)
Beekman, Izaak
2017-11-01
An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that would traditionally be relegated to post-processing, and it can be used to monitor statistical convergence and to estimate the error/residual in the computed quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and by proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation with provably favorable properties. The mean, variance, covariances and arbitrary higher-order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
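The single-pass "delta" updates that SPOCS builds on can be illustrated with the classic Welford recurrence for mean and variance, the simplest member of the family of formulas that Pébay generalizes to covariances and higher moments. This is a sketch of that building block, not the SPOCS implementation:

```python
class OnlineStats:
    """Single-pass (streaming) mean and variance via Welford-style
    delta updates: each sample is folded in once, with no need to
    store the data or make a second pass."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # converging delta update
        self.m2 += delta * (x - self.mean)   # uses the *updated* mean

    @property
    def variance(self):
        """Unbiased sample variance (0.0 until two samples are seen)."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

s = OnlineStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    s.push(x)
# s.mean is now 5.0 and s.variance is 32/7
```

Because the running residual `m2` is available at every step, the same loop can drive the kind of live convergence monitoring the abstract describes.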
Linear and nonlinear trending and prediction for AVHRR time series data
NASA Technical Reports Server (NTRS)
Smid, J.; Volf, P.; Slama, M.; Palus, M.
1995-01-01
The variability of the AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficients and the temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high-frequency residuals of the analyzed data sets are best modeled as an autoregressive process of order 10. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. These algorithms can be particularly useful when calibration data are incomplete or sparse.
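Fitting an autoregressive model to residuals, as in point (4), can be sketched with an ordinary least-squares AR fit. The example uses an illustrative AR(2) series with synthetic data rather than AVHRR calibration residuals:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model
    x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + e[t]."""
    # design matrix: row t holds the p most recent lags, newest first
    X = np.array([x[i:i + p][::-1] for i in range(len(x) - p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthesize a stationary AR(2) series, then recover its coefficients.
rng = np.random.default_rng(0)
true_a = np.array([0.6, 0.3])
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.standard_normal()
a_hat = fit_ar(x, 2)
```

The same call with `p=10` reproduces the order-10 fit described in the abstract, given a residual series in place of the synthetic one.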
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noeel M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
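The contrast between three-term and coupled two-term recurrences is easiest to see in the symmetric analogue: conjugate gradients propagates the residual and the search direction by a coupled pair of two-term updates instead of one three-term recursion. The sketch below shows that structure; it is the standard CG loop, not the QMR look-ahead implementation described in the paper:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=200):
    """Conjugate gradients written as coupled two-term recurrences:
    the residual r and the direction p are each updated from the
    previous (r, p) pair, the robust structure the paper advocates."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p                  # two-term update of the iterate
        r -= alpha * Ap                 # two-term update of the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p       # coupled update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(1)
Q = rng.standard_normal((30, 30))
A = Q @ Q.T + 30 * np.eye(30)           # symmetric positive definite
b = rng.standard_normal(30)
x = cg(A, b)
err = np.linalg.norm(A @ x - b)
```

In finite precision, the quantities in the three-term Lanczos recursion can drift apart; the coupled form keeps r and p consistent with each other at every step, which is the observation motivating the new QMR implementation.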
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
NASA Technical Reports Server (NTRS)
James, Mark Anthony
1999-01-01
A finite element program has been developed to perform quasi-static, elastic-plastic crack growth simulations. The model provides a general framework for mixed-mode I/II elastic-plastic fracture analysis using small strain assumptions and plane stress, plane strain, and axisymmetric finite elements. Cracks are modeled explicitly in the mesh. As the cracks propagate, automatic remeshing algorithms delete the mesh local to the crack tip, extend the crack, and build a new mesh around the new tip. State variable mapping algorithms transfer stresses and displacements from the old mesh to the new mesh. The von Mises material model is implemented in the context of a non-linear Newton solution scheme. The fracture criterion is the critical crack tip opening displacement, and crack direction is predicted by the maximum tensile stress criterion at the crack tip. The implementation can accommodate multiple curving and interacting cracks. An additional fracture algorithm based on nodal release can be used to simulate fracture along a horizontal plane of symmetry. A core of plane strain elements can be used with the nodal release algorithm to simulate the triaxial state of stress near the crack tip. Verification and validation studies compare analysis results with experimental data and published three-dimensional analysis results. Fracture predictions using nodal release for compact tension, middle-crack tension, and multi-site damage test specimens produced accurate results for residual strength and link-up loads. Curving crack predictions using remeshing/mapping were compared with experimental data for an Arcan mixed-mode specimen. Loading angles from 0 degrees to 90 degrees were analyzed. The maximum tensile stress criterion was able to predict the crack direction and path for all loading angles in which the material failed in tension. Residual strength was also accurately predicted for these cases.
Meinhardt, Sarah; Swint-Kruse, Liskin
2008-12-01
In protein families, conserved residues often contribute to a common general function, such as DNA-binding. However, unique attributes for each homolog (e.g. recognition of alternative DNA sequences) must arise from variation in other functionally-important positions. The locations of these "specificity determinant" positions are obscured amongst the background of varied residues that do not make significant contributions to either structure or function. To isolate specificity determinants, a number of bioinformatics algorithms have been developed. When applied to the LacI/GalR family of transcription regulators, several specificity determinants are predicted in the 18 amino acids that link the DNA-binding and regulatory domains. However, results from alternative algorithms are only in partial agreement with each other. Here, we experimentally evaluate these predictions using an engineered repressor comprising the LacI DNA-binding domain, the LacI linker, and the GalR regulatory domain (LLhG). "Wild-type" LLhG has altered DNA specificity and weaker lacO(1) repression compared to LacI or a similar LacI:PurR chimera. Next, predictions of linker specificity determinants were tested, using amino acid substitution and in vivo repression assays to assess functional change. In LLhG, all predicted sites are specificity determinants, as well as three sites not predicted by any algorithm. Strategies are suggested for diminishing the number of false negative predictions. Finally, individual substitutions at LLhG specificity determinants exhibited a broad range of functional changes that are not predicted by bioinformatics algorithms. Results suggest that some variants have altered affinity for DNA, some have altered allosteric response, and some appear to have changed specificity for alternative DNA ligands.
Protein structure prediction with local adjust tabu search algorithm
2014-01-01
Background Protein folding structure prediction is one of the most challenging problems in bioinformatics. Because of the complexity of realistic protein structures, simplified structural models and computational methods must be adopted. The AB off-lattice model is one such simplification, which considers only two classes of amino acids: hydrophobic (A) residues and hydrophilic (B) residues. Results The main work of this paper is to discuss how to optimize the lowest-energy configurations in the 2D and 3D off-lattice models using Fibonacci sequences and real protein sequences. To avoid falling into local minima and to converge faster to the global minimum, we introduce a novel method (SATS) for the protein structure problem, which combines the simulated annealing algorithm and the tabu search algorithm. Various strategies, such as a new encoding strategy, an adaptive neighborhood generation strategy and a local adjustment strategy, are successfully adopted for high-speed search of the optimal conformation corresponding to the lowest energy of a protein sequence. Experimental results show that some of the results obtained by the improved SATS are better than those reported in the previous literature, and we are confident that the lowest-energy folding states for the short Fibonacci sequences have been found. Conclusions Although off-lattice models are not very realistic, they reflect some important characteristics of real proteins. The 3D off-lattice model resembles the native folding structure of a real protein more closely than the 2D model does. In addition, compared with some previous approaches, the proposed hybrid algorithm searches the spatial folding structure of a protein chain more effectively and more quickly. PMID:25474708
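The objective function that SATS minimizes has a compact form. The sketch below implements the standard 2D AB off-lattice energy (Stillinger-type: backbone bending plus species-dependent Lennard-Jones interactions with C = 1 for AA, 1/2 for BB, -1/2 for AB pairs); the paper's variant may differ in detail:

```python
import numpy as np

def ab2d_energy(angles, seq):
    """Energy of a 2D AB off-lattice chain with unit bond lengths.
    angles[i] is the bend angle between bonds i and i+1; seq is a
    string over {'A', 'B'} (hydrophobic / hydrophilic)."""
    n = len(seq)
    # build coordinates from the bend angles (unit bonds)
    coords = [np.zeros(2), np.array([1.0, 0.0])]
    theta = 0.0
    for a in angles:
        theta += a
        coords.append(coords[-1] + np.array([np.cos(theta), np.sin(theta)]))
    coords = np.array(coords)
    # backbone bending term: sum (1 - cos(angle)) / 4
    e_bend = sum((1.0 - np.cos(a)) / 4.0 for a in angles)
    # species-dependent Lennard-Jones term over non-adjacent pairs
    def C(a, b):
        if a == b:
            return 1.0 if a == 'A' else 0.5
        return -0.5
    e_lj = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            r = np.linalg.norm(coords[i] - coords[j])
            e_lj += 4.0 * (r ** -12 - C(seq[i], seq[j]) * r ** -6)
    return e_bend + e_lj

e = ab2d_energy([0.0, 0.0], "ABAB")   # straight 4-residue chain
```

Any search over conformations (simulated annealing, tabu moves, or the hybrid SATS strategy) simply proposes new angle vectors and compares energies returned by a function of this kind.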
Randomized Hough transform filter for echo extraction in DLR
NASA Astrophysics Data System (ADS)
Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You
2016-11-01
The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed along a curve over a long observation time. It is therefore hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals at low SNR. To autonomously extract the valid returns, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data using a histogram method to find the zonal area that contains all possible signals, which removes a large amount of noise. The data is then processed with the RHT algorithm to find the curve on which the signal points are distributed. A new parameter update strategy is introduced into the RHT to obtain the best parameters, and we analyze the values of the parameters in the algorithm. We test our algorithm on 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from the Graz SLR station. For 10 Hz DLR data with a relatively large and similar range gate, we can process the data in real time and extract all the signals autonomously with few false readings. For 100 Hz DLR data with longer observation times, we autonomously post-process DLR data with return rates of 0.9%, 2.7%, 8% and 33% with high reliability. The extracted points contain almost all signals and a low percentage of noise. Additional noise was added to the 10 Hz DLR data to obtain lower return rates; the valid returns can also be well extracted for DLR data with 0.18% and 0.1% return rates.
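The RHT voting idea can be sketched for the simplest curve, a straight line: repeatedly sample a minimal subset of points, solve for the curve parameters directly, and vote in a quantized accumulator. The paper fits a higher-parameter residual curve and uses a different update strategy, so this is an illustration of the mechanism only:

```python
import random
from collections import Counter

def rht_line(points, n_samples=2000, quant=0.05, rng=None):
    """Randomized Hough transform for a line y = m*x + c: pick two
    points at random, solve for (m, c), vote for the quantized
    parameter cell; the most-voted cell wins."""
    rng = rng or random.Random(0)
    votes = Counter()
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if abs(x2 - x1) < 1e-9:
            continue                      # vertical pair: skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        votes[(round(m / quant), round(c / quant))] += 1
    (km, kc), _ = votes.most_common(1)[0]
    return km * quant, kc * quant

# 30 points on y = 2x + 1 buried among 15 uniform noise points.
noise_rng = random.Random(42)
pts = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(30)]
pts += [(noise_rng.uniform(0, 3), noise_rng.uniform(0, 7)) for _ in range(15)]
m, c = rht_line(pts)
```

Because only point pairs drawn entirely from the signal vote consistently for one cell, the accumulator concentrates on the true parameters even at the modest "return rates" the abstract describes.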
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.
Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva
1996-01-01
This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image and warping using approximately known camera and plane parameters is performed in order to compensate ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to motion parameters with the residual disparities using a robust method, and features having large residual disparities are signaled as obstacles. Sensitivity analysis of the procedure is also studied. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification system consists of designing a robust residual generation process and a high-performance decision making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
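Analytical redundancy and the parity-space idea can be illustrated with the smallest possible example: three sensors measuring one scalar state, and a parity matrix spanning the left null space of the measurement matrix, so the residual is insensitive to the true state and responds only to faults. This toy is for orientation and is not the thesis's generalized construction:

```python
import numpy as np

# Three sensors all measure the same scalar state x: y = H x (+ fault)
H = np.ones((3, 1))

# Parity matrix V: rows span the left null space of H (V @ H = 0),
# so residuals r = V @ y do not depend on the true state at all.
V = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
assert np.allclose(V @ H, 0.0)          # redundancy relations hold

x_true = 7.3
y_healthy = H.flatten() * x_true
y_faulty = y_healthy.copy()
y_faulty[1] += 2.0                      # bias fault on sensor 2

r_healthy = V @ y_healthy               # ~0 regardless of x_true
r_faulty = V @ y_faulty                 # nonzero pattern implicates sensor 2
```

The decision process then reduces to testing which fault signature the residual pattern matches, which is what the Bayes sequential formulation above does optimally.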
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-states mapping based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction; however, a residual error remains between the real value and the predicted result. We therefore designed a two-states neural network model that compensates for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We found that most of the simulation cases were satisfied by the two-states mapping based time series prediction model; in particular, small-sample-size time series were predicted more accurately than with the standard MLP model.
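The residual-compensation idea can be sketched with two linear least-squares models standing in for the paper's two neural networks: the second stage is trained on the residual of the first, and the final prediction is their sum. The data and basis functions here are illustrative:

```python
import numpy as np

t = np.arange(200, dtype=float)
y = 0.05 * t + np.sin(0.3 * t)               # trend plus structure

# Stage 1: a deliberately limited model (linear trend) leaves a
# structured residual, like the single network in the paper.
X1 = np.column_stack([t, np.ones_like(t)])
w1, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred1 = X1 @ w1
residual = y - pred1

# Stage 2: a second model is fitted to the residual of the first;
# the final prediction is the sum, compensating the stage-1 error.
X2 = np.column_stack([np.sin(0.3 * t), np.cos(0.3 * t)])
w2, *_ = np.linalg.lstsq(X2, residual, rcond=None)
pred2 = pred1 + X2 @ w2

err1 = np.sqrt(np.mean((y - pred1) ** 2))    # stage-1 RMS error
err2 = np.sqrt(np.mean((y - pred2) ** 2))    # compensated RMS error
```

Swapping the two least-squares fits for back-propagation networks recovers the structure of the two-states model: the second state learns exactly what the first state systematically gets wrong.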
Mei, Longcan; Zhou, Yanping; Zhu, Lizhe; Liu, Changlin; Wu, Zhuo; Wang, Fangkui; Hao, Gefei; Yu, Di; Yuan, Hong; Cui, Yanfang
2018-03-20
A superkine variant of interleukin-2 with six site mutations away from the binding interface, developed using the yeast display technique, has previously been characterized as undergoing a distal structural alteration that is responsible for its super-potency; it provides an elegant case study for gaining insight into how allosteric effects can be exploited to achieve desirable protein functions. By examining the dynamic network and the allosteric pathways related to those mutated residues using various computational approaches, we found that nanosecond-timescale all-atom molecular dynamics simulations can identify the dynamic network as efficiently as an ensemble algorithm. The differentiated pathways for the six core residues form a dynamic network that outlines the area of structural alteration. The results suggest the potential of using affordable computing power to predict the allosteric structure of mutants in knowledge-based mutagenesis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunden, Fanny; Peck, Ariana; Salzman, Julia
Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes.
Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation
NASA Astrophysics Data System (ADS)
Zhuang, Wei
Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting the ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second-derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA was tested on plots of different land cover types and showed improved accuracy in ground detection for the vegetation plots and similar accuracy for the developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by examining the residual signal, which was generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition.
They incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, taken as the centroid of the Gaussian function. By considering signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy in aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
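The ground-peak extraction idea (locate the last significant return in a waveform) can be sketched as follows. This is a minimal stand-in, not FICA's multiscale second-derivative filters: a synthetic two-Gaussian waveform (canopy return at bin 60, ground return at bin 140, both assumed) is smoothed, and the last local maximum above a threshold is taken as the ground peak.

```python
import numpy as np

t = np.arange(0, 200, 1.0)                 # waveform time bins

def g(mu, sig, a):
    # Gaussian return component centered at bin mu.
    return a * np.exp(-0.5 * ((t - mu) / sig) ** 2)

rng = np.random.default_rng(7)
# Synthetic waveform: canopy return (bin 60) + ground return (bin 140) + noise.
waveform = g(60, 8, 1.0) + g(140, 5, 0.5) + 0.01 * rng.standard_normal(len(t))

# Smooth, then find local maxima; the ground return is the last significant peak.
kernel = np.ones(5) / 5
s = np.convolve(waveform, kernel, mode="same")
peaks = [i for i in range(1, len(s) - 1)
         if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > 0.1]
ground_bin = peaks[-1]
```

The tree height would then follow as the difference between the first detected return and `ground_bin`, as described in the abstract.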
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
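The WRF-minimization scheme can be sketched on a toy BVP. The block below is an illustrative reduction, not the paper's method: it uses a truncated sine series (which satisfies the boundary conditions by construction, standing in for the constrained Fourier expansion) and stochastic hill climbing as a cheap stand-in for PSO, the water cycle algorithm, or harmony search; the test problem y'' = -π² sin(πx), y(0) = y(1) = 0, with exact solution sin(πx), is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.01, 0.99, 40)          # collocation points
K = 3                                      # number of sine-series terms

def wrf(a):
    # Weighted residual function: squared BVP residual summed over collocation
    # points. Trial solution y(x) = sum_k a_k sin((k+1) pi x) meets the BCs.
    ypp = sum(-a[k] * ((k + 1) * np.pi) ** 2 * np.sin((k + 1) * np.pi * xs)
              for k in range(K))
    f = -np.pi ** 2 * np.sin(np.pi * xs)
    return np.sum((ypp - f) ** 2)

# Minimal metaheuristic: stochastic hill climbing over the coefficients.
a = np.zeros(K)
best = wrf(a)
start = best
for _ in range(3000):
    trial = a + 0.05 * rng.standard_normal(K)
    v = wrf(trial)
    if v < best:
        a, best = trial, v
```

The minimizer should drive the leading coefficient toward 1 (the exact solution) while the WRF drops by orders of magnitude.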
Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Kiely, Aaron B.; Klimesh, Matthew A.
2010-01-01
A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
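The two-stage structure (predict, then entropy-code the residuals) can be sketched in miniature. This is not the described compressor: it uses a previous-sample predictor on an assumed 1-D sample stream, and it only tabulates residual counts where the real algorithm combines them with an adaptively updated weight function and arithmetic coding. It does, however, show why the scheme is lossless: the decoder inverts the causal predictor exactly.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
# Hypothetical calibrated samples: a smooth signal, as in a single image row.
img = (np.cumsum(rng.integers(-3, 4, size=256)) + 1000).astype(np.int64)

def encode(samples):
    # Stage 1: previous-sample linear predictor; transmit prediction residuals.
    residuals = samples.copy()
    residuals[1:] = samples[1:] - samples[:-1]
    return residuals

def decode(residuals):
    # The predictor is causal, so cumulative summation inverts it exactly.
    return np.cumsum(residuals)

resid = encode(img)
restored = decode(resid)

# Stage 2 (sketched): tabulate residual counts; an entropy coder would use
# these counts to assign short codes to frequent residual values.
counts = Counter(resid[1:].tolist())
```

The residual alphabet is far smaller than the sample alphabet, which is what makes the subsequent entropy-coding stage effective.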
Text extraction via an edge-bounded averaging and a parametric character model
NASA Astrophysics Data System (ADS)
Fan, Jian
2003-01-01
We present a deterministic text extraction algorithm that relies on three basic assumptions: color/luminance uniformity of the interior region, closed boundaries of sharp edges and the consistency of local contrast. The algorithm is basically independent of the character alphabet, text layout, font size and orientation. The heart of this algorithm is an edge-bounded averaging for the classification of smooth regions that enhances robustness against noise without sacrificing boundary accuracy. We have also developed a verification process to clean up the residue of incoherent segmentation. Our framework provides a symmetric treatment for both regular and inverse text. We have proposed three heuristics for identifying the type of text from a cluster consisting of two types of pixel aggregates. Finally, we have demonstrated the advantages of the proposed algorithm over adaptive thresholding and block-based clustering methods in terms of boundary accuracy, segmentation coherency, and capability to identify inverse text and separate characters from background patches.
Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas
2017-03-07
Computational and nuclear magnetic resonance hybrid approaches provide efficient tools for 3D structure determination of small proteins, but currently available algorithms struggle to perform with larger proteins. Here we demonstrate a new computational algorithm that assembles the 3D structure of a protein from its constituent super-secondary structural motifs (Smotifs) with the help of pseudocontact shift (PCS) restraints for backbone amide protons, where the PCSs are produced from different metal centers. The algorithm, DINGO-PCS (3D assembly of Individual Smotifs to Near-native Geometry as Orchestrated by PCSs), employs the PCSs to recognize, orient, and assemble the constituent Smotifs of the target protein without any other experimental data or computational force fields. Using a universal Smotif database, the DINGO-PCS algorithm exhaustively enumerates any given Smotif. We benchmarked the program against ten different protein targets ranging from 100 to 220 residues with different topologies. For nine of these targets, the method was able to identify near-native Smotifs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-01-01
Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show that MOCADMA maintains sensing coverage, achieves higher network coverage while improving energy efficiency and effectively prolonging the network lifetime, and offers a significant improvement over some existing algorithms. PMID:25360579
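The memetic structure (evolutionary search plus a local search step applied to each offspring) can be sketched for a toy coverage problem. Everything below is assumed for illustration: random sensor positions, a fixed sensing radius, and a scalarized fitness (coverage minus an energy penalty) in place of MOCADMA's true multi-objective formulation and matrix chromosomes.

```python
import numpy as np

rng = np.random.default_rng(2)
sensors = rng.random((20, 2))              # hypothetical sensor positions
grid = np.stack(np.meshgrid(np.linspace(0, 1, 15),
                            np.linspace(0, 1, 15)), -1).reshape(-1, 2)
R = 0.25                                    # assumed sensing radius
covers = np.linalg.norm(grid[:, None] - sensors[None], axis=2) <= R

def fitness(mask):
    # Scalarized objective: maximize coverage, penalize active nodes (energy).
    coverage = covers[:, mask].any(axis=1).mean()
    return coverage - 0.02 * mask.sum()

def local_search(mask):
    # Memetic step: greedy single-bit flips until no further improvement.
    improved = True
    while improved:
        improved = False
        for i in range(len(mask)):
            trial = mask.copy(); trial[i] ^= True
            if fitness(trial) > fitness(mask):
                mask, improved = trial, True
    return mask

pop = rng.random((12, 20)) < 0.5            # chromosomes: active-sensor masks
base = max(fitness(m) for m in pop)
for _ in range(10):                          # evolution loop with elitism
    parents = sorted(pop, key=fitness, reverse=True)[:6]
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = rng.integers(1, 19)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(20) < 0.05         # mutation
        children.append(local_search(child ^ flip))
    pop = np.array(parents + children)
best = max(fitness(m) for m in pop)
```

Because the top parents survive each generation (elitism), the best fitness is non-decreasing over the run.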
Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks.
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-10-30
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). However, repeated scanning of the same region potentially delivers a relatively large radiation dose to the patient. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed MPD-AwTTV, to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with conventional total variation (TV) regularization, which mitigates the drawbacks of TV regularization. An effective iterative algorithm was then adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the MPD-AwTTV algorithm outperforms existing deconvolution algorithms in terms of noise-induced artifact suppression, edge-detail preservation and accurate MPHM estimation.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
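The core nKs idea (use successive residuals, without orthonormalization, as the subspace basis, and solve a small projected problem) can be sketched for a small linear system. The matrix, sizes, and the least-squares projection below are illustrative assumptions, not the authors' electronic-structure implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
A = np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                        # symmetric, well-conditioned test matrix
b = rng.standard_normal(n)

# Nonorthonormal Krylov iteration: raw (normalized) residuals form the basis.
V = [b / np.linalg.norm(b)]
x = np.zeros(n)
norms = []
for _ in range(25):
    B = np.column_stack(V)
    AB = A @ B
    # Small projected least-squares problem on the nonorthonormal basis;
    # lstsq absorbs the basis's lack of orthonormality.
    y, *_ = np.linalg.lstsq(AB, b, rcond=None)
    x = B @ y
    r = b - A @ x
    norms.append(np.linalg.norm(r))
    V.append(r / np.linalg.norm(r))        # append residual, no orthogonalization
```

Since each iteration minimizes over a strictly larger subspace, the residual norm is non-increasing, mirroring the cost savings the abstract describes as the residual shrinks.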
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Banno, Masaki; Komiyama, Yusuke; Cao, Wei; Oku, Yuya; Ueki, Kokoro; Sumikoshi, Kazuya; Nakamura, Shugo; Terada, Tohru; Shimizu, Kentaro
2017-02-01
Several methods have been proposed for protein-sugar binding site prediction using machine learning algorithms. However, they are not effective at learning the varied properties of binding site residues caused by the various interactions between proteins and sugars. In this study, we classified sugars into acidic and nonacidic sugars and showed that their binding sites have different amino acid occurrence frequencies. Using this result, we developed sugar-binding residue predictors dedicated to the two classes of sugars: an acidic sugar binding predictor and a nonacidic sugar binding predictor. We also developed a combination predictor which combines the results of the two predictors. We showed that when a sugar is known to be an acidic sugar, the acidic sugar binding predictor achieves the best performance, and that when a sugar is known to be a nonacidic sugar, or is not known to be either of the two classes, the combination predictor achieves the best performance. Our method uses only amino acid sequences for prediction. A support vector machine was used as the machine learning algorithm, and the position-specific scoring matrix created by the position-specific iterative basic local alignment search tool (PSI-BLAST) was used as the feature vector. We evaluated the performance of the predictors using five-fold cross-validation. We have released our system as an open-source freeware tool on the GitHub repository (https://doi.org/10.5281/zenodo.61513). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Optimal Location through Distributed Algorithm to Avoid Energy Hole in Mobile Sink WSNs
Qing-hua, Li; Wei-hua, Gui; Zhi-gang, Chen
2014-01-01
In a multihop data-collection sensor network, nodes near the sink must relay data from remote nodes and thus have a much faster energy dissipation rate, suffering premature death. This phenomenon creates an energy hole near the sink, seriously damaging network performance. In this paper, we first compute through theoretical analysis the energy consumption of each node when the sink is placed at any point in the network; we then propose an online distributed algorithm that adaptively adjusts the sink position based on the actual energy consumption of each node to achieve the actual maximum lifetime. Theoretical analysis and experimental results show that the proposed algorithms significantly improve the lifetime of the wireless sensor network: the residual energy left in the network at its death is lowered by more than 30%, and the cost of moving the sink is relatively small. PMID:24895668
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple-filter-based techniques and residual-based methods, including the multiple-model and generalized likelihood ratio methods, are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
Residuals-Based Subgraph Detection with Cue Vertices
2015-11-30
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all residual effects of the source (e.g. stress drops) that are not accounted for by the magnitude term, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and associated variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, but also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
NASA Technical Reports Server (NTRS)
Jentink, Thomas Neil; Usab, William J., Jr.
1990-01-01
An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations, with special consideration given to the coarse-mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
Pseudo-time algorithms for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1986-01-01
A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that, for a simple heat equation, this is just a renormalization of the time. For a convection-diffusion equation the renormalization depends only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
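The pseudo-time idea (march an explicit multi-stage scheme until the steady residual vanishes) can be sketched on a 1-D model problem. The block below is a toy, not the Navier-Stokes solver: it drives u'' + 1 = 0 with homogeneous boundary conditions to steady state using four-stage Jameson-style coefficients (assumed) and checks against the exact solution x(1-x)/2.

```python
import numpy as np

n = 41
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.zeros(n)                      # initial guess
dt = 0.4 * dx * dx                   # pseudo-time step within explicit stability
alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)  # assumed four-stage coefficients

def residual(v):
    # Steady-state residual of u'' + 1 = 0 with u(0) = u(1) = 0 (interior only).
    r = np.zeros_like(v)
    r[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2 + 1.0
    return r

# Pseudo-time marching: the "time" has no physical meaning; only the converged
# steady state matters, so dt can be chosen purely for stability.
for _ in range(6000):
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt * residual(u)

exact = 0.5 * x * (1.0 - x)
err = np.max(np.abs(u - exact))
res_norm = np.max(np.abs(residual(u)))
```

At convergence the pseudo-time derivative, and hence the discrete residual, is driven to zero, leaving the steady solution.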
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
Aiming at the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network is proposed. First, we label objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The network is then trained with the stochastic gradient descent algorithm to learn the target image characteristics. Finally, the well-trained model is used to identify an outdoor railway image; if it includes trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the problem of sampling rate discrepancy. During inter-sampling of slow observation data, observation noise can be regarded as infinite. The Kalman gain is unknown and approaches zero, and the residual is also unknown, so the filter estimated state cannot be compensated. To obtain compensation at these moments, the state error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation of the state error is established to propagate the quantity from the moments with observation to the moments without observation. In addition, a multiplicative adjustment factor acting on the residual is introduced in place of the Kalman gain. The filter estimated state can then be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. Compared with multi-rate CKF without residual compensation and single-rate CKF, a significant improvement in attitude measurement is obtained by using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods, such as maximum likelihood, subspace, and observer Kalman identification, require extensive offline processing and are not suitable for real-time processing. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise, and equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
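The residual-tuning idea can be sketched for a scalar filter. The block below is a minimal illustration, not Jazwinski's sequential estimator: for a well-tuned filter the innovation variance should match P + R, so a windowed innovation variance is used here to correct a deliberately mistuned measurement-noise variance R; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
true_R = 4.0                          # actual measurement-noise variance
z = 10.0 + np.sqrt(true_R) * rng.standard_normal(600)  # measurements of a constant

x, P = 0.0, 100.0                     # initial state estimate and covariance
R = 0.5                               # deliberately mistuned measurement noise
innovations = []
for k, zk in enumerate(z):
    nu = zk - x                       # measurement residual (innovation)
    S = P + R
    K = P / S                         # scalar Kalman gain
    x = x + K * nu
    P = (1.0 - K) * P
    innovations.append(nu)
    # Residual tuning: for a well-tuned filter E[nu^2] ~ P + R, so a windowed
    # innovation variance that disagrees signals a mistuned R.
    if k >= 300:
        window = np.asarray(innovations[-200:])
        R = max(float(np.mean(window ** 2)) - P, 1e-3)
```

These tuning updates run in parallel with the filter itself, as the abstract describes, and pull R toward its true value.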
Tawfic, Israa Shaker; Kayhan, Sema Koc
2017-02-01
Compressed sensing (CS) is a new field for signal acquisition and sensor design that has greatly reduced the cost of acquiring sparse signals. In this paper, new algorithms are developed to improve the performance of greedy algorithms: a new greedy pursuit algorithm, SS-MSMP (Split Signal for Multiple Support of Matching Pursuit), is introduced and theoretical analyses are given. SS-MSMP is suggested for sparse data acquisition, in order to reconstruct analog signals efficiently from a small set of general measurements. The proposed fast method studies the behavior of the support indices by picking the best estimate of the correlation between the residual and the measurement matrix. The term multiple support refers to the fact that, in each iteration, the best support indices are picked based on the maximum quality found for a particular support length. The new algorithm builds upon the halting condition we previously derived for Least Support Orthogonal Matching Pursuit (LS-OMP) for clean and noisy signals. For better reconstruction results, the SS-MSMP algorithm recovers the support set of long signals such as those used in WBANs. Numerical experiments demonstrate that the new algorithm performs well compared to existing algorithms in terms of many factors used to assess reconstruction performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
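Since the SS-MSMP internals are only summarized above, the block below sketches the standard greedy-pursuit skeleton it extends (plain OMP): at each iteration the column most correlated with the residual joins the support, followed by a least-squares re-fit on the chosen support. The problem sizes and the sparse signal are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 60, 100, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
x = np.zeros(n)
support_true = rng.choice(n, size=k, replace=False)
x[support_true] = rng.choice([-1.0, 1.0], size=k) * (2.0 + rng.random(k))
y = Phi @ x                                      # compressed measurements

# Greedy pursuit: grow the support using residual/dictionary correlations.
support = []
r = y.copy()
for _ in range(k):
    idx = int(np.argmax(np.abs(Phi.T @ r)))      # best-correlated column
    support.append(idx)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef               # update the residual

x_hat = np.zeros(n)
x_hat[support] = coef
```

A halting condition on the residual norm (as in LS-OMP) would replace the fixed iteration count when the sparsity k is unknown.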
pKa predictions for proteins, RNAs, and DNAs with the Gaussian dielectric function using DelPhi pKa.
Wang, Lin; Li, Lin; Alexov, Emil
2015-12-01
We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins from the PPD dataset and achieved an overall RMSD of 0.77. In particular, an RMSD of 0.55 was achieved for surface residues and 1.1 for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa values. Investigations showed that predictions for most buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. © 2015 Wiley Periodicals, Inc.
Oran, Omer Faruk; Ider, Yusuf Ziya
2012-08-21
Most algorithms for magnetic resonance electrical impedance tomography (MREIT) concentrate on reconstructing the internal conductivity distribution of a conductive object from the Laplacian of only one component of the magnetic flux density (∇²B(z)) generated by the internal current distribution. In this study, a new algorithm is proposed to solve this ∇²B(z)-based MREIT problem which is mathematically formulated as the steady-state scalar pure convection equation. Numerical methods developed for the solution of the more general convection-diffusion equation are utilized. It is known that the solution of the pure convection equation is numerically unstable if sharp variations of the field variable (in this case conductivity) exist or if there are inconsistent boundary conditions. Various stabilization techniques, based on introducing artificial diffusion, are developed to handle such cases and in this study the streamline upwind Petrov-Galerkin (SUPG) stabilization method is incorporated into the Galerkin weighted residual finite element method (FEM) to numerically solve the MREIT problem. The proposed algorithm is tested with simulated and also experimental data from phantoms. Successful conductivity reconstructions are obtained by solving the related convection equation using the Galerkin weighted residual FEM when there are no sharp variations in the actual conductivity distribution. However, when there is noise in the magnetic flux density data or when there are sharp variations in conductivity, it is found that SUPG stabilization is beneficial.
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
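A minimal sketch of the weighted least-squares formulation, solved here by iterative Gauss-Newton rather than the paper's direct quartic TAQMV algebra (the sensor layout, noise levels, and spherical altitude constraint are illustrative assumptions):

```python
import numpy as np

def solve_toa(sensors, toas, alt_radius, sig_toa=1e-3, sig_alt=1e-2,
              x0=(0.5, 0.5, 0.5, 0.0), iters=25):
    """Gauss-Newton for position (x, y, z) and emission time t from TOA
    measurements, with the assumed altitude as one more weighted residual."""
    x = np.array(x0, float)
    for _ in range(iters):
        p, t = x[:3], x[3]
        d = np.linalg.norm(sensors - p, axis=1)              # ranges to trial position
        r_alt = (np.linalg.norm(p) - alt_radius) / sig_alt   # soft altitude constraint
        r = np.concatenate([(d + t - toas) / sig_toa, [r_alt]])
        J = np.zeros((len(toas) + 1, 4))
        J[:-1, :3] = (p - sensors) / d[:, None] / sig_toa    # d(range)/d(position)
        J[:-1, 3] = 1.0 / sig_toa
        J[-1, :3] = p / np.linalg.norm(p) / sig_alt
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]         # Gauss-Newton step
    return x
```

With noise-free TOAs and a reasonable starting point, the iteration recovers the true position and time; the direct method described in the abstract avoids the need for such an initial guess.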
NASA Astrophysics Data System (ADS)
Chase, A. P.; Boss, E.; Cetinić, I.; Slade, W.
2017-12-01
Phytoplankton community composition in the ocean is complex and highly variable over a wide range of space and time scales. Remote-sensing reflectance spectra, which can be measured both by satellite and by in situ radiometers, are able to cover these scales. The spectral shape of reflectance in the open ocean is influenced by the particles in the water, mainly phytoplankton and covarying nonalgal particles. We investigate the utility of in situ hyperspectral remote-sensing reflectance measurements to detect phytoplankton pigments by using an inversion algorithm that defines phytoplankton pigment absorption as a sum of Gaussian functions. The inverted amplitudes of the Gaussian functions representing pigment absorption are compared to coincident High Performance Liquid Chromatography measurements of pigment concentration. We found strong predictive capability for chlorophylls a, b, c1+c2, and the photoprotective carotenoids. We also tested the estimation of pigment concentrations from reflectance-derived chlorophyll a using global relationships of covariation between chlorophyll a and the accessory pigments. We found similar errors in pigment estimation based on the relationships of covariation versus the inversion algorithm. An investigation of spectral residuals in reflectance data after removal of chlorophyll-based average absorption spectra showed no strong relationship between spectral residuals and pigments. Ultimately, we are able to estimate concentrations of three chlorophylls and the photoprotective carotenoid pigments, noting that further work is necessary to address the challenge of extracting information from hyperspectral reflectance beyond the information that can be determined from chlorophyll a and its covariation with other pigments.
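The core inversion idea can be sketched as follows. If the band centers and widths are held fixed (an assumption made here for illustration; the authors' parameterization may differ), the Gaussian amplitudes follow from linear least squares:

```python
import numpy as np

def gaussian_basis(wavelengths, centers, width):
    """Matrix whose columns are Gaussian absorption bands at the given centers."""
    wl = np.asarray(wavelengths, float)[:, None]
    return np.exp(-0.5 * ((wl - np.asarray(centers, float)) / width) ** 2)

def invert_amplitudes(wavelengths, spectrum, centers, width):
    """Recover band amplitudes from an absorption spectrum by linear least squares."""
    G = gaussian_basis(wavelengths, centers, width)
    amps, *_ = np.linalg.lstsq(G, spectrum, rcond=None)
    return amps

wl = np.arange(400, 701, 5)             # nm
centers = [435.0, 490.0, 675.0]         # illustrative band positions only
true_amps = np.array([0.05, 0.02, 0.03])
spectrum = gaussian_basis(wl, centers, 20.0) @ true_amps
```

For a noiseless synthetic spectrum the amplitudes are recovered exactly; with real reflectance data, the inversion must contend with noise and with the nonalgal contributions mentioned above.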
SEARCHING FOR THE HR 8799 DEBRIS DISK WITH HST /STIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerard, B.; Marois, C.; Tannock, M.
We present a new algorithm for space telescope high contrast imaging of close-to-face-on planetary disks called Optimized Spatially Filtered (OSFi) normalization. This algorithm is used on HR 8799 Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) coronagraphic archival data, showing an over-luminosity after reference star point-spread function (PSF) subtraction that may be from the inner disk and/or planetesimal belt components of this system. The PSF-subtracted radial profiles in two separate epochs from 2011 and 2012 are consistent with one another, and self-subtraction shows no residual in both epochs. We explore a number of possible false-positive scenarios that could explain this residual flux, including telescope breathing, spectral differences between HR 8799 and the reference star, imaging of the known warm inner disk component, OSFi algorithm throughput and consistency with the standard spider normalization HST PSF subtraction technique, and coronagraph misalignment from pointing accuracy. In comparison to another similar STIS data set, we find that the over-luminosity is likely a result of telescope breathing and spectral difference between HR 8799 and the reference star. Thus, assuming a non-detection, we derive upper limits on the HR 8799 dust belt mass in small grains. In this scenario, we find that the flux of these micron-sized dust grains leaving the system due to radiation pressure is small enough to be consistent with measurements of other debris disk halos.
Objective identification of residue ranges for the superposition of protein structures
2011-01-01
Background The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely-used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible, stopping where the inclusion of further residues would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions.
Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures. PMID:21592348
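The refinement step described above can be caricatured as a greedy exclusion loop (a sketch only: the real CYRANGE algorithm first clusters residues into domains via the distance variance matrix and re-superimposes the bundle as it goes, both of which are omitted here):

```python
import numpy as np

def bundle_rmsd(coords, keep):
    """RMSD-like spread of the kept residues about their per-residue mean.
    coords has shape (models, residues, 3)."""
    sub = coords[:, sorted(keep), :]
    dev = sub - sub.mean(axis=0, keepdims=True)
    return np.sqrt((dev ** 2).sum(axis=2).mean())

def refine_ranges(coords, min_rel_decrease=0.05):
    """Drop the residue whose removal lowers the bundle RMSD most, until the
    relative decrease becomes insignificant."""
    keep = set(range(coords.shape[1]))
    while len(keep) > 2:
        current = bundle_rmsd(coords, keep)
        best = min(keep, key=lambda r: bundle_rmsd(coords, keep - {r}))
        new = bundle_rmsd(coords, keep - {best})
        if current == 0 or (current - new) / current < min_rel_decrease:
            break
        keep.remove(best)
    return sorted(keep)
```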
A phylogenetic Kalman filter for ancestral trait reconstruction using molecular data.
Lartillot, Nicolas
2014-02-15
Correlation between life history or ecological traits and genomic features such as nucleotide or amino acid composition can be used for reconstructing the evolutionary history of the traits of interest along phylogenies. Thus far, however, such ancestral reconstructions have been done using simple linear regression approaches that do not account for phylogenetic inertia. These reconstructions could instead be seen as a genuine comparative regression problem, such as formalized by classical generalized least-square comparative methods, in which the trait of interest and the molecular predictor are represented as correlated Brownian characters coevolving along the phylogeny. Here, a Bayesian sampler is introduced, representing an alternative and more efficient algorithmic solution to this comparative regression problem, compared with currently existing generalized least-square approaches. Technically, ancestral trait reconstruction based on a molecular predictor is shown to be formally equivalent to a phylogenetic Kalman filter problem, for which backward and forward recursions are developed and implemented in the context of a Markov chain Monte Carlo sampler. The comparative regression method results in more accurate reconstructions and a more faithful representation of uncertainty, compared with simple linear regression. Application to the reconstruction of the evolution of optimal growth temperature in Archaea, using GC composition in ribosomal RNA stems and amino acid composition of a sample of protein-coding genes, confirms previous findings, in particular, pointing to a hyperthermophilic ancestor for the kingdom. The program is freely available at www.phylobayes.org.
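The Kalman filtering machinery can be illustrated in a drastically simplified setting: a single lineage rather than a tree, with one Gaussian observation of the Brownian trait per node (the scalar setup and parameter names below are assumptions for illustration, not the paper's phylogenetic recursions):

```python
def kalman_forward(obs, branch_lengths, sigma2_bm, sigma2_obs, mu0, var0):
    """Forward Kalman recursion for a Brownian trait observed with Gaussian noise
    along a chain of nodes separated by the given branch lengths."""
    mean, var = mu0, var0
    means = []
    for y, t in zip(obs, branch_lengths):
        var = var + sigma2_bm * t          # predict: Brownian variance grows with branch length
        gain = var / (var + sigma2_obs)    # Kalman gain
        mean = mean + gain * (y - mean)    # update toward the (molecular) observation
        var = (1.0 - gain) * var
        means.append(mean)
    return means
```

With a very informative observation the filtered mean tracks the data; with a very noisy one it stays near the prior, which is the qualitative behavior that lets the comparative method weigh molecular predictors against phylogenetic inertia.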
Calculation algorithms for breath-by-breath alveolar gas exchange: the unknowns!
Golja, Petra; Cettolo, Valentina; Francescato, Maria Pia
2018-06-25
Several papers (algorithm papers) describe computational algorithms that assess alveolar breath-by-breath gas exchange by accounting for changes in lung gas stores. It is unclear, however, if the effects of the latter are actually considered in the literature. We evaluated dissemination of algorithm papers and the relevant provided information. The list of documents investigating exercise transients (in 1998-2017) was extracted from the Scopus database. Documents citing the algorithm papers in the same period were analyzed in full text to check consistency of the relevant information provided. Less than 8% (121/1522) of documents dealing with exercise transients cited at least one algorithm paper; the paper of Beaver et al. (J Appl Physiol 51:1662-1675, 1981) was cited most often, with others being cited tenfold less. Among the documents citing the algorithm paper of Beaver et al. (J Appl Physiol 51:1662-1675, 1981) (N = 251), only 176 cited it for the application of their algorithm/s; in turn, 61% (107/176) of them stated that alveolar breath-by-breath gas exchange was measured, but only 1% (1/107) of the latter also reported the assessment of volunteers' functional residual capacity, a crucial parameter for the application of the algorithm. Information related to gas exchange was provided consistently in the methods and in the results in 1 of the 107 documents. Dissemination of algorithm papers in the literature investigating exercise transients is far narrower than expected. The information provided about the actual application of gas exchange algorithms is often inadequate and/or ambiguous. Some guidelines are provided that can help to improve the quality of future publications in the field.
Susan L. Brantley; William H. McDowell; William E. Dietrich; Timothy S. White; Praveen Kumar; Suzanne P. Anderson; Jon Chorover; Kathleen Ann Lohse; Roger C. Bales; Daniel D. Richter; Gordon Grant; Jérôme Gaillardet
2017-01-01
The critical zone (CZ), the dynamic living skin of the Earth, extends from the top of the vegetative canopy through the soil and down to fresh bedrock and the bottom of the groundwater. All humans live in and depend on the CZ. This zone has three co-evolving surfaces: the top of the vegetative canopy, the ground surface, and a deep subsurface below which Earth's...
Coupling of link- and node-ordering in the coevolving voter model.
Toruniewska, J; Kułakowski, K; Suchecki, K; Hołyst, J A
2017-10-01
We consider the process of reaching the final state in the coevolving voter model. The model couples state dynamics, in which a node copies the state of a random neighbor with probability 1-p, with link dynamics, in which a node rewires its link to another node of the same state with probability p. The model exhibits an absorbing transition to a frozen phase above a critical value of the rewiring probability. Our analytical and numerical studies show that in the active phase the mean values of the magnetization of nodes n and links m tend to the same value, which depends on the initial conditions. In a similar way, the mean degrees of spins up and spins down become equal. The system obeys a special statistical conservation law, since a linear combination of both types of magnetization averaged over many realizations starting from the same initial conditions is a constant of motion: Λ≡(1-p)μm(t)+pn(t)=const., where μ is the mean node degree. The final mean magnetization of nodes and links in the active phase is proportional to Λ, while the final density of active links is a quadratic function of Λ. If the rewiring probability is above the critical value and the system separates into disconnected domains, then the magnetizations of nodes and links are not the same, and the final mean degrees of spins up and spins down can be different.
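A minimal simulation sketch of these dynamics (edge-based updating and the small-graph bookkeeping are simplifications of the model as stated, for illustration only):

```python
import random

def coevolving_voter(n, edges, p, steps, seed=0, init=None):
    """Toy coevolving voter model: on an active link, rewire to a same-state
    node with probability p, otherwise copy the neighbor's state."""
    rng = random.Random(seed)
    state = list(init) if init is not None else [rng.choice([-1, 1]) for _ in range(n)]
    edges = [list(e) for e in edges]
    for _ in range(steps):
        idx = rng.randrange(len(edges))
        i, j = edges[idx]
        if state[i] == state[j]:
            continue                                  # link inactive, nothing happens
        if rng.random() < p:
            # rewire: attach i to a random node sharing i's state (duplicates allowed in this sketch)
            same = [k for k in range(n) if k != i and state[k] == state[i]]
            if same:
                edges[idx] = [i, rng.choice(same)]
        else:
            state[i] = state[j]                       # i copies j's state
    active = sum(state[a] != state[b] for a, b in edges)
    return state, edges, active
```

At p = 1 the states never change and the network freezes by rewiring alone, illustrating the absorbing frozen phase above the critical rewiring probability.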
Le Roux, Johannes J; Hui, Cang; Keet, Jan-Hendrik; Ellis, Allan G
2017-09-01
SUMMARY: Interactions between non-native plants and their mutualists are often disrupted upon introduction to new environments. Using legume-rhizobium mutualistic interactions as an example, we discuss two pathways that can influence symbiotic associations in such situations: co-introduction of coevolved rhizobia; and utilization of, and adaptation to, resident rhizobia, hereafter referred to as 'ecological fitting'. Co-introduction and ecological fitting have distinct implications for successful legume invasions and their impacts. Under ecological fitting, initial impacts may be less severe and will accrue over longer periods as novel symbiotic associations and/or adaptations may require fine-tuning over time. Co-introduction will have more profound impacts that will accrue more rapidly as a result of positive feedbacks between densities of non-native rhizobia and their coevolved host plants, in turn enhancing competition between native and non-native rhizobia. Co-introduction can further impact invasion outcomes by the exchange of genetic material between native and non-native rhizobia, potentially resulting in decreased fitness of native legumes. A better understanding of the roles of these two pathways in the invasion dynamics of non-native legumes is much needed, and we highlight some of the exciting research avenues it presents. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Regenerative metamorphosis in hairs and feathers: follicle as a programmable biological printer
Oh, Ji Won; Lin, Sung-Jan; Plikus, Maksim V.
2015-01-01
Present-day hairs and feathers are marvels of biological engineering perfected over 200 million years of convergent evolution. Prominently, both follicle types coevolved regenerative cycling, wherein active filament making (anagen) is intermitted by a phase of relative quiescence (telogen). Such regenerative cycling enables follicles to “reload” their morphogenetic program and make qualitatively different filaments in the consecutive cycles. Indeed, many species of mammals and birds undergo regenerative metamorphosis, prominently changing their integument between juvenile and adult forms. This phenomenon is inconspicuous in mice, which led to the conventional perception that hair type is hardwired during follicle morphogenesis and cannot switch. A series of recent works by Chi and Morgan change this perception, and show that many mouse follicles naturally switch hair morphologies, for instance from “wavy” zigzag to straight awl, in the second growth cycle. A series of observations and genetic experiments show that back and forth hair type switching depends on the number of cells in the follicle's dermal papilla, with the critical threshold being around 40-50 cells. Pigmentation is another parameter that hair and feather follicles can reload between cycles, and even midway through anagen. Recent works show that hair and feather pigmentation “printing” programs coevolved to rely on pulsed expression of Agouti, a melanocortin receptor-1 antagonist, in the follicular mesenchyme. Here, we discuss broader implications of hair and feather regenerative plasticity. PMID:25557541
Role of social environment and social clustering in spread of opinions in coevolving networks.
Malik, Nishant; Mucha, Peter J
2013-12-01
Taking a pragmatic approach to the processes involved in the phenomena of collective opinion formation, we investigate two specific modifications to the coevolving network voter model of opinion formation studied by Holme and Newman [Phys. Rev. E 74, 056108 (2006)]. First, we replace the rewiring probability parameter by a distribution of probability of accepting or rejecting opinions between individuals, accounting for heterogeneity and asymmetric influences in relationships between individuals. Second, we modify the rewiring step by a path-length-based preference for rewiring that reinforces local clustering. We have investigated the influences of these modifications on the outcomes of simulations of this model. We found that varying the shape of the distribution of probability of accepting or rejecting opinions can lead to the emergence of two qualitatively distinct final states, one having several isolated connected components each in internal consensus, allowing for the existence of diverse opinions, and the other having a single dominant connected component with each node within that dominant component having the same opinion. Furthermore, more importantly, we found that the initial clustering in the network can also induce similar transitions. Our investigation also indicates that these transitions are governed by a weak and complex dependence on system size. We found that the networks in the final states of the model have rich structural properties including the small world property for some parameter regimes.
The gourmet ape: evolution and human food preferences.
Krebs, John R
2009-09-01
This review explores the relation between evolution, ecology, and culture in determining human food preferences. The basic physiology and morphology of Homo sapiens sets boundaries to our eating habits, but within these boundaries human food preferences are remarkably varied, both within and between populations. This does not mean that variation is entirely cultural or learned, because genes and culture may coevolve to determine variation in dietary habits. This coevolution has been well elucidated in some cases, such as lactose tolerance (lactase persistence) in adults, but is less well understood in others, such as in favism in the Mediterranean and other regions. Genetic variation in bitter taste sensitivity has been well documented, and it affects food preferences (eg, avoidance of cruciferous vegetables). The selective advantage of this variation is not clear. In African populations, there is an association between insensitivity to bitter taste and the prevalence of malaria, which suggests that insensitivity may have been selected for in regions in which eating bitter plants would confer some protection against malaria. Another, more general, hypothesis is that variation in bitter taste sensitivity has coevolved with the use of spices in cooking, which, in turn, is thought to be a cultural tradition that reduces the dangers of microbial contamination of food. Our evolutionary heritage of food preferences and eating habits leaves us mismatched with the food environments we have created, which leads to problems such as obesity and type 2 diabetes.
Villari, Caterina; Herms, Daniel A; Whitehill, Justin G A; Cipollini, Don; Bonello, Pierluigi
2016-01-01
We review the literature on host resistance of ash to emerald ash borer (EAB, Agrilus planipennis), an invasive species that causes widespread mortality of ash. Manchurian ash (Fraxinus mandshurica), which coevolved with EAB, is more resistant than evolutionarily naïve North American and European congeners. Manchurian ash was less preferred for adult feeding and oviposition than susceptible hosts, more resistant to larval feeding, had higher constitutive concentrations of bark lignans, coumarins, proline, tyramine and defensive proteins, and was characterized by faster oxidation of phenolics. Consistent with EAB being a secondary colonizer of coevolved hosts, drought stress decreased the resistance of Manchurian ash, but had no effect on constitutive bark phenolics, suggesting that they do not contribute to increased susceptibility in response to drought stress. The induced resistance of North American species to EAB in response to the exogenous application of methyl jasmonate was associated with increased bark concentrations of verbascoside, lignin and/or trypsin inhibitors, which decreased larval survival and/or growth in bioassays. This finding suggests that these inherently susceptible species possess latent defenses that are not induced naturally by larval colonization, perhaps because they fail to recognize larval cues or respond quickly enough. Finally, we propose future research directions that would address some critical knowledge gaps. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Roth, A.; Schneider, J.; Klimach, T.; Mertes, S.; van Pinxteren, D.; Herrmann, H.; Borrmann, S.
2016-01-01
Cloud residues and out-of-cloud aerosol particles with diameters between 150 and 900 nm were analysed by online single particle aerosol mass spectrometry during the 6-week study Hill Cap Cloud Thuringia (HCCT)-2010 in September-October 2010. The measurement location was the mountain Schmücke (937 m a.s.l.) in central Germany. More than 160 000 bipolar mass spectra from out-of-cloud aerosol particles and more than 13 000 bipolar mass spectra from cloud residual particles were obtained and were classified using a fuzzy c-means clustering algorithm. Analysis of the uncertainty of the sorting algorithm was conducted on a subset of the data by comparing the clustering output with particle-by-particle inspection and classification by the operator. This analysis yielded a false classification probability between 13 and 48 %. Additionally, particle types were identified by specific marker ions. The results from the ambient aerosol analysis show that 63 % of the analysed particles belong to clusters having a diurnal variation, suggesting that local or regional sources dominate the aerosol, especially for particles containing soot and biomass burning particles. In the cloud residues, the relative percentage of large soot-containing particles and particles containing amines was found to be increased compared to the out-of-cloud aerosol, while, in general, organic particles were less abundant in the cloud residues. In the case of amines, this can be explained by the high solubility of the amines, while the large soot-containing particles were found to be internally mixed with inorganics, which explains their activation as cloud condensation nuclei. Furthermore, the results show that during cloud processing, both sulfate and nitrate are added to the residual particles, thereby changing the mixing state and increasing the fraction of particles with nitrate and/or sulfate. 
This is expected to lead to higher hygroscopicity after cloud evaporation, and therefore to an increase of the particles' ability to act as cloud condensation nuclei after their cloud passage.
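The fuzzy c-means algorithm used for the classification above can be sketched in its textbook form (Euclidean distance and fuzzifier m = 2 are assumptions here; the study's feature space was bipolar mass spectra, and cluster counts were chosen by the operator):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means: returns soft memberships U (rows sum to 1)
    and the cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]              # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                             # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated toy "spectra" clusters:
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, centers = fuzzy_cmeans(X, c=2)
```

Because memberships are soft, borderline particles receive intermediate weights, which is one reason operator inspection (as in the 13-48% misclassification check above) remains useful.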
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bushuev, A. V.; Kozhin, A. F., E-mail: alexfkozhin@yandex.ru; Aleeva, T. B.
An active neutron method for measuring the residual mass of {sup 235}U in spent fuel assemblies (FAs) of the IRT MEPhI research reactor is presented. The special measuring stand design and uniform irradiation of the fuel with neutrons along the entire length of the active part of the FA provide high accuracy of determination of the residual {sup 235}U content. AmLi neutron sources yield a higher effect/background ratio than other types of sources and do not induce the fission of {sup 238}U. The proposed method of transfer of the isotope source in accordance with a given algorithm may be used in experiments where the studied object needs to be irradiated with a uniform fluence.
Arctic Ocean Tides from GRACE Satellite Accelerations
NASA Astrophysics Data System (ADS)
Killett, B.; Wahr, J. M.; Desai, S. D.; Yuan, D.; Watkins, M. M.
2010-12-01
Because missions such as TOPEX/POSEIDON don't extend to high latitudes, Arctic ocean tidal solutions aren't constrained by altimetry data. The resulting errors in tidal models alias into monthly GRACE gravity field solutions at all latitudes. Fortunately, GRACE inter-satellite ranging data can be used to solve for these tides directly. Seven years of GRACE inter-satellite acceleration data are inverted using a mascon approach to solve for residual amplitudes and phases of major solar and lunar tides in the Arctic ocean relative to FES 2004. Simulations are performed to test the inversion algorithm's performance, and uncertainty estimates are derived from the tidal signal over land. Truncation error magnitudes and patterns are compared to the residual tidal signals.
Berry, Luke; Poudel, Saroj; Tokmina-Lukaszewska, Monika; Colman, Daniel R; Nguyen, Diep M N; Schut, Gerrit J; Adams, Michael W W; Peters, John W; Boyd, Eric S; Bothner, Brian
2018-01-01
Recent investigations into ferredoxin-dependent transhydrogenases, a class of enzymes responsible for electron transport, have highlighted the biological importance of flavin-based electron bifurcation (FBEB). FBEB generates biomolecules with very low reduction potential by coupling the oxidation of an electron donor with intermediate potential to the reduction of high and low potential molecules. Bifurcating systems can generate biomolecules with very low reduction potentials, such as reduced ferredoxin (Fd), from species such as NADPH. Metabolic systems that use bifurcation are more efficient and confer a competitive advantage for the organisms that harbor them. Structural models are now available for two NADH-dependent ferredoxin-NADP+ oxidoreductase (Nfn) complexes. These models, together with spectroscopic studies, have provided considerable insight into the catalytic process of FBEB. However, much about the mechanism and regulation of these multi-subunit proteins remains unclear. Using hydrogen/deuterium exchange mass spectrometry (HDX-MS) and statistical coupling analysis (SCA), we identified specific pathways of communication within the model FBEB system, Nfn from Pyrococcus furiosus, under conditions at each step of the catalytic cycle. HDX-MS revealed evidence for allosteric coupling across protein subunits upon nucleotide and ferredoxin binding. SCA uncovered a network of co-evolving residues that can provide connectivity across the complex. Together, the HDX-MS and SCA data show that protein allostery occurs across the ensemble of iron-sulfur cofactors and ligand binding sites using specific pathways that connect domains allowing them to function as dynamically coordinated units. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaRock, Doris L.; Brzovic, Peter S.; Levin, Itay
Salmonella enterica serovar typhimurium translocates a glycerophospholipid:cholesterol acyltransferase (SseJ) into the host cytosol after its entry into mammalian cells. SseJ is recruited to the cytoplasmic face of the host cell phagosome membrane where it is activated upon binding the small GTPase, RhoA. SseJ is regulated similarly to cognate eukaryotic effectors, as only the GTP-bound form of RhoA family members stimulates enzymatic activity. Using NMR and biochemistry, this work demonstrates that SseJ competes effectively with Rhotekin, ROCK, and PKN1 in binding to a similar RhoA surface. The RhoA surface that binds SseJ includes the regulatory switch regions that control activation of mammalian effectors. These data were used to create RhoA mutants with altered SseJ binding and activation. This structure-function analysis supports a model in which SseJ activation occurs predominantly through binding to residues within switch region II. We further defined the nature of the interaction between SseJ and RhoA by constructing SseJ mutants in the RhoA binding surface. These data indicate that SseJ binding to RhoA is required for recruitment of SseJ to the endosomal network and for full Salmonella virulence for inbred susceptible mice, indicating that regulation of SseJ by small GTPases is an important virulence strategy of this bacterial pathogen. The dependence of a bacterial effector on regulation by a mammalian GTPase defines further how intimately host pathogen interactions have coevolved through similar and divergent evolutionary strategies.
Developing a New Wireless Sensor Network Platform and Its Application in Precision Agriculture
Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro
2011-01-01
Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of “smart dust” offer great advantages due to their small size, low power consumption, easy integration and support for “green” applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can be also applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network. PMID:22346622
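The idea of folding residual battery energy into route selection can be illustrated with a hypothetical next-hop metric (the weighting, names, and values below are invented for illustration and are not LORA-CBF's actual formula):

```python
def next_hop(neighbors, dist_to_dest, energy, w_dist=0.7, w_energy=0.3):
    """Pick the neighbor that best trades off progress toward the destination
    against remaining battery energy. dist_to_dest and energy map neighbor ids
    to values normalized into [0, 1]."""
    def score(n):
        return w_dist * (1.0 - dist_to_dest[n]) + w_energy * energy[n]
    return max(neighbors, key=score)

nbrs = ["a", "b", "c"]
dist = {"a": 0.2, "b": 0.3, "c": 0.9}
batt = {"a": 0.1, "b": 0.9, "c": 1.0}
print(next_hop(nbrs, dist, batt))  # picks "b": close enough, with plenty of energy
```

Weighting in residual energy spreads traffic away from nearly depleted nodes, which is the practical motivation for monitoring battery levels in an agricultural deployment.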
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit-averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid-2000. The 1b detector appears to be quite well behaved throughout this time period.
Least-Squares Analysis of Data with Uncertainty in "y" and "x": Algorithms in Excel and KaleidaGraph
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2018-01-01
For the least-squares analysis of data having multiple uncertain variables, the generally accepted best solution comes from minimizing the sum of weighted squared residuals over all uncertain variables, with, for example, weights in x_i taken as inversely proportional to the variance σ_xi². A complication…
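The weighting described above can be sketched in a few lines of Python. This is an illustrative straight-line fit using "effective variance" weights, not the article's Excel/KaleidaGraph routines; the simple fixed-point iteration on the slope is an assumption:

```python
# Straight-line least squares with uncertainty in both x and y, using
# effective-variance weights w_i = 1/(sigma_y_i^2 + b^2 * sigma_x_i^2).
# The weights depend on the slope b, so the fit is iterated until b settles.
def fit_line_xy_errors(x, y, sx, sy, iters=50):
    b = 0.0  # initial slope guess; refined on each pass
    for _ in range(iters):
        w = [1.0 / (syi ** 2 + (b * sxi) ** 2) for sxi, syi in zip(sx, sy)]
        sw = sum(w)
        xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    a = ybar - b * xbar  # intercept from the final weighted means
    return a, b
```

With all-equal uncertainties this reduces to ordinary weighted least squares on the first pass.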
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray; Kolla, Hemanth; Minion, Michael
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
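The stopping criterion described above (iterate until the residual is small relative to the first correction sweep and changes slowly between sweeps) can be sketched generically. The tolerances and the toy fixed-point "sweep" in the demo are illustrative assumptions; this is not the authors' SDC implementation:

```python
import math

def resilient_iterate(sweep, residual, state, rel_tol=1e-10, slow_tol=1e-2, max_sweeps=200):
    state = sweep(state)       # first correction sweep
    r_first = residual(state)  # reference residual for the relative test
    r_prev = r_first
    for _ in range(max_sweeps - 1):
        state = sweep(state)
        r = residual(state)
        # Stop only when the residual is small relative to the first sweep's
        # residual AND is changing slowly between successive sweeps.
        if r <= rel_tol * r_first and abs(r - r_prev) <= slow_tol * max(r_prev, 1e-300):
            break
        r_prev = r
    return state

# Demo: fixed-point iteration x <- cos(x); a wildly corrupted starting state
# (standing in for a soft fault) is healed simply by continuing to sweep.
x = resilient_iterate(math.cos, lambda s: abs(s - math.cos(s)), 123.456)
```

The point of the two-part test is that extra sweeps, rather than any rollback machinery, carry the state back to the converged answer.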
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has previously been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
Converting point-wise nuclear cross sections to pole representation using regularized vector fitting
NASA Astrophysics Data System (ADS)
Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord
2018-03-01
Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluated nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in the ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach, however, requires the number of poles to be specified ahead of time. This article addresses the issue by adding a pole and residue filtering step to the RVF procedure. The resulting regularized VF (ReV-Fit) algorithm is shown to converge efficiently to poles close to the physical ones, eliminating most of the superfluous poles and thus enabling the conversion of point-wise nuclear cross sections.
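The filtering idea can be illustrated on a generic pole-residue expansion f(x) ≈ Σ_j Re(r_j / (x − p_j)): drop terms whose peak contribution over the evaluation window is negligible. The tolerance and criterion below are hypothetical simplifications; the actual ReV-Fit regularization is not reproduced here:

```python
# Keep only pole-residue pairs whose contribution over the window xs is
# non-negligible relative to the peak magnitude of the full expansion.
def filter_poles(poles, residues, xs, tol=1e-6):
    def term(p, r, x):
        return (r / (x - p)).real
    # Peak magnitude of the full expansion over the window, used as reference.
    fmax = max(abs(sum(term(p, r, x) for p, r in zip(poles, residues))) for x in xs)
    return [(p, r) for p, r in zip(poles, residues)
            if max(abs(term(p, r, x)) for x in xs) >= tol * fmax]
```

A superfluous pole with a tiny residue contributes almost nothing anywhere in the window, so it is pruned while the physical poles survive.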
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
Zou, Weiyao; Burns, Stephen A.
2012-01-01
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
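The damped least-squares update can be sketched for a toy two-actuator mirror: solve (AᵀA + λI)c = Aᵀe with the damping λ taken as the median of the eigenvalue spectrum of AᵀA. The influence matrix and error vector below are made-up illustrations; the paper's woofer-tweeter coupling and Lagrange multiplier constraint are not reproduced:

```python
# Damped least-squares actuator command for a 2-actuator influence matrix A
# (rows = measurements, columns = actuators) and wavefront error vector e.
def damped_ls_2act(A, e):
    # Normal-equation matrix M = A^T A (2x2) and right-hand side g = A^T e.
    M = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    g = [sum(row[i] * ei for row, ei in zip(A, e)) for i in range(2)]
    # Eigenvalues of the symmetric 2x2 M via the quadratic formula.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = (tr * tr / 4.0 - det) ** 0.5
    lo, hi = tr / 2.0 - disc, tr / 2.0 + disc
    lam = (lo + hi) / 2.0  # median of a 2-value spectrum reduces to the mean
    # Solve the damped 2x2 system (M + lam*I) c = g by Cramer's rule.
    a, b = M[0][0] + lam, M[0][1]
    c, d = M[1][0], M[1][1] + lam
    den = a * d - b * c
    return [(d * g[0] - b * g[1]) / den, (a * g[1] - c * g[0]) / den]
```

The damping shrinks the command along poorly sensed directions, trading a little fitting error for robustness.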
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the input force estimation process, the fourth-order Runge-Kutta algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimates, gain matrices, and innovation covariances generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR.
Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei
2016-02-26
Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently played an increasingly significant role in remote sensing applications owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm was proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing of traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model was first presented, and the bulk range cell migration correction (RCMC) was consequently derived for reducing range cell migration (RCM) and coarse focusing. As the bulk RCMC seriously changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation was introduced for compensating residual phase errors. Simulation results were presented based on a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing a satisfactory performance by the proposed method. PMID:26927117
A preliminary design for flight testing the FINDS algorithm
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1986-01-01
This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which has the advantage of low prediction variance: part of the model bias property is sacrificed in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so direct matrix inversions are avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
A Distance-based Energy Aware Routing algorithm for wireless sensor networks.
Wang, Jin; Kim, Jeong-Uk; Shu, Lei; Niu, Yu; Lee, Sungyoung
2010-01-01
Energy efficiency and balancing is one of the primary challenges for wireless sensor networks (WSNs), since the tiny sensor nodes cannot be easily recharged once they are deployed. Up to now, many energy-efficient routing algorithms or protocols have been proposed, using techniques such as clustering, data aggregation and location tracking. However, many of them aim to minimize parameters such as total energy consumption or latency, which causes hotspot nodes and partitioned networks due to the overuse of certain nodes. In this paper, a Distance-based Energy Aware Routing (DEAR) algorithm is proposed to ensure energy efficiency and energy balancing based on theoretical analysis of different energy and traffic models. During the routing process, we consider individual distance as the primary parameter in order to adjust and equalize the energy consumption among the involved sensors. The residual energy is also considered as a secondary factor. In this way, all the intermediate nodes will consume their energy at a similar rate, which maximizes network lifetime. Simulation results show that the DEAR algorithm can reduce and balance the energy consumption of all sensor nodes, so that network lifetime is greatly prolonged compared to other routing algorithms.
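A distance-first, energy-aware next-hop choice of this general kind can be sketched as follows. The scoring function, the weight alpha, and the neighbor tuples are illustrative assumptions, not the DEAR paper's exact metric:

```python
import math

# Pick the neighbor that makes the most forward progress toward the sink,
# with residual energy acting as a secondary tie-breaking factor.
def choose_next_hop(node, sink, neighbors, alpha=0.8):
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    d0 = dist(node, sink)
    best_id, best_score = None, float("-inf")
    for nid, pos, residual_energy in neighbors:
        progress = d0 - dist(pos, sink)  # distance saved toward the sink
        # Distance is the primary parameter; residual energy is secondary.
        score = alpha * progress + (1.0 - alpha) * residual_energy
        if score > best_score:
            best_id, best_score = nid, score
    return best_id
```

Lowering alpha shifts the balance toward energy and away from pure distance, spreading load across neighbors at the cost of longer paths.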
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
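A minimal illustration of the fusion step: each detector votes on a fault hypothesis with a confidence, and the fused answer is the hypothesis with the largest summed confidence. The hypothesis names and the summing rule are hypothetical; NASA's MBFTC fusion logic is not reproduced here:

```python
# Combine (hypothesis, confidence) votes from multiple residual-based
# detectors into a single fault call by summing confidences per hypothesis.
def fuse_detections(detections):
    totals = {}
    for hypothesis, confidence in detections:
        totals[hypothesis] = totals.get(hypothesis, 0.0) + confidence
    return max(totals, key=totals.get)
```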
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch into a synthesized face image. Then, to address the low-level vision problem in the synthesized face image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. Specifically, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and that of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose and mouth, but also improves the recognition accuracy effectively.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, then CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, which is reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of the stations' velocities estimated from filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from GNSS residuals leads to a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
Bradley, Michael J; Chivers, Peter T; Baker, Nathan A
2008-05-16
Escherichia coli NikR is a homotetrameric Ni(2+)- and DNA-binding protein that functions as a transcriptional repressor of the NikABCDE nickel permease. The protein is composed of two distinct domains. The N-terminal 50 amino acids of each chain form part of the dimeric ribbon-helix-helix (RHH) domains, a well-studied DNA-binding fold. The 83-residue C-terminal nickel-binding domain forms an ACT (aspartokinase, chorismate mutase, and TyrA) fold and contains the tetrameric interface. In this study, we have utilized an equilibrium molecular dynamics simulation in order to explore the conformational dynamics of the NikR tetramer and determine important residue interactions within and between the RHH and ACT domains to gain insight into the effects of Ni(2+) on DNA-binding activity. The molecular simulation data were analyzed using two different correlation measures based on fluctuations in atomic position and noncovalent contacts together with a clustering algorithm to define groups of residues with similar correlation patterns for both types of correlation measure. Based on these analyses, we have defined a series of residue interrelationships that describe an allosteric communication pathway between the Ni(2+)- and DNA-binding sites, which are separated by 40 Å. Several of the residues identified by our analyses have been previously shown experimentally to be important for NikR function. An additional subset of the identified residues structurally connects the experimentally implicated residues and may help coordinate the allosteric communication between the ACT and RHH domains. PMID:18433769
Protein complexes are assemblies of subunits that have co-evolved to execute one or many coordinated functions in the cellular environment. Functional annotation of mammalian protein complexes is critical to understanding biological processes, as well as disease mechanisms. Here, we used genetic co-essentiality derived from genome-scale RNAi- and CRISPR-Cas9-based fitness screens performed across hundreds of human cancer cell lines to assign measures of functional similarity.
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun
2017-09-01
The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 μm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike in the SST time series on the order of 0.2 K. A previous study (Cao et al., 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies a bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up table update. The method was evaluated using collocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.
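The operational idea of correcting only during WUCD windows can be sketched minimally. The band names follow the abstract, but the bias values in the demo look-up table are hypothetical placeholders, not the mission's calibration parameters:

```python
# Subtract a band-specific bias from each sample only while a WUCD event is
# flagged; all other samples pass through unchanged.
def correct_during_wucd(bands, values, wucd_flags, bias_lut):
    return [v - bias_lut.get(band, 0.0) if in_wucd else v
            for band, v, in_wucd in zip(bands, values, wucd_flags)]

# Example: only the scans flagged as WUCD are adjusted.
corrected = correct_during_wucd(
    ["M15", "M16", "M15"], [300.0, 295.0, 301.0],
    [True, False, True], {"M15": 0.2, "M16": 0.1})
```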
Preconditioned conjugate gradient methods for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1990-01-01
The compressible Navier-Stokes equations are solved for a variety of two-dimensional inviscid and viscous problems by preconditioned conjugate gradient-like algorithms. Roe's flux-difference splitting technique is used to discretize the inviscid fluxes, and the viscous terms are discretized using central differences. An algebraic turbulence model is also incorporated. The system of linear equations arising from the linearization of a fully implicit scheme is solved iteratively by the well-known methods of GMRES (Generalized Minimum Residual technique) and Chebyshev iteration. Incomplete LU factorization and block diagonal factorization are used as preconditioners. The resulting algorithm is competitive with the best current schemes and has wide applicability in parallel computing and unstructured-mesh computations.
Oceanic tidal signals in magnetic satellite data
NASA Astrophysics Data System (ADS)
Wardinski, I.; Lesur, V.
2015-12-01
In this study we discuss the observation of oceanic tidal signals in magnetic satellite data. We analyse 10 years of CHAMP satellite data. The detection algorithm is applied to the residual signal that remains after removal of the GRIMM 42 field model (Lesur et al., 2015). The signals found represent the major tidal constituents, such as the M2 tide. However, other tidal constituents appear to be masked by unmodelled external and induced magnetic signals, particularly in equatorial and circumpolar regions. Part of the study also focuses on the temporal variability of the signal detection and its dependence on geomagnetic activity. Possible refinements to the detection algorithm and its applicability to Swarm data are also presented and discussed.
Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8
NASA Astrophysics Data System (ADS)
Joshi, P.
2015-12-01
The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud-cover effects, resulting in erroneous analysis and observation of ground features. Earlier work showed that a change detection algorithm applying statistical control charts to harmonic residuals of multi-temporal Landsat 5 data can detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on that harmonic regression approach to detect and filter clouds in a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients by fitting models to annual training data. The resulting time series of residuals is then subjected to Shewhart X-bar control charts, which signal deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on NDVI, NDSI and the haze optimized transformation (HOT), and exploiting the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama, in Landsat 8 UTM zones 17 and 16 respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. Because it operates multi-temporally and can recreate the multi-temporal image database from only the Fourier regression coefficients, the algorithm is also storage- and time-efficient.
The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
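The core of the approach, fitting a low-order Fourier curve to an annual series and flagging points whose residuals breach an X-bar-style control limit of Lσ, can be sketched as follows. This is a minimal illustration: the harmonic order and the 0.75σ limit are example settings within the range the abstract reports, not the paper's tuned configuration.

```python
import numpy as np

def harmonic_design(t, order=2, period=365.0):
    """Design matrix: intercept plus sine/cosine pairs up to `order`."""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

def flag_clouds(t, series, order=2, limit=0.75):
    """Fit a Fourier curve and flag points whose residual exceeds the
    X-bar-style control limit limit*sigma (0.5-1.0 sigma in the abstract)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(series, dtype=float)
    X = harmonic_design(t, order)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.abs(resid) > limit * resid.std()
```

A cloud-contaminated NDVI sample shows up as a large negative residual against the fitted seasonal curve and is flagged.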
Rapid automated superposition of shapes and macromolecular models using spherical harmonics.
Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I
2016-06-01
A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented (SUPALM). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows direct matching of heterogeneous objects, including high- and low-resolution models represented by atomic coordinates, beads or dummy-residue chains, as well as electron microscopy density maps and inhomogeneous multi-phase models (e.g. of protein-nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [J. Appl. Cryst. (2001), 34, 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB. The spherical harmonics algorithm is best suited for low-resolution shape models, e.g. those provided by solution scattering experiments, but also facilitates rapid cross-validation against structural models obtained by other methods.
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue
2015-01-01
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
NASA Astrophysics Data System (ADS)
Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.
2015-07-01
When using the clutter suppression interferometry (CSI) algorithm for signal processing in a three-channel wide-area surveillance radar system, the primary concern is to suppress the ground clutter effectively. However, a portion of a moving target's energy is also lost in the channel cancellation process, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial velocity and squint angle) residual amplitude of moving targets after channel cancellation with the CSI algorithm. We then propose a new approach that increases the two-dimensional detection probability of moving targets by retaining the maximum of the three channel-cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false alarm probability of the proposed approach is derived. Simulation results, compared against conventional approaches in a uniformly spaced channel system, validate the effectiveness of the proposed approach. To our knowledge, this is the first study of the two-dimensional detection probability of the CSI algorithm.
Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi
2013-12-19
Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) for mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may cause unacceptable collection delays if the actuator's path is too long. Because real-time network applications must meet data collection delay constraints, planning the actuator's path is essential to balance extending the network lifetime against reducing the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm based on dynamic polling point selection with delay constraints is proposed to address this issue. The algorithm actively updates the selection of the actuator's polling points according to the sensor nodes' residual energies and locations while respecting the collection delay constraint. It also dynamically constructs multi-hop routing trees rooted at these polling points to balance sensor node energy consumption and extend the network lifetime. The effectiveness of the algorithm is validated by simulation.
Using video-oriented instructions to speed up sequence comparison.
Wozniak, A
1997-04-01
This document presents an implementation of the well-known Smith-Waterman algorithm for comparison of protein and nucleic acid sequences, using specialized video instructions. These instructions, SIMD-like in their design, make parallelization of the algorithm possible at the instruction level. Benchmarks on an UltraSPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP, a LArge Scale Sequence compArison Package developed at INRIA, which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18,531,385 residues) in 29 s. This procedure is not restricted to databank scanning; it applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).
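For reference, a plain scalar version of the Smith-Waterman local-alignment recurrence (the kernel the paper vectorizes with video instructions) looks like this; the scoring parameters are illustrative, not those used in the benchmarks.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, diagonal + substitution score, gap from above/left)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best
```

The inner `max` over four terms per cell is exactly the work that SIMD-style instructions parallelize across anti-diagonals or query segments.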
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as 3D DWT and 3D DCT) are too complex to implement in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then the wavelet coefficients are encoded by a bit-plane encoder (BPE). Finally, the BPE is tightly coupled to the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional approaches. PMID:25110741
An O(Nm(sup 2)) Plane Solver for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.
1999-01-01
A hierarchical multigrid algorithm for efficient steady solutions of the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect-correction implicit equation. Multigrid analyses that include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles needed to attain convergence depends on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small, and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.
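To illustrate the Correction Scheme idea (smooth on the fine grid, solve a residual equation on the coarse grid, prolong the correction back), here is a minimal two-grid cycle for the 1-D Poisson problem -u'' = f. This is a toy model under simple assumptions (injection restriction, linear interpolation, weighted Jacobi smoothing), not the paper's alternating-line-implicit Navier-Stokes solver.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Weighted Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One Correction Scheme cycle: smooth, restrict residual, solve the
    coarse error equation exactly, prolong the correction, smooth again."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    rc = r[::2].copy()                         # injection to the coarse grid
    n_c, hc = rc.size, 2 * h
    # exact coarse solve of -e'' = r via a tridiagonal system
    A = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / (hc * hc)
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongation
    u += e
    return jacobi(u, f, h, sweeps=3)
```

One cycle should cut the residual norm substantially for smooth right-hand sides, which is the behavior the paper's convergence analyses quantify for the harder convection-dominated case.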
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. In contrast to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with an ANFIS-based decision system. Its main advantage is that it couples real-time, model-free residual analysis of Kalman Filter (KF) estimates with the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
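The residual (innovation) test at the heart of such KF-based fault detection can be sketched in one dimension: a large innovation relative to its predicted variance marks a suspect measurement. The noise settings and threshold below are illustrative, and the ANFIS decision stage of the paper is omitted.

```python
import math

def kf_residual_faults(z, q=1e-4, r=0.04, thresh=3.0):
    """Scalar random-walk Kalman filter; flag samples whose innovation
    (measurement residual) exceeds `thresh` standard deviations."""
    x, p = z[0], 1.0
    flags = []
    for zk in z:
        p += q                       # predict (random-walk state model)
        s = p + r                    # innovation variance
        nu = zk - x                  # innovation (residual)
        flags.append(abs(nu) / math.sqrt(s) > thresh)
        k = p / s                    # Kalman gain
        x += k * nu                  # update state estimate
        p *= 1.0 - k                 # update error covariance
    return flags
```

A single spiked sample in an otherwise steady sensor stream is flagged while the surrounding samples pass.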
Nitric oxide-based protein modification: formation and site-specificity of protein S-nitrosylation
Kovacs, Izabella; Lindermayr, Christian
2013-01-01
Nitric oxide (NO) is a reactive free radical with pleiotropic functions that participates in diverse biological processes in plants, such as germination, root development, stomatal closing, abiotic stress, and defense responses. It acts mainly through redox-based modification of cysteine residue(s) of target proteins, called protein S-nitrosylation. In this way NO regulates numerous cellular functions and signaling events in plants. Identification of S-nitrosylated substrates and their exact target cysteine residue(s) is very important to reveal the molecular mechanisms and regulatory roles of S-nitrosylation. In addition to the necessity of protein–protein interaction for trans-nitrosylation and denitrosylation reactions, the cellular redox environment and the cysteine thiol micro-environment have been proposed as important factors for the specificity of protein S-nitrosylation. Several methods have recently been developed for the proteomic identification of target proteins. However, the specificity of NO-based cysteine modification is still poorly defined. In this review, we discuss the formation and specificity of S-nitrosylation. Special focus is given to potential S-nitrosylation motifs, site-specific proteomic analyses, computational predictions using different algorithms, and structural analysis of cysteine S-nitrosylation. PMID:23717319
Sunden, Fanny; Peck, Ariana; Salzman, Julia; Ressl, Susanne; Herschlag, Daniel
2015-01-01
Enzymes enable life by accelerating reaction rates to biological timescales. Conventional studies have focused on identifying the residues that have a direct involvement in an enzymatic reaction, but these so-called ‘catalytic residues’ are embedded in extensive interaction networks. Although fundamental to our understanding of enzyme function, evolution, and engineering, the properties of these networks have yet to be quantitatively and systematically explored. We dissected an interaction network of five residues in the active site of Escherichia coli alkaline phosphatase. Analysis of the complex catalytic interdependence of specific residues identified three energetically independent but structurally interconnected functional units with distinct modes of cooperativity. From an evolutionary perspective, this network is orders of magnitude more probable to arise than a fully cooperative network. From a functional perspective, new catalytic insights emerge. Further, such comprehensive energetic characterization will be necessary to benchmark the algorithms required to rationally engineer highly efficient enzymes. DOI: http://dx.doi.org/10.7554/eLife.06181.001 PMID:25902402
NASA Astrophysics Data System (ADS)
Trufanov, Aleksandr N.; Trufanov, Nikolay A.; Semenov, Nikita V.
2016-09-01
The experimental data analysis of the stress-applying rod section geometry for the PANDA-type polarization-maintaining optical fiber has been performed. The dependencies of the change in the radial dimensions of the preform and the doping boundary on the angular coordinate have been obtained. An original algorithm for statistical analysis of the experimental data, which enables determination of the characteristic section shape of the specimens, is described. The influence of the actual doped-zone geometry on the residual stress fields formed during stress rod preform fabrication has been investigated. It is established that deviation of the boundary between pure silica and the doped zone from a circular shape results in asymmetry and local concentrations of the residual stress fields over the section, which can cause preform destruction at high doping levels. The observed geometry deviations of up to 10% increase the maximum stress intensity by over 20%.
Mahalingam, Rajasekaran; Peng, Hung-Pin; Yang, An-Suei
2014-08-01
Protein-fatty acid interactions are vital for many cellular processes, and understanding them is important for functional annotation as well as drug discovery. In this work, we present a method for predicting fatty acid (FA)-binding residues using three-dimensional probability density distributions of interacting FA atoms on protein surfaces, derived from known protein-FA complex structures. A machine learning algorithm was established to learn the characteristic patterns of the probability density maps specific to FA-binding sites. The predictor was trained with five-fold cross-validation on a non-redundant training set and then evaluated on an independent test set as well as on a holo-apo pairs dataset. The results showed good accuracy in predicting FA-binding residues. The predictor developed in this study is implemented as an online server, freely accessible at http://ismblab.genomics.sinica.edu.tw/. Copyright © 2014 Elsevier B.V. All rights reserved.
Dhanda, Sandeep Kumar; Grifoni, Alba; Pham, John; Vaughan, Kerrie; Sidney, John; Peters, Bjoern; Sette, Alessandro
2018-01-01
Unwanted immune responses against protein therapeutics can reduce efficacy or lead to adverse reactions. T-cell responses are key in the development of such responses, and are directed against immunodominant regions within the protein sequence, often associated with binding to several allelic variants of HLA class II molecules (promiscuous binders). Herein, we report a novel computational strategy to predict 'de-immunized' peptides, based on previous studies of erythropoietin protein immunogenicity. This algorithm first predicts promiscuous binding regions within the target protein sequence and then identifies residue substitutions predicted to reduce HLA binding. Further, the method anticipates the effect of any given substitution on flanking peptides, thereby avoiding the creation of nascent HLA-binding regions. As a proof of principle, the algorithm was applied to Vatreptacog α, an engineered Factor VII molecule associated with unintended immunogenicity. The algorithm correctly predicted the two immunogenic peptides containing the engineered residues. As further validation, we selected and evaluated the immunogenicity of seven substitutions predicted to simultaneously reduce HLA binding for both peptides, five control substitutions with no predicted reduction in HLA-binding capacity, and additional flanking-region controls. In vitro immunogenicity was detected in 21.4% of the cultures of peptides predicted to have reduced HLA binding and in 11.4% of the flanking regions, compared with 46% for cultures of the peptides predicted to be immunogenic. This method has been implemented as an interactive application, freely available online at http://tools.iedb.org/deimmunization/. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions, as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm), from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m-3, which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep-ocean densitometer of 0.001 kg m-3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C, and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
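The salinometer step, numerically inverting n(S, T) for salinity with Newton-Raphson, can be sketched with a toy one-term model; the coefficients below are placeholders for illustration only, not the paper's 27-term fit.

```python
def n_model(S, T):
    """Toy index-of-refraction model (hypothetical coefficients standing in
    for the 27-term algorithm): n = 1.3330 + 1.9e-4*S - 1.0e-6*T**2."""
    return 1.3330 + 1.9e-4 * S - 1.0e-6 * T ** 2

def salinity_from_n(n_obs, T, S0=20.0, tol=1e-12, max_iter=50):
    """Invert n(S, T) for salinity via Newton-Raphson, as the salinometer
    algorithm does: iterate S <- S - (n(S,T) - n_obs) / (dn/dS)."""
    S = S0
    for _ in range(max_iter):
        f = n_model(S, T) - n_obs
        dfdS = 1.9e-4               # derivative of the toy model w.r.t. S
        step = f / dfdS
        S -= step
        if abs(step) < tol:
            break
    return S
```

Because the toy model is linear in S, the iteration converges in a single step; the real 27-term fit is nonlinear, which is why the iterative inversion is needed.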
NASA Astrophysics Data System (ADS)
Bushuev, A. V.; Kozhin, A. F.; Aleeva, T. B.; Zubarev, V. N.; Petrova, E. V.; Smirnov, V. E.
2016-12-01
An active neutron method for measuring the residual mass of 235U in spent fuel assemblies (FAs) of the IRT MEPhI research reactor is presented. The special design of the measuring stand and uniform irradiation of the fuel with neutrons along the entire length of the active part of the FA provide high accuracy in determining the residual 235U content. AmLi neutron sources yield a higher effect-to-background ratio than other source types and do not induce fission of 238U. The proposed method of moving the isotope source according to a given algorithm may be used in experiments where the studied object must be irradiated with a uniform fluence.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
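The RVQ structure itself (before the entropy constraint enters the design) is simple: each stage quantizes the residual left by the previous stage, and the decoder sums the selected codevectors. A minimal sketch with hypothetical codebooks:

```python
import numpy as np

def vq_encode(x, codebook):
    """Index of the nearest codevector (squared Euclidean distance)."""
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(np.argmin(d))

def rvq_encode(x, codebooks):
    """Multi-stage residual VQ: quantize, subtract, pass the residual on."""
    idx, r = [], np.asarray(x, dtype=float)
    for cb in codebooks:
        i = vq_encode(r, cb)
        idx.append(i)
        r = r - cb[i]
    return idx

def rvq_decode(idx, codebooks):
    """Reconstruction is the sum of the per-stage codevectors."""
    return sum(cb[i] for i, cb in zip(idx, codebooks))
```

The entropy-constrained design of the paper additionally biases the per-stage index choice by codeword length, so that the stage indices admit short variable-length codes.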
The algebraic decoding of the (41, 21, 9) quadratic residue code
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei
1992-01-01
A new algebraic approach for decoding quadratic residue (QR) codes, in particular the (41, 21, 9) QR code, is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and then a method for determining error locations by solving certain quadratic, cubic and quartic equations over GF(2^m) in a new way that uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can be applied generally to other codes, such as BCH and Reed-Solomon codes.
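Zech logarithms let a decoder add field elements while staying in exponent (logarithm) form, which is what makes evaluating the quadratic-to-quartic equations over GF(2^m) cheap. A small sketch for GF(2^4) with the primitive polynomial x^4 + x + 1 (the field parameters are illustrative; the paper works over the field appropriate to the (41, 21, 9) code):

```python
def gf2m_tables(m=4, poly=0b10011):
    """Exponential/log tables for GF(2^m) generated by a primitive element a,
    plus Zech logarithms z(n) defined by a^z(n) = 1 + a^n."""
    q = 1 << m
    exp, log = [0] * (q - 1), {}
    x = 1
    for n in range(q - 1):
        exp[n] = x
        log[x] = n
        x <<= 1
        if x & q:                    # reduce modulo the field polynomial
            x ^= poly
    zech = {n: log[1 ^ exp[n]] for n in range(q - 1) if 1 ^ exp[n]}
    return exp, log, zech

def gf_add_exponents(n1, n2, exp, log, zech, m=4):
    """a^n1 + a^n2 = a^(n2 + z(n1 - n2)), entirely in exponent form."""
    q1 = (1 << m) - 1
    if n1 == n2:
        return None                  # in characteristic 2 the sum is zero
    d = (n1 - n2) % q1
    return (n2 + zech[d]) % q1
```

The identity a^n1 + a^n2 = a^n2 (1 + a^(n1-n2)) reduces every field addition to one table lookup plus modular exponent arithmetic, with no conversion back to polynomial form.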
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi; Hixon, Duane
1993-01-01
The work done under this project was documented in detail as the Ph.D. dissertation of Dr. Duane Hixon. The objectives of the research project were to evaluate the generalized minimum residual (GMRES) method as a tool for accelerating 2-D and 3-D unsteady flow computations, and to assess the suitability of the GMRES algorithm for unsteady flows computed on parallel computer architectures.
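For readers unfamiliar with GMRES, the basic unrestarted, unpreconditioned iteration builds an Arnoldi basis of the Krylov subspace and minimizes the residual norm over it. A minimal dense-matrix sketch (a reference illustration, not the project's flow-solver implementation):

```python
import numpy as np

def gmres(A, b, x0=None, m=20, tol=1e-10):
    """Plain GMRES: Arnoldi with modified Gram-Schmidt, then a small
    least-squares solve for the residual-minimizing Krylov combination."""
    n = b.size
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:              # happy breakdown: exact solve
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y
```

In exact arithmetic the residual is minimized over span{r0, Ar0, ..., A^(m-1) r0}, which is what makes GMRES attractive for accelerating implicit time steps.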
Phytoplankton as Particles - A New Approach to Modeling Algal Blooms
2013-07-01
Figure 69. Amplitudes of lunar semi-diurnal and diurnal harmonics of observed and computed... particle behavior when the trajectory takes a particle outside the model domain. The rules associated with the present particle-tracking algorithms are... landward, although occasional reversals occurred. Amplitude of the current fluctuations was ≈ 20 cm s-1. Model residual currents for one year were...
WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Niu, T
2015-06-15
Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the CT number of that specific tissue. By subtracting the ideal template from the CT image, the residual from various error sources is generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous low-frequency signals in the line integral. The residual image is thus forward projected, and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient scan and a CT angiography scan for carotid artery assessment. Compared to no correction, our method reduces the overall CT number error from >200 HU to <35 HU and increases spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction.
This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
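A one-dimensional toy of the template-and-residual iteration (segment, fill each tissue with its median to form a template, low-pass the residual with a Savitzky-Golay-style local polynomial fit, subtract) conveys the idea. This sketch omits the forward-projection and FDK-reconstruction steps of the actual method and assumes the segmentation labels are given.

```python
import numpy as np

def savgol_smooth(y, window=21, order=2):
    """Minimal Savitzky-Golay-style smoother: local polynomial fits,
    evaluated at the window center (fit is re-centered at each point)."""
    half = window // 2
    out = np.empty_like(y)
    for i in range(y.size):
        lo, hi = max(0, i - half), min(y.size, i + half + 1)
        t = np.arange(lo, hi) - i            # center for conditioning
        c = np.polyfit(t, y[lo:hi], order)
        out[i] = np.polyval(c, 0.0)
    return out

def shading_correct(img, labels, n_iter=3):
    """1-D sketch of the iterative scheme: build a per-tissue template,
    estimate the low-frequency shading from the residual, subtract it."""
    img = np.asarray(img, dtype=float).copy()
    labels = np.asarray(labels)
    for _ in range(n_iter):
        template = np.empty_like(img)
        for lab in np.unique(labels):
            template[labels == lab] = np.median(img[labels == lab])
        shading = savgol_smooth(img - template)
        img -= shading
    return img
```

After correction, each simulated tissue segment becomes nearly flat, mirroring the uniformity gain the abstract reports for the full 3-D method.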
NASA Astrophysics Data System (ADS)
Zhang, Tianran; Wooster, Martin
2016-04-01
Until recently, crop residues were the second largest industrial waste product produced in China, and field-based burning of crop residues is considered to remain extremely widespread, with impacts on air quality and potential negative effects on health and public transportation. However, due to the small size and perhaps short-lived nature of the individual burns, the extent of the activity and its spatial variability remain somewhat unclear. Satellite EO data have been used to gauge the timing and magnitude of Chinese crop burning, but current approaches very likely miss significant amounts of the activity because the individual burned areas are too small to detect with frequently acquired moderate-spatial-resolution data such as MODIS. The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi-NPP (National Polar-orbiting Partnership) satellite, launched in October 2011, has one set of multi-spectral channels providing full global coverage at 375 m nadir spatial resolution. It is expected that the 375 m spatial resolution "I-band" imagery provided by VIIRS will allow active fires to be detected that are ~10× smaller than those that can be detected by MODIS. In this study, a new small-fire detection algorithm is built on the VIIRS I-band global fire detection algorithm and the hot-spot detection algorithm developed for the BIRD satellite mission. VIIRS I-band imagery is used to identify agricultural fire activity across Eastern China. A 30 m spatial resolution global land cover map is used for false-alarm masking. Ground-based validation is performed using images taken from a UAV. The fire detection results are compared with the active fire product from the long-standing MODIS sensors on board the TERRA and AQUA satellites, which shows that small fires missed by the traditional MODIS fire product may account for over 1/3 of total fire energy in Eastern China.
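Hot-spot detection algorithms of the kind cited here typically combine an absolute brightness-temperature threshold with a contextual test against the local background. The sketch below illustrates that idea only; the thresholds, window size, and the inclusion of the centre pixel in the background statistics are illustrative choices, not the actual VIIRS or BIRD algorithm parameters.

```python
import numpy as np

def detect_hot_spots(bt, abs_thresh=330.0, win=5, k=3.0):
    """Contextual hot-spot detection sketch.

    A pixel is flagged as a potential fire pixel if its brightness
    temperature (bt, in kelvin) exceeds an absolute threshold, or stands
    out from the local background by more than k standard deviations.
    All numeric values are illustrative, not operational settings.
    """
    rows, cols = bt.shape
    fires = np.zeros((rows, cols), dtype=bool)
    r = win // 2
    for i in range(rows):
        for j in range(cols):
            # Local background window, clipped at the image edges
            window = bt[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            # Background statistics (centre pixel included for simplicity)
            mean, std = window.mean(), window.std()
            if bt[i, j] > abs_thresh or bt[i, j] > mean + k * std:
                fires[i, j] = True
    return fires
```

A real implementation would also mask clouds, water, and non-agricultural land cover (the false-alarm masking step mentioned above) before applying the contextual test.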
PhosSA: Fast and accurate phosphorylation site assignment algorithm for mass spectrometry data.
Saeed, Fahad; Pisitkun, Trairak; Hoffert, Jason D; Rashidian, Sara; Wang, Guanghui; Gucek, Marjan; Knepper, Mark A
2013-11-07
Phosphorylation site assignment for high-throughput tandem mass spectrometry (LC-MS/MS) data is one of the most common and critical aspects of phosphoproteomics. Correctly assigning phosphorylated residues helps us understand their biological significance. The design of common search algorithms (such as Sequest, Mascot, etc.) does not incorporate site assignment; therefore, additional algorithms are essential to assign phosphorylation sites for mass spectrometry data. The main contribution of this study is the design and implementation of a linear-time and linear-space dynamic programming strategy for phosphorylation site assignment, referred to as PhosSA. The proposed algorithm uses the summation of peak intensities associated with theoretical spectra as an objective function. Quality control of the assigned sites is achieved using a post-processing redundancy criterion that reflects the signal-to-noise properties of the fragmented spectra. The quality of the algorithm was assessed using experimentally generated data sets from synthetic peptides for which the phosphorylation sites were known. We report that PhosSA was able to achieve a high degree of accuracy and sensitivity with all the experimentally generated mass spectrometry data sets. The implemented algorithm is shown to be extremely fast and scalable with an increasing number of spectra (we report up to 0.5 million spectra/hour on a moderate workstation). The algorithm is designed to accept results from both the Sequest and Mascot search engines. An executable is freely available at http://helixweb.nih.gov/ESBL/PhosSA/ for academic research purposes.
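The objective function described above — summing the observed peak intensities matched by the theoretical spectrum of each candidate site assignment — can be illustrated with a deliberately simplified sketch. This version enumerates candidate assignments and scores only singly charged b-ions; it is not PhosSA's linear-time dynamic program, and the mass table, tolerance, and function names are illustrative assumptions.

```python
from itertools import combinations

# Monoisotopic residue masses in Da (simplified subset for illustration)
MASS = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
        'V': 99.06841, 'T': 101.04768, 'L': 113.08406, 'K': 128.09496,
        'E': 129.04259, 'R': 156.10111, 'Y': 163.06333}
PHOS = 79.96633    # mass shift added by a phosphate group
PROTON = 1.00728

def b_ions(peptide, phospho_sites):
    """Singly charged b-ion m/z values for one site assignment."""
    mz, total = [], PROTON
    for i, aa in enumerate(peptide[:-1]):
        total += MASS[aa] + (PHOS if i in phospho_sites else 0.0)
        mz.append(total)
    return mz

def assign_sites(peptide, n_phospho, peaks, tol=0.5):
    """Pick the site assignment whose theoretical b-ions capture the most
    observed peak intensity. `peaks` is a list of (m/z, intensity) pairs."""
    candidates = [i for i, aa in enumerate(peptide) if aa in 'STY']
    best, best_score = None, -1.0
    for sites in combinations(candidates, n_phospho):
        theo = b_ions(peptide, set(sites))
        # Objective: total intensity of peaks matched within tolerance
        score = sum(inten for mz, inten in peaks
                    if any(abs(mz - t) <= tol for t in theo))
        if score > best_score:
            best, best_score = set(sites), score
    return best, best_score
```

A full implementation would also score y-ions (and neutral losses) and would avoid the combinatorial enumeration, which is exactly what the dynamic programming formulation addresses.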
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
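The first methodology — a global coverage grid over a low-dimensional projection, shared across concurrent sampling runs — can be sketched minimally as follows. The class, its names, and the placeholder projection (simply the first two coordinates of a conformation) are illustrative assumptions, not the authors' implementation.

```python
import random

class CoverageGrid:
    """Global coverage grid over a low-dimensional projection (sketch).

    High-dimensional conformations are projected to a few coordinates,
    and each grid cell counts how many samples landed in it, so that new
    samples can be biased toward sparsely covered regions. The projection
    here is a stand-in; a real system would use a carefully chosen
    low-dimensional map of the conformational space.
    """
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.counts = {}   # could be shared across concurrent runs

    def project(self, conf):
        return tuple(conf[:2])    # placeholder projection

    def cell(self, conf):
        return tuple(int(x // self.cell_size) for x in self.project(conf))

    def add(self, conf):
        c = self.cell(conf)
        self.counts[c] = self.counts.get(c, 0) + 1

    def select_for_expansion(self, confs):
        """Prefer conformations in the least-covered cells (diversity bias)."""
        weights = [1.0 / self.counts.get(self.cell(c), 1) for c in confs]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for conf, w in zip(confs, weights):
            acc += w
            if r <= acc:
                return conf
        return confs[-1]
```

Because the grid stores only per-cell counts keyed by the projected coordinates, its memory footprint is independent of the number of conformations sampled, which is the point of replacing per-conformation bookkeeping in large systems.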