Li, Su; Gul, Yasmeen; Wang, Weimin; Qian, Xueqiao; Zhao, Yuhua
2013-01-10
To modulate and improve the function of PPARγ and to help reduce metabolic diseases of M. amblycephala, we cloned and identified the full-length cDNA of PPARγ in M. amblycephala and examined its transcription patterns at different embryonic developmental stages and in different tissues of adult and immature fish. We also normalized seven reference genes with GeNorm and calculated their gene transcription normalization factors. The full-length cDNA of PPARγ was 1,968 bp, consisting of a 218 bp 5'-untranslated region, a 1,533 bp open reading frame encoding 510 amino acid residues and a 217 bp 3'-untranslated region. The M. amblycephala PPARγ peptide was predicted to consist of four conserved domains, i.e. an N-terminal domain, a DNA-binding domain, a ligand-binding domain and a flexible hinge region. PPARγ mRNAs were detected in all studied tissues of adult and immature fish, including adipose tissue, gill, heart, liver, spleen, kidney, white muscle, intestine, brain and gonad. In adult fish, PPARγ transcription was highest in liver, followed by gill, and lowest in female gonads. Moreover, the differences among liver, gill, intestine/brain, spleen/white muscle, kidney and female gonads were highly significant (p<0.01). Transcription of PPARγ in male gonads was significantly higher than in female gonads (p<0.01). In immature fish, transcription of PPARγ was highest in intestine, followed by adipose tissue, and lowest in heart and white muscle. Highly significant differences (p<0.01) in PPARγ transcription were observed among adipose tissue, intestine, liver and heart/white muscle. Across embryonic developmental stages, PPARγ transcription in unfertilized spermatozoa was significantly higher than in unfertilized ova (p<0.01) and was the highest of all stages examined. Transcription of PPARγ increased gradually from the 2-cell stage to the 32-cell stage and then decreased until the gastrula stage, at which it was
Mariot, Roberta Fogliatto; de Oliveira, Luisa Abruzzi; Voorhuijzen, Marleen M.; Staats, Martijn; Hutten, Ronald C. B.; Van Dijk, Jeroen P.; Kok, Esther; Frazzon, Jeverson
2015-01-01
Potato (Solanum tuberosum) yield has increased dramatically over the last 50 years, and this has been achieved by a combination of improved agronomy and biotechnology efforts. Gene expression studies are under way to improve quality traits and develop new cultivars. Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) is a benchmark analytical tool for gene expression analysis, but its accuracy is highly dependent on a reliable normalization strategy based on invariant reference genes. For this reason, the goal of this work was to select and validate reference genes for transcriptional analysis of edible tubers of potato. To do so, RT-qPCR primers were designed for ten genes with relatively stable expression in potato tubers as observed in RNA-Seq experiments. Primers were designed across exon boundaries to avoid genomic DNA contamination. Differences were observed in the ranking of candidate genes identified by the geNorm, NormFinder and BestKeeper algorithms. The ranks determined by geNorm and NormFinder were very similar, and for all samples the most stable candidates were C2, exocyst complex component sec3 (SEC3) and ATCUL3/ATCUL3A/CUL3/CUL3A (CUL3A). According to BestKeeper, the importin alpha and ubiquitin-associated/ts-n genes were the most stable. Three genes were selected as reference genes for potato edible tubers in RT-qPCR studies: C2, selected in common by NormFinder and geNorm; SEC3, selected by NormFinder; and CUL3A, selected by geNorm. The appropriate reference genes identified in this work will help to improve the accuracy of gene expression quantification by taking into account differences that may be observed in RNA quality or reverse transcription efficiency across samples. PMID:25830330
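The geNorm ranking used in records like the one above rests on a simple pairwise idea: a gene is stable if its log-ratio to every other candidate varies little across samples. A minimal sketch of that M-value calculation, assuming Cq values have already been converted to log2 relative quantities; the function name and data layout are illustrative, not from any of the cited studies:

```python
import math

def genorm_m(expr):
    """expr: dict mapping gene -> list of log2 expression values across samples.
    Returns dict gene -> M value (mean pairwise SD of log-ratios);
    lower M means more stable expression."""
    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    genes = list(expr)
    M = {}
    for j in genes:
        pair_sds = []
        for k in genes:
            if k != j:
                # log-ratio of gene j to gene k in every sample
                ratios = [a - b for a, b in zip(expr[j], expr[k])]
                pair_sds.append(sd(ratios))
        M[j] = sum(pair_sds) / len(pair_sds)
    return M
```

geNorm then iteratively drops the gene with the highest M and recomputes until the most stable pair remains.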
Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya
2014-01-01
We have evaluated suitable reference genes for real-time quantitative PCR (RT-qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes are sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, underscoring the importance of validating the expression stability of reference genes in the samples of interest. Stability rankings were basically similar between geNorm and NormFinder, suggesting the usefulness of these programs despite their different algorithms. The ΔCt method gave somewhat different results in some groups, such as flower organ or fruit skin, though the overall results correlated well with geNorm and NormFinder. Expression of two cold-inducible genes, PpCBF2 and PpCBF4, was quantified using the three most and the three least stable reference genes suggested by geNorm. Although the normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is a reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of expression stability by the ΔCt method, followed by evaluation of a limited number of superior candidates by geNorm or NormFinder, will be a practical way of finding out
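The normalization strategy this abstract endorses, dividing a target gene's quantity by the geometric mean of several reference genes, is compact enough to state directly. A minimal sketch (names illustrative); the inputs are relative expression quantities, not raw Cq values:

```python
import math

def normalization_factor(ref_quantities):
    """geNorm-style normalization factor for one sample: the geometric
    mean of the relative quantities of the chosen reference genes."""
    return math.prod(ref_quantities) ** (1.0 / len(ref_quantities))

def normalize(target_quantity, ref_quantities):
    """Target-gene quantity normalized against the reference-gene panel."""
    return target_quantity / normalization_factor(ref_quantities)
```

The geometric mean damps the effect of any single reference gene drifting, which is why three stable genes outperform one.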
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Wang, Yu; Wang, Zhong-Kang; Huang, Yi; Liao, Yu-Feng; Yin, You-Ping
2014-01-01
The blister beetle Mylabris cichorii L. (Coleoptera: Meloidae) is a traditional medicinal insect recorded in the Chinese Pharmacopoeia. It synthesizes cantharidin, which kills cancer cells efficiently. Only males produce large amounts of cantharidin. Reference genes are required as endogenous controls for the analysis of differential gene expression in M. cichorii. Our study chose 10 genes as candidate reference genes. The stability of expression of these genes was analyzed by quantitative PCR and determined with two algorithms, geNorm and Normfinder. We recommend UBE3A and RPL22e as suitable reference genes in females and UBE3A, TAF5, and RPL22e in males. PMID:25368050
Chong, Gabriella; Kuo, Fu-Wen; Tsai, Sujune; Lin, Chiahsin
2017-01-01
Quantification by real-time RT-PCR requires a stable internal reference known as a housekeeping gene (HKG) for normalising the mRNA levels of target genes. The present study identified and validated stably expressed HKGs in post-thaw Symbiodinium clade G. Six potential HKGs, namely, pcna, gapdh, 18S rRNA, hsp90, rbcl, and ps1, were analysed using three different algorithms, namely, GeNorm, NormFinder, and BestKeeper. The GeNorm algorithm ranked the candidate genes as follows in the order of decreasing stability: pcna and gapdh > ps1 > 18S rRNA > hsp90 > rbcl. Results obtained using the NormFinder algorithm also showed that pcna was the most stable HKG and ps1 was the second most stable HKG. We found that the candidate HKGs examined in this study showed variable stability with respect to the three algorithms. These results indicated that both pcna and ps1 were suitable for normalising target gene expression determined by performing real-time RT-PCR in cryopreservation studies on Symbiodinium clade G. The results of the present study would help future studies to elucidate the effect of cryopreservation on gene expression in dinoflagellates. PMID:28067273
Reference Gene Validation for RT-qPCR, a Note on Different Available Software Packages
De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia
2015-01-01
Background An appropriate normalization strategy is crucial for data analysis from real-time reverse transcription polymerase chain reactions (RT-qPCR). It is widely accepted that stable reference genes should be identified and validated, since no single gene is stably expressed across cell types or within cells under different conditions. Different algorithms exist to validate optimal reference genes for normalization. Using human cells, we here compare the three main methods to the online RefFinder tool, which integrates these algorithms, along with R-based software packages that include the NormFinder and GeNorm algorithms. Results 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma-like cell line TCam-2 or with equine bone marrow-derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained from the original software packages and the RefFinder tool, which uses raw Cq values as input. When the raw data were reanalysed assuming 100% efficiency for all genes, the outputs of the original software packages were similar to those of RefFinder, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. Conclusions This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use. PMID:25825906
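The bias this report attributes to the 100%-efficiency assumption follows directly from how relative quantities are computed from Cq values. A hedged, Pfaffl-style sketch of efficiency-corrected quantification (function names are illustrative):

```python
def rel_quantity(cq, cq_calibrator, efficiency=2.0):
    """Relative quantity from quantification cycles. efficiency=2.0 encodes
    the 100%-efficiency assumption (perfect doubling per cycle); a real
    assay might amplify at, say, 1.8x per cycle."""
    return efficiency ** (cq_calibrator - cq)

def efficiency_corrected_ratio(cq_t, cq_t_cal, e_t, cq_r, cq_r_cal, e_r):
    """Target-gene expression relative to a reference gene, with each
    gene corrected by its own amplification efficiency."""
    return rel_quantity(cq_t, cq_t_cal, e_t) / rel_quantity(cq_r, cq_r_cal, e_r)
```

With a 3-cycle difference, assuming 100% efficiency yields an 8-fold change, while a true efficiency of 1.8 yields only about 5.8-fold: exactly the kind of discrepancy the authors describe between RefFinder and the efficiency-aware packages.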
Hirschburger, Daniela; Müller, Manuel; Voegele, Ralf T.; Link, Tobias
2015-01-01
Phakopsora pachyrhizi is a devastating pathogen of soybean, endangering soybean production worldwide. Use of Host-Induced Gene Silencing (HIGS) and the study of effector proteins could provide novel strategies for pathogen control. For both approaches, quantification of transcript abundance by RT-qPCR is essential, and suitably stable reference genes for normalization are indispensable for accurate RT-qPCR results. Following the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines and using the geNorm and NormFinder algorithms, we tested candidate reference genes from P. pachyrhizi and Glycine max for their suitability in normalizing transcript levels throughout the infection process. For P. pachyrhizi we recommend a combination of CytB and PDK or GAPDH for in planta experiments. Gene expression during in vitro stages and over the whole infection process was found to be highly unstable; here, RPS14 and UbcE2 are ranked best by geNorm and NormFinder. Alternatively, CytB, which has the smallest Cq range (Cq: quantification cycle), could be used. We recommend specifying gene expression relative to the germ tube stage rather than the resting urediospore stage. For studies omitting the resting spore and appressorium stages, a combination of Elf3 and RPS9, or PDK and GAPDH, should be used. For normalization of soybean genes during rust infection, Ukn2 and cons7 are recommended. PMID:26404265
Li, X; Huang, K; Chen, F; Li, W; Sun, S; Shi, X-E; Yang, G
2016-06-01
Intramuscular fat (IMF) is an important trait influencing meat quality, and intramuscular stromal-vascular cell (MSVC) differentiation is a key factor affecting IMF deposition. Quantitative real-time PCR (qPCR) is often used to screen genes differentially expressed during the differentiation of MSVCs, for which proper reference genes are essential. In this study, we assessed 31 previously reported reference genes for their suitability in porcine MSVCs derived from longissimus dorsi using qPCR. The expression stability of these genes was evaluated using the NormFinder, geNorm and BestKeeper algorithms. NormFinder and geNorm identified ACTB, ALDOA and RPS18 as the three most stable genes, whereas BestKeeper identified RPL13A, SSU72 and DAK. GAPDH was found to be the least stable gene by all three software packages, indicating that it is not an appropriate reference gene in qPCR assays. These results should be helpful for further studies in pigs exploring the molecular mechanism underlying IMF deposition.
Long, Xiangyu; He, Bin; Gao, Xinsheng; Qin, Yunxia; Yang, Jianghua; Fang, Yongjun; Qi, Jiyan; Tang, Chaorong
2015-06-01
In rubber tree, latex regeneration is one of the decisive factors influencing rubber yield, yet its molecular regulation is not well understood. Quantitative real-time PCR (qPCR) is a popular and powerful tool for investigating the molecular mechanisms of latex regeneration, but suitable reference genes for qPCR had not been available for studying the expression of target genes during latex regeneration. In this study, 20 candidate reference genes were selected and evaluated for expression stability across samples taken during latex regeneration. All reference genes showed a relatively wide range of threshold cycle values, and their stability was assessed by four different algorithms (the comparative delta Ct method, BestKeeper, NormFinder and GeNorm). Three of the methods (the comparative delta Ct method, NormFinder and GeNorm) gave similar results, identifying UBC4, ADF, UBC2a, eIF2 and ADF4 as the top five suitable reference genes and 18S as the least suitable one. Applying the screened reference genes should improve the accuracy and reliability of gene expression analysis in latex regeneration experiments.
Walter, Robert Fred Henry; Werner, Robert; Vollbrecht, Claudia; Hager, Thomas; Flom, Elena; Christoph, Daniel Christian; Schmeller, Jan; Schmid, Kurt Werner; Wohlschlaeger, Jeremias; Mairinger, Fabian Dominik
2016-01-01
Background Neuroendocrine lung cancer (NELC) represents 25% of all lung cancer cases, and large patient collectives exist as formalin-fixed, paraffin-embedded (FFPE) tissue only. FFPE is controversially discussed as a source for molecular biological analyses, and reference genes for NELC are poorly established. Material and methods Forty-three representative FFPE specimens were used for mRNA expression analysis using the digital nCounter technology (NanoString). Based on recent literature, a total of 91 mRNA targets were investigated as potential tumor markers or reference genes. The geNorm and NormFinder algorithms and coefficients of correlation were used to identify the most stable reference genes. Statistical analysis was performed using the R programming environment (version 3.1.1). Results RNA integrity (RIN) ranged from 1.8 to 2.6 and concentrations from 34 to 2,109 ng/μl. Nevertheless, the nCounter technology gave evaluable results for all samples tested. ACTB, CDKN1B, GAPDH, GRB2, RHOA and SDCBP were identified as constantly expressed genes with high expression stability (M values) according to geNorm, NormFinder and coefficients of correlation. Conclusion FFPE-derived mRNA is suitable for molecular biological investigation via the nCounter technology, although it is highly degraded. ACTB, CDKN1B, GAPDH, GRB2, RHOA and SDCBP are potent reference genes in neuroendocrine tumors of the lung. PMID:27802291
Mathur, Deepali; Urena-Peralta, Juan R.; Lopez-Rodas, Gerardo; Casanova, Bonaventura; Coret-Ferrer, Francisco; Burgal-Marti, Maria
2015-01-01
Gene expression studies employing real-time PCR have become an intrinsic part of biomedical research. Appropriate normalization of target gene transcript(s) against stably expressed housekeeping genes under the individual experimental conditions is crucial to obtaining accurate results. In multiple sclerosis (MS), several gene expression studies have been undertaken; however, the stability of housekeeping gene expression in this disease has not yet been explored. Recent research suggests that housekeeping gene expression may vary under different experimental conditions, so evaluating expression stability is indispensable for accurate normalization of target gene transcripts. The present study evaluates the expression stability of seven housekeeping genes in rat granule neurons treated with cerebrospinal fluid of MS patients. The selected reference genes were quantified by real-time PCR and their expression stability was assessed using the GeNorm and NormFinder algorithms. GeNorm identified transferrin receptor (Tfrc) and beta-2 microglobulin (B2m) as the most stable genes, followed by ribosomal protein L19 (Rpl19), whereas β-actin (ActB) and glyceraldehyde-3-phosphate dehydrogenase (Gapdh) fluctuated the most in these neurons. NormFinder identified Tfrc as the most invariant gene, followed by B2m and Rpl19; ActB and Gapdh were likewise the least stable genes by this algorithm. Both methods reported Tfrc and B2m as the most stably expressed genes and Gapdh as the least stable one. Altogether, our data demonstrate the importance of pre-validating housekeeping genes for accurate normalization and indicate Tfrc and B2m as the best endogenous controls in MS. ActB and Gapdh are not recommended for gene expression studies similar to the current one. PMID:26441545
Jatav, Pradeep; Sodhi, Monika; Sharma, Ankita; Mann, Sandeep; Kishore, Amit; Shandilya, Umesh K; Mohanty, Ashok K; Kataria, Ranjit S; Yadav, Poonam; Verma, Preeti; Kumar, Surinder; Malakar, Dhruba; Mukesh, Manishi
2016-03-01
The present study aimed to evaluate the suitability of 10 candidate genes, namely GAPDH, ACTB, RPS15A, RPL4, RPS9, RPS23, HMBS, HPRT1, EEF1A1 and UBI, as internal control genes (ICG) for normalizing transcriptional data from mammary epithelial cells (MEC) of Indian cows. A total of 52 MEC samples were isolated from milk of Sahiwal cows (a major indigenous dairy breed of India) across different stages of lactation: early (5-15 days), peak (30-60 days), mid (100-140 days) and late (> 240 days). Three different statistical algorithms, geNorm, NormFinder and BestKeeper, were used to assess the suitability of these genes. In the geNorm analysis, all genes exhibited expression stability (M) values below 0.5, with EEF1A1 and RPL4 showing the greatest expression stability. Like geNorm, NormFinder also identified EEF1A1 and RPL4 as two of the most stable genes. In the BestKeeper analysis as well, all 10 genes showed consistent expression levels; four genes, that is, EEF1A1, RPL4, GAPDH and ACTB, exhibited a higher coefficient of correlation to the BestKeeper index and a lower coefficient of variance and standard deviation, indicating their superiority as ICG. The present analysis provides evidence that RPL4, EEF1A1, GAPDH and ACTB are probably the most suitable genes for normalizing transcriptional data from milk-derived mammary epithelial cells of Indian cows.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
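The basic concepts this report introduces (a population, fitness-based selection, crossover, and mutation) fit in a few lines. A toy sketch that maximizes the number of 1-bits in a string; all parameter values and names here are arbitrary illustrative choices, not from the report:

```python
import random

def ga_onemax(n_bits=20, pop_size=30, generations=60, p_mut=0.05, seed=1):
    """Toy genetic algorithm: tournament selection, one-point crossover,
    and per-bit mutation, maximizing the count of 1-bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = sum  # fitness of an individual = number of 1-bits
    for _ in range(generations):
        def pick():
            # tournament of size 2: keep the fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga_onemax()
```

Even this bare-bones loop typically drives the population close to the all-ones optimum within a few dozen generations.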
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Su, Yun; He, Wen-Bo; Wang, Jia; Li, Jun-Min; Liu, Shu-Sheng; Wang, Xiao-Wei
2013-06-01
Quantitative real-time reverse transcription polymerase chain reaction is widely used for gene expression analysis, and robust normalization against stably expressed endogenous reference genes (ERGs) is necessary to obtain accurate results. In this study, the stability of nine housekeeping genes of the sweetpotato whitefly, Bemisia tabaci (Hemiptera: Aleyrodidae) Mediterranean, was evaluated under various conditions by quantitative real-time reverse transcription polymerase chain reaction using the geNorm and Normfinder programs. Both programs suggested alpha-tubulin/ubiquitin and 18S small subunit ribosomal RNA as the most stable genes for bacterium- and insecticide-treated whiteflies, respectively. For developmental stages, organs, and samples including salivary glands and the whole body, the transcription initiation factor TFIID subunit was calculated as the most stably expressed gene by both programs. In addition, we compared RNA-seq data with the results of geNorm and Normfinder and found that the stable genes revealed by RNA-seq analysis were also the ERGs recommended by geNorm and Normfinder. Furthermore, using the most stable gene suggested by RNA-seq analysis as an ERG produced gene expression patterns similar to those obtained by normalizing against the most stable gene selected by geNorm and Normfinder, or against the multiple genes recommended by geNorm. This indicates that RNA-seq data are reliable and provide a rich source of ERG candidates. Our results will benefit future research on gene expression profiles of whiteflies and possibly other organisms.
Cieslak, Jakub; Mackowski, Mariusz; Czyzak-Runowska, Grazyna; Wojtowski, Jacek; Puppel, Kamila; Kuczynska, Beata; Pawlak, Piotr
2015-01-01
Apart from the well-known role of somatic cell count as a parameter reflecting the inflammatory status of the mammary gland, the cells isolated from milk are considered a valuable material for gene expression studies in mammals. Owing to its unique composition, mare's milk consumption has attracted increasing interest in recent years, so investigating the genetic background of variability in horse milk presents an interesting study model. Using 39 milk samples collected from mares of three breeds (Polish Primitive Horse, Polish Cold-blooded Horse, Polish Warmblood Horse), we aimed to investigate the utility of equine milk somatic cells as a source of mRNA and to screen for the best reference genes for RT-qPCR using the geNorm and NormFinder algorithms. The results showed that despite the relatively low somatic cell counts in mare's milk, the amount and quality of the extracted RNA are sufficient for gene expression studies. Analysis of 7 potential reference genes for normalization of RT-qPCR experiments on equine milk somatic cells revealed some differences between the outcomes of the two algorithms, although both identified the KRT8 and TOP2B genes as the most stable. geNorm indicated that a combination of 4 reference genes (ACTB, GAPDH, TOP2B and KRT8) is required for appropriate normalization of RT-qPCR experiments, whereas NormFinder indicated the combination of KRT8 and RPS9 as the most suitable. A trial study of the relative transcript abundance of the beta-casein gene using various types and numbers of internal control genes confirmed once again that selecting the proper combination of reference genes is crucial to the final results of any real-time PCR experiment.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
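Of the four haplotype-reconstruction algorithms mentioned, combinatorial optimization by simulated annealing has the most generic core loop: propose a neighboring solution, always accept improvements, and accept worsenings with a probability that shrinks as the temperature cools. A sketch under assumed names, applied here to a toy bit-vector problem rather than to actual haplotype vectors:

```python
import math
import random

def simulated_annealing(cost, start, neighbor, t0=5.0, cooling=0.95,
                        steps=400, seed=7):
    """Generic simulated-annealing loop (parameters and cooling schedule
    are illustrative). Returns the lowest-cost state encountered."""
    rng = random.Random(seed)
    state, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        # accept improvements always; accept worsenings with prob e^(-delta/t)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling
    return best

# toy instance: minimize the number of 1-bits in a vector
start = (1,) * 10
def flip_one(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

best = simulated_annealing(sum, start, flip_one)
```

For haplotyping, the state would instead be a haplotype vector consistent with the observed phenotypes, and the cost a measure of implied recombinations.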
NASA Astrophysics Data System (ADS)
Skarzyńska, Agnieszka; Pawełkowicz, Magdalena; Pląder, Wojciech; Przybecki, Zbigniew
2016-09-01
Real-time quantitative polymerase chain reaction is considered the most reliable method for gene expression studies. However, the expression of a target gene can be misinterpreted due to improper normalization, so a crucial step in analysing qPCR data is the selection of suitable reference genes, which should be validated experimentally. To choose genes with stable expression in the designed experiment, we performed a reference gene expression analysis. In this study, genes described in the literature and novel genes predicted as control genes by in silico analysis of transcriptome data were used. Analysis with the geNorm and NormFinder algorithms allowed us to rank the candidate genes and indicate the best references for studying flower morphogenesis. According to the results, the genes CACS and CYCL showed the most stable expression, whereas the least suitable genes were TUA and EF.
Kapila, Neha; Kishore, Amit; Sodhi, Monika; Sharma, Ankita; Kumar, Pawan; Mohanty, A K; Jerath, Tanushri; Mukesh, M
2013-01-01
Gene expression studies require appropriate normalization methods and hence proper evaluation of reference genes. To date, few studies have reported the identification of suitable reference genes in buffaloes. The present study was undertaken to determine a panel of suitable reference genes in heat-stressed buffalo mammary epithelial cells (MECs). Briefly, MEC cultures from buffalo mammary gland were exposed to 42 °C for one hour and subsequently allowed to recover at 37 °C for different time intervals (from 30 min to 48 h). Three different algorithms, the geNorm, NormFinder, and BestKeeper software packages, were used to evaluate the stability of 16 potential reference genes from different functional classes. Our data identified RPL4, EEF1A1, and RPS23 as the most appropriate reference genes for normalization of qPCR data in heat-stressed buffalo MECs.
2013-01-01
Background Phytoplasmas are phloem-limited phytopathogenic wall-less bacteria and represent a major threat to agriculture worldwide. They are transmitted in a persistent, propagative manner by phloem-sucking Hemipteran insects. For gene expression studies based on mRNA quantification by RT-qPCR, stability of housekeeping genes is crucial. The aim of this study was the identification of reference genes to study the effect of phytoplasma infection on gene expression of two leafhopper vector species. The identified reference genes will be useful tools to investigate differential gene expression of leafhopper vectors upon phytoplasma infection. Results The expression profiles of ribosomal 18S, actin, ATP synthase β, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and tropomyosin were determined in two leafhopper vector species (Hemiptera: Cicadellidae), both healthy and infected by “Candidatus Phytoplasma asteris” (chrysanthemum yellows phytoplasma strain, CYP). Insects were analyzed at three different times post acquisition, and expression stabilities of the selected genes were evaluated with BestKeeper, geNorm and Normfinder algorithms. In Euscelidius variegatus, all genes under all treatments were stable and could serve as reference genes. In Macrosteles quadripunctulatus, BestKeeper and Normfinder analysis indicated ATP synthase β, tropomyosin and GAPDH as the most stable, whereas geNorm identified reliable genes only for early stages of infection. Conclusions In this study a validation of five candidate reference genes was performed with three algorithms, and housekeeping genes were identified for over time transcript profiling of two leafhopper vector species infected by CYP. This work set up an experimental system to study the molecular basis of phytoplasma multiplication in the insect body, in order to elucidate mechanisms of vector specificity. Most of the sequences provided in this study are new for leafhoppers, which are vectors of economically
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
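As a concrete illustration of the kind of performance guarantee discussed above, the classic maximal-matching heuristic for minimum vertex cover always returns a cover at most twice the optimum (a textbook example, not one drawn from this abstract):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching heuristic for minimum vertex cover:
    whenever an edge has neither endpoint covered yet, take both
    endpoints.  The chosen edges form a matching, and any cover must
    contain at least one endpoint of each matched edge, so the result
    is at most twice the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 1-2-3-4 this picks {1, 2, 3, 4}, twice the optimal cover {2, 3}, matching the guarantee exactly.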
Validation of reference genes for RT-qPCR analysis in Herbaspirillum seropedicae.
Pessoa, Daniella Duarte Villarinho; Vidal, Marcia Soares; Baldani, José Ivo; Simoes-Araujo, Jean Luiz
2016-08-01
The RT-qPCR technique needs a validated set of reference genes to ensure the consistency of gene expression results. Expression stabilities for 9 genes from Herbaspirillum seropedicae, strain HRC54, grown with different carbon sources were calculated using geNorm and NormFinder, and the gene rpoA showed the best stability values.
Ofinran, Olumide; Bose, Ujjal; Hay, Daniel; Abdul, Summi; Tufatelli, Cristina; Khan, Raheela
2016-12-01
The use of reference genes is the most common method of controlling the variation in mRNA expression during quantitative polymerase chain reaction, although the use of traditional reference genes, such as β‑actin, glyceraldehyde‑3‑phosphate dehydrogenase or 18S ribosomal RNA, without validation occasionally leads to unreliable results. Therefore, the present study aimed to evaluate a set of five commonly used reference genes to determine the most suitable for gene expression studies in normal ovarian tissues, borderline ovarian and ovarian cancer tissues. The expression stabilities of these genes were ranked using two gene stability algorithms, geNorm and NormFinder. Using geNorm, the two best reference genes in ovarian cancer were β‑glucuronidase and β‑actin. Hypoxanthine phosphoribosyltransferase‑1 and β‑glucuronidase were the most stable in ovarian borderline tumours, and hypoxanthine phosphoribosyltransferase‑1 and glyceraldehyde‑3‑phosphate dehydrogenase were the most stable in normal ovarian tissues. NormFinder ranked β‑actin the most stable in ovarian cancer, and the best combination of two genes was β‑glucuronidase and β‑actin. In borderline tumours, hypoxanthine phosphoribosyltransferase‑1 was identified as the most stable, and the best combination was hypoxanthine phosphoribosyltransferase‑1 and β‑glucuronidase. In normal ovarian tissues, β‑glucuronidase was recommended as the optimal reference gene, and the optimal pair of reference genes was hypoxanthine phosphoribosyltransferase‑1 and β‑actin. To the best of our knowledge, this is the first study to investigate the selection of a set of reference genes for normalisation in quantitative polymerase chain reactions in different ovarian tissues, and therefore it is recommended that β‑glucuronidase, β‑actin and hypoxanthine phosphoribosyltransferase‑1 are the most suitable reference genes for such analyses.
Sinha, Pallavi; Singh, Vikas K.; Suryanarayana, V.; Krishnamurthy, L.; Saxena, Rachit K.; Varshney, Rajeev K.
2015-01-01
Gene expression analysis using quantitative real-time PCR (qRT-PCR) is a very sensitive technique, and its sensitivity depends on the stable performance of the reference gene(s) used in the study. A number of housekeeping genes have been used in various expression studies in many crops; however, their expression was found to be inconsistent under different stress conditions. As a result, species-specific housekeeping genes have been recommended for expression studies in several crop species. However, such specific housekeeping genes have not been reported for pigeonpea (Cajanus cajan), despite the fact that a genome sequence has become available for the crop. To identify stable housekeeping genes in pigeonpea for expression analysis under drought stress conditions, the relative expression variations of 10 commonly used housekeeping genes (EF1α, UBQ10, GAPDH, 18SrRNA, 25SrRNA, TUB6, ACT1, IF4α, UBC and HSP90) were studied in root, stem and leaf tissues of Asha (ICPL 87119). Three statistical algorithms, geNorm, NormFinder and BestKeeper, were used to define the stability of candidate genes. geNorm analysis identified IF4α and TUB6 as the most stable housekeeping genes; however, NormFinder analysis determined IF4α and HSP90 to be the most stable under drought stress conditions. Subsequently, validation of the identified candidate genes was undertaken in qRT-PCR based gene expression analysis of the uspA gene, which plays an important role under drought stress conditions in pigeonpea. The relative quantification of the uspA gene varied according to the internal controls (stable and least stable genes), thus highlighting the importance of the choice, as well as the validation, of internal controls in such experiments. The identified stable and validated housekeeping genes will facilitate gene expression studies in pigeonpea, especially under drought stress conditions. PMID:25849964
2009-01-01
Background Reference genes are used as internal standards to normalize mRNA abundance in quantitative real-time PCR and thereby allow a direct comparison between samples. So far most of these expression studies used human or classical laboratory model species, whereas studies on non-model organisms under in-situ conditions are quite rare. However, only studies in free-ranging populations can reveal the effects of natural selection on the expression levels of functionally important genes. In order to test the feasibility of gene expression studies in wildlife samples we transferred and validated potential reference genes that were developed for lab mice (Mus musculus) to samples of wild yellow-necked mice, Apodemus flavicollis. The stability and suitability of eight potential reference genes was assessed by the programs BestKeeper, NormFinder and geNorm. Findings Although the three programs used different algorithms, the ranking order of reference genes was significantly concordant, and geNorm differed in only one, NormFinder in two positions compared to BestKeeper. The genes ordered by their mean rank from the most to the least stable gene were: Rps18, Sdha, Canx, Actg1, Pgk1, Ubc, Rpl13a and Actb. Analyses of the normalization factor revealed best results when the five most stable genes were included for normalization. Discussion We established a SYBR green qPCR assay for liver samples of wild A. flavicollis and conclude that five genes should be used for appropriate normalization. Our study provides the basis to investigate differential expression of genes under natural selection in liver samples of A. flavicollis. This approach might also be applicable to other non-model organisms. PMID:20030847
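The normalization factor referred to above is, in the geNorm scheme, the geometric mean of the relative quantities of the selected reference genes in each sample. A minimal sketch (hypothetical quantities; function name is illustrative):

```python
def normalization_factor(ref_quantities):
    """geNorm-style normalization factor for one sample: the geometric
    mean of the relative quantities of the selected reference genes.
    A target gene's quantity in that sample is divided by this factor."""
    product = 1.0
    for q in ref_quantities:
        product *= q
    return product ** (1.0 / len(ref_quantities))
```

Using the geometric rather than arithmetic mean damps the influence of any single outlying reference gene.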
Hu, Meizhen; Hu, Wenbin; Xia, Zhiqiang; Zhou, Xincheng; Wang, Wenquan
2016-01-01
Reverse transcription quantitative real-time polymerase chain reaction (real-time PCR, also referred to as quantitative RT-PCR or RT-qPCR) is a highly sensitive and high-throughput method used to study gene expression. Despite the numerous advantages of RT-qPCR, its accuracy is strongly influenced by the stability of internal reference genes used for normalizations. To date, few studies on the identification of reference genes have been performed on cassava (Manihot esculenta Crantz). Therefore, we selected 26 candidate reference genes mainly via the three following channels: reference genes used in previous studies on cassava, the orthologs of the most stable Arabidopsis genes, and the sequences obtained from 32 cassava transcriptome sequence data. Then, we employed ABI 7900 HT and SYBR Green PCR mix to assess the expression of these genes in 21 materials obtained from various cassava samples under different developmental and environmental conditions. The stability of gene expression was analyzed using two statistical algorithms, namely geNorm and NormFinder. geNorm software suggests the combination of cassava4.1_017977 and cassava4.1_006391 as sufficient reference genes for major cassava samples, the union of cassava4.1_014335 and cassava4.1_006884 as best choice for drought stressed samples, and the association of cassava4.1_012496 and cassava4.1_006391 as optimal choice for normally grown samples. NormFinder software recommends cassava4.1_006884 or cassava4.1_006776 as superior reference for qPCR analysis of different materials and organs of drought stressed or normally grown cassava, respectively. Results provide an important resource for cassava reference genes under specific conditions. The limitations of these findings were also discussed. Furthermore, we suggested some strategies that may be used to select candidate reference genes. PMID:27242878
Julian, Guilherme Silva; Oliveira, Renato Watanabe de; Tufik, Sergio; Chagas, Jair Ribeiro
2016-01-01
Obstructive sleep apnea (OSA) has been associated with oxidative stress and various cardiovascular consequences, such as increased cardiovascular disease risk. Quantitative real-time PCR is frequently employed to assess changes in gene expression in experimental models. In this study, we analyzed the effects of chronic intermittent hypoxia (an experimental model of OSA) on housekeeping gene expression in the left cardiac ventricle of rats. Analyses via four different approaches (the geNorm, BestKeeper, and NormFinder algorithms, and 2^-ΔCt (threshold cycle) data analysis) produced similar results: all genes were found to be suitable for use, glyceraldehyde-3-phosphate dehydrogenase and 18S being classified as the most and the least stable, respectively. The use of more than one housekeeping gene is strongly advised.
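The 2^-ΔCt analysis mentioned above reduces to a one-line calculation once the threshold cycles are in hand (the Ct values below are illustrative, not data from the study):

```python
def relative_expression(ct_target, ct_reference):
    """2^-ΔCt: abundance of the target gene relative to the reference
    gene, from their threshold cycles (Ct).  Each extra cycle needed to
    reach threshold corresponds to a twofold lower starting abundance."""
    return 2.0 ** -(ct_target - ct_reference)

rel = relative_expression(25.0, 22.0)  # 0.125: target ~8-fold less abundant
```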
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Jacob, Francis; Guertler, Rea; Naim, Stephanie; Nixdorf, Sheri; Fedier, André; Hacker, Neville F.; Heinzelmann-Schwarz, Viola
2013-01-01
Reverse transcription quantitative polymerase chain reaction (RT-qPCR) is a standard technique in most laboratories. The selection of reference genes is essential for data normalization, and the selection of suitable reference genes remains critical. Our aim was to 1) review the literature since implementation of the MIQE guidelines in order to identify the degree of acceptance; 2) compare various algorithms in their expression stability; 3) identify a set of suitable and most reliable reference genes for a variety of human cancer cell lines. A PubMed database review was performed and publications since 2009 were selected. Twelve putative reference genes were profiled in normal and various cancer cell lines (n = 25) using 2-step RT-qPCR. Investigated reference genes were ranked according to their expression stability by five algorithms (geNorm, NormFinder, BestKeeper, comparative ΔCt, and RefFinder). Our review revealed 37 publications, with two thirds patient samples and one third cell lines. qPCR efficiency was given in 68.4% of all publications, but only 28.9% of all studies provided RNA/cDNA amounts and standard curves. The geNorm and NormFinder algorithms were used in combination in 60.5% of studies. In our selection of 25 cancer cell lines, we identified HSPCB, RRN18S, and RPS13 as the most stably expressed reference genes. In the subset of ovarian cancer cell lines, the reference genes were PPIA, RPS13 and SDHA, clearly demonstrating the necessity to select genes depending on the research focus. Moreover, a cohort of at least three suitable reference genes needs to be established in advance of the experiments, according to the guidelines. For establishing a set of reference genes for gene normalization we recommend the use of ideally three reference genes selected by at least three stability algorithms. The unfortunate lack of compliance with the MIQE guidelines reflects that these need to be further established in the research community. PMID:23554992
Software For Genetic Algorithms
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
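As context for what such a framework provides, the generic loop underlying any genetic-algorithm tool can be sketched in a few lines (a minimal, illustrative reconstruction; SPLICER itself is written in Think C and its actual API is not shown here):

```python
import random

def genetic_search(fitness, n_bits=8, pop_size=20, generations=50, seed=1):
    """Minimal generational genetic algorithm over fixed-length bit
    strings: tournament selection (size 2), one-point crossover, and
    occasional single-bit mutation.  Returns the fittest final member."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # binary tournament
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax toy problem: maximize the number of 1 bits (sum is the fitness)
best = genetic_search(sum)
```

A framework like SPLICER supplies this skeleton so that users only plug in a problem-specific fitness function and representation.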
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
Quantum algorithms: an overview
NASA Astrophysics Data System (ADS)
Montanaro, Ashley
2016-01-01
Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
NASA Astrophysics Data System (ADS)
Graf, Norman A.
2001-07-01
An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
License plate detection algorithm
NASA Astrophysics Data System (ADS)
Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds
2013-12-01
A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2,500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of the algorithm's parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and therefore geometric distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture, camera location and others, and the parameters of the algorithm were also defined.
Distributed Minimum Hop Algorithms
1982-01-01
After receiving the acknowledgement, node d starts iteration i+1; otherwise the algorithm terminates. The precise behavior of the algorithm under these circumstances is described by the pidgin Algol program in the appendix, which is executed by each node. Algorithm D1 is given in pidgin Algol.
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
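The shift-and-mask search the subalgorithms perform can be sketched as follows (a simplified, hypothetical reconstruction of the idea, not the NASA code): try mask widths and shift amounts until every key maps to a distinct value, so membership testing needs no secondary hashing and runs in constant time.

```python
def find_shift_mask(keys, max_shift=16, max_bits=8):
    """Search shift/mask combinations until (key >> shift) & mask is
    unique for every key, i.e. a collision-free mapping that needs no
    secondary hash or table search.  Returns (shift, mask) or None."""
    for bits in range(1, max_bits + 1):
        mask = (1 << bits) - 1             # isolate `bits` low-order bits
        for shift in range(max_shift + 1):
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):   # every key got a distinct value
                return shift, mask
    return None

# find_shift_mask([16, 32, 48]) returns (4, 3): the keys map to 1, 2, 3
```

Once a (shift, mask) pair is found, the synthesized membership test is a fixed sequence of operations, so it executes in constant time as the abstract guarantees.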
Transitional Division Algorithms.
ERIC Educational Resources Information Center
Laing, Robert A.; Meyer, Ruth Ann
1982-01-01
A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…
Ultrametric Hierarchical Clustering Algorithms.
ERIC Educational Resources Information Center
Milligan, Glenn W.
1979-01-01
Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)
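The ultrametric inequality referred to above, d(i,k) ≤ max(d(i,j), d(j,k)) for all triples, can be checked mechanically on any distance matrix; a small sketch (illustrative helper, not from the paper):

```python
from itertools import permutations

def is_ultrametric(d):
    """True iff the symmetric distance matrix d (list of lists)
    satisfies the ultrametric inequality
    d[i][k] <= max(d[i][j], d[j][k]) for every triple i, j, k."""
    return all(d[i][k] <= max(d[i][j], d[j][k])
               for i, j, k in permutations(range(len(d)), 3))
```

Cophenetic distances read off a single- or complete-linkage dendrogram satisfy this check; Milligan's point is that some other hierarchical methods produce structures that fail it.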
The Training Effectiveness Algorithm.
ERIC Educational Resources Information Center
Cantor, Jeffrey A.
1988-01-01
Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Liu, Lin-Lin; Zhao, Hui; Ma, Teng-Fei; Ge, Fei; Chen, Ce-Shi; Zhang, Ya-Ping
2015-01-01
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is a powerful technique for examining gene expression changes during tumorigenesis. Target gene expression is generally normalized by a stably expressed endogenous reference gene; however, reference gene expression may differ among tissues under various circumstances. Because no valid reference genes have been documented for human breast cancer cell lines containing different cancer subtypes treated with transient transfection, we identified appropriate and reliable reference genes from thirteen candidates in a panel of 10 normal and cancerous human breast cell lines under experimental conditions with/without transfection treatments with two transfection reagents. Reference gene expression stability was calculated using four algorithms (geNorm, NormFinder, BestKeeper and comparative delta Ct), and the recommended comprehensive ranking was provided using geometric means of the ranking values using the RefFinder tool. GeNorm analysis revealed that two reference genes should be sufficient for all cases in this study. A stability analysis suggests that 18S rRNA-ACTB is the best reference gene combination across all cell lines; ACTB-GAPDH is best for basal breast cancer cell lines; and HSPCB-ACTB is best for ER+ breast cancer cells. After transfection, the stability ranking of the reference gene fluctuated, especially with Lipofectamine 2000 transfection reagent in two subtypes of basal and ER+ breast cell lines. Comparisons of relative target gene (HER2) expression revealed different expressional patterns depending on the reference genes used for normalization. We suggest that identifying the most stable and suitable reference genes is critical for studying specific cell lines under certain circumstances.
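The dependence of the conclusion on the reference gene can be seen directly in the standard 2^-ΔΔCt calculation: the same target Ct values yield different fold changes when the reference gene itself drifts between conditions (the Ct values below are hypothetical, not data from the study):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """2^-ΔΔCt fold change of a target gene between two conditions,
    normalized to a chosen reference gene."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_treated - d_control)

# Identical hypothetical target Cts, two different reference genes:
with_stable_ref = ddct_fold_change(24.0, 20.0, 26.0, 20.0)    # 4.0
with_drifting_ref = ddct_fold_change(24.0, 21.0, 26.0, 19.0)  # 16.0
```

A two-cycle drift in the reference gene quadruples the apparent fold change, which is why stability validation of the internal control matters.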
Evaluation of reference genes for gene expression studies in human brown adipose tissue.
Taube, Magdalena; Andersson-Assarsson, Johanna C; Lindberg, Kristin; Pereira, Maria J; Gäbel, Markus; Svensson, Maria K; Eriksson, Jan W; Svensson, Per-Arne
2015-01-01
Human brown adipose tissue (BAT) has during the last 5 years been the subject of increasing research interest, due to its putative function as a target for future obesity treatments. The most commonly used method for molecular studies of human BAT is the quantitative polymerase chain reaction (qPCR). This method requires normalization to a reference gene (a gene with uniform expression under different experimental conditions, e.g. similar expression levels between human BAT and white adipose tissue (WAT)), but so far no evaluation of reference genes for human BAT has been performed. Two different microarray datasets with samples containing human BAT were used to search for genes with low variability in expression levels. Seven genes (FAM96B, GNB1, GNB2, HUWE1, PSMB2, RING1 and TPT1) identified by microarray analysis, and 8 commonly used reference genes (18S, B2M, GAPDH, LRP10, PPIA, RPLP0, UBC, and YWHAZ) were selected and further analyzed by quantitative PCR in both BAT-containing perirenal adipose tissue and subcutaneous adipose tissue. Results were analyzed using 2 different algorithms (NormFinder and geNorm). Most of the commonly used reference genes displayed acceptably low variability (geNorm M-values <0.5) in the samples analyzed, but the novel reference genes identified by microarray displayed an even lower variability (M-values <0.25). Our data suggest that PSMB2, GNB2 and GNB1 are suitable novel reference genes for qPCR analysis of human BAT, and we recommend that they be included in future gene expression studies of human BAT.
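The geNorm M-value thresholds quoted above (<0.5, <0.25) come from a simple statistic: a gene's M is the mean of the standard deviations of its log2 expression ratios against every other candidate across samples. A minimal sketch, with invented expression values:

```python
import math
import statistics

def genorm_m(expr):
    """geNorm M-value per candidate: the arithmetic mean of the
    standard deviations of log2 expression ratios between that gene
    and each other candidate across all samples. Lower M = more
    stable. `expr` maps gene -> list of relative quantities."""
    genes = list(expr)
    m_values = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(statistics.stdev(ratios))
        m_values[g] = sum(sds) / len(sds)
    return m_values

# invented illustration, not data from the study
expr = {
    "GNB2":  [1.00, 1.05, 0.95, 1.00],  # stable candidate
    "PSMB2": [2.00, 2.10, 1.90, 2.00],  # stable (constant ratio to GNB2)
    "NOISY": [1.00, 4.00, 0.50, 2.00],  # unstable candidate
}
m_values = genorm_m(expr)
# the two stable genes get a lower M than the noisy one
```

Because M is built from pairwise ratios, two co-regulated genes can look deceptively stable together, which is why geNorm is usually run on a larger candidate panel.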
Rueda-Martínez, Carmen; Fernández, M. Carmen; Soto-Navarrete, María Teresa; Jiménez-Navarro, Manuel; Durán, Ana Carmen; Fernández, Borja
2016-01-01
Bicuspid aortic valve (BAV) is the most frequent congenital cardiac malformation in humans, and appears frequently associated with dilatation of the ascending aorta. This association is likely the result of a common aetiology. Currently, a Syrian hamster strain with a relatively high (∼40%) incidence of BAV constitutes the only spontaneous animal model of BAV disease. The characterization of molecular alterations in the aorta of hamsters with BAV may serve to identify pathophysiological mechanisms and molecular markers of disease in humans. In this report, we evaluate the expression of ten candidate reference genes in aortic tissue of hamsters in order to identify housekeeping genes for normalization using quantitative real time PCR (RT-qPCR) assays. A total of 51 adult (180–240 days old) and 56 old (300–440 days old) animals were used. They belonged to a control strain of hamsters with normal, tricuspid aortic valve (TAV; n = 30), or to the affected strain of hamsters with TAV (n = 45) or BAV (n = 32). The expression stability of the candidate reference genes was determined by RT-qPCR using three statistical algorithms, GeNorm, NormFinder and Bestkeeper. The expression analyses showed that the most stable reference genes for the three algorithms employed were Cdkn1β, G3pdh and Polr2a. We propose the use of Cdkn1β, or both Cdkn1β and G3pdh as reference genes for mRNA expression analyses in Syrian hamster aorta. PMID:27711171
Yang, Chunxiao; Pan, Huipeng; Noland, Jeffrey Edward; Zhang, Deyong; Zhang, Zhanhong; Liu, Yong; Zhou, Xuguo
2015-12-10
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is a reliable technique for quantifying gene expression across various biological processes, but it requires a set of suitable reference genes to normalize the expression data. Coleomegilla maculata (Coleoptera: Coccinellidae) is one of the most extensively used biological control agents in the field to manage arthropod pest species. In this study, 16 candidate housekeeping genes from C. maculata were cloned and their expression profiles investigated. The performance of these candidates as endogenous controls under specific experimental conditions was evaluated by dedicated algorithms, including geNorm, NormFinder, BestKeeper, and the ΔCt method. In addition, RefFinder, a comprehensive platform integrating all the above-mentioned algorithms, ranked the overall stability of these candidate genes. As a result, various sets of suitable reference genes were recommended specifically for experiments involving different tissues, developmental stages, sexes, and C. maculata larvae treated with dietary double-stranded RNA. This study represents the critical first step toward establishing a standardized RT-qPCR protocol for functional genomics research in the lady beetle C. maculata. Furthermore, it lays the foundation for conducting ecological risk assessment of RNAi-based gene-silencing biotechnologies on non-target organisms; in this case, a key predatory biological control agent.
Liu, Yong; Zhou, Xuguo
2015-01-01
Quantitative real-time PCR (qRT-PCR) is a powerful technique to quantify gene expression. To standardize gene expression studies and obtain more accurate qRT-PCR analysis, normalization relative to consistently expressed housekeeping genes (HKGs) is required. In this study, ten candidate HKGs from the cowpea aphid, Aphis craccivora Koch, were selected: elongation factor 1 α (EF1A), ribosomal protein L11 (RPL11), ribosomal protein L14 (RPL14), ribosomal protein S8 (RPS8), ribosomal protein S23 (RPS23), NADH-ubiquinone oxidoreductase (NADH), vacuolar-type H+-ATPase (ATPase), heat shock protein 70 (HSP70), 18S ribosomal RNA (18S), and 12S ribosomal RNA (12S). Four algorithms, geNorm, NormFinder, BestKeeper, and the ΔCt method, were employed to evaluate the expression profiles of these HKGs as endogenous controls across different developmental stages and temperature regimes. Based on RefFinder, which integrates all four analytical algorithms to compare and rank the candidate HKGs, RPS8, RPL14, and RPL11 were the three most stable HKGs across different developmental stages and temperature conditions. This study is the first step toward establishing a standardized qRT-PCR analysis in A. craccivora following the MIQE guidelines. Results from this study lay a foundation for genomics and functional genomics research in this sap-sucking insect pest with substantial economic impact. PMID:26090683
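For context, the normalization these stable HKGs enable is, at its core, the Livak 2^-ΔΔCt calculation: subtract the reference gene's Ct within each condition, then compare conditions. A minimal sketch with invented Ct values (and assuming roughly 100% amplification efficiency for both assays):

```python
def ddct(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative expression: normalize the target gene's
    Ct to the reference gene within each condition (dCt), then
    compare treated vs control (ddCt)."""
    dct_treat = ct_target_treat - ct_ref_treat
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(dct_treat - dct_ctrl)

# invented Ct values: target gains 2 cycles of advantage relative
# to the reference after treatment -> ~4-fold up-regulation
print(ddct(20.0, 18.0, 24.0, 20.0))  # 4.0
```

An unstable reference gene shifts dCt in both conditions unevenly, which is exactly the error the stability screening above guards against.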
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
NASA Astrophysics Data System (ADS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Implementation of Parallel Algorithms
1993-06-30
their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Parallel Wolff Cluster Algorithms
NASA Astrophysics Data System (ADS)
Bae, S.; Ko, S. H.; Coddington, P. D.
The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
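The serial update that such implementations parallelize can be stated compactly. A sketch of one Wolff single-cluster step for the 2D Ising model (lattice size and temperature below are arbitrary choices for illustration):

```python
import math
import random

def wolff_step(spins, lattice, beta):
    """One Wolff single-cluster update on an L x L Ising lattice:
    grow a cluster from a random seed site, adding each aligned
    neighbour with probability p = 1 - exp(-2*beta), then flip the
    whole cluster. Returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (random.randrange(lattice), random.randrange(lattice))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        for n in [((x + 1) % lattice, y), ((x - 1) % lattice, y),
                  (x, (y + 1) % lattice), (x, (y - 1) % lattice)]:
            if n not in cluster and spins[n] == s0 and random.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:
        spins[site] = -s0
    return len(cluster)

L_SIZE = 16
spins = {(x, y): random.choice((-1, 1))
         for x in range(L_SIZE) for y in range(L_SIZE)}
sizes = [wolff_step(spins, L_SIZE, beta=0.44) for _ in range(100)]
```

The irregular, data-dependent cluster growth visible in the `while stack` loop is precisely what makes an efficient parallel decomposition hard, as the abstract notes.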
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
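The core EPSA idea, mutation step sizes that adapt to the success of previous steps, can be illustrated with a toy (1+1) scheme. This is a simplified illustration only, not Hart's EPSA: actual EPSAs use pattern-search trial steps and a contraction rule chosen so the stationary-point convergence theory applies.

```python
import random

def one_plus_one(f, x0, step=1.0, iters=2000, seed=0):
    """Toy (1+1) mutation-based minimizer with a self-adapting step
    size: expand the mutation step after a successful trial,
    contract it after a failure."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx, step = y, fy, step * 1.5  # success: expand
        else:
            step *= 0.9                      # failure: contract
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, best_val = one_plus_one(sphere, [3.0, -2.0])
```

The expand/contract constants here are arbitrary; the convergence theory in the paper hinges on how that adaptation rule is constrained.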
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.
Algorithmization in Learning and Instruction.
ERIC Educational Resources Information Center
Landa, L. N.
An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
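The mixing formulation mentioned above combines ice and open-water emissivities weighted by ice concentration. A minimal sketch (the emissivity values below are illustrative placeholders, not the algorithm's calibrated coefficients):

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    """Linear mixing: effective surface emissivity as the
    ice-concentration-weighted average of ice and open-water
    emissivities."""
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def brightness_to_emissivity(tb, t_surface):
    """Convert a brightness temperature to an effective emissivity
    given a physical surface temperature (tb = e * T assumption)."""
    return tb / t_surface

# 80% ice cover at 6 GHz with placeholder emissivities
e_eff = effective_emissivity(0.8, 0.92, 0.55)
print(round(e_eff, 3))  # 0.846
```

Working in emissivities rather than raw brightness temperatures is what removes the surface-temperature dependence that the correction targets.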
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
Kernel Affine Projection Algorithms
NASA Astrophysics Data System (ADS)
Liu, Weifeng; Príncipe, José C.
2008-12-01
The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, collectively named KAPA here. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computational complexity and performance. Several simulations illustrate its wide applicability.
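For context, the KLMS update that KAPA generalizes fits in a few lines: each incoming sample becomes a kernel centre whose coefficient is the step size times the a-priori prediction error. A minimal Gaussian-kernel sketch (the hyper-parameters and the sine-learning task are arbitrary illustrations):

```python
import math

class KLMS:
    """Kernel least-mean-square with a Gaussian kernel: the simplest
    member of the algorithm family that KAPA extends."""

    def __init__(self, step=0.2, gamma=2.0):
        self.step, self.gamma = step, gamma
        self.centres, self.coeffs = [], []

    def _kernel(self, x, c):
        return math.exp(-self.gamma * sum((a - b) ** 2 for a, b in zip(x, c)))

    def predict(self, x):
        return sum(w * self._kernel(x, c)
                   for w, c in zip(self.coeffs, self.centres))

    def update(self, x, d):
        err = d - self.predict(x)       # a-priori error
        self.centres.append(x)          # new sample becomes a centre
        self.coeffs.append(self.step * err)
        return err

# learn y = sin(x) online: ten passes over 60 points in [0, 6)
model = KLMS()
for i in range(600):
    x = (i % 60) / 10.0
    model.update((x,), math.sin(x))
```

KAPA's affine projection step replaces this single-sample correction with one computed over a window of recent samples, which is where the gradient-noise reduction comes from.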
Parallel Algorithms and Patterns
Robey, Robert W.
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
Improved Chaff Solution Algorithm
2009-03-01
As part of the Technology Demonstration Program (TDP) on the integration of sensors and shipboard weapon systems (SISWS), an algorithm was developed to automatically determine...
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Quantum gate decomposition algorithms.
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Algorithm for reaction classification.
Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz
2013-11-25
Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Fast autodidactic adaptive equalization algorithms
NASA Astrophysics Data System (ADS)
Hilal, Katia
Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is used to derive two low-computation-cost algorithms: one equivalent to the initial algorithm, and one with improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are identified. These common points are used to propose an algorithm retaining the advantages of both initial algorithms: the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, showed a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances bring autodidactic equalization close to practical use in mobile radio systems.
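The Godard (constant-modulus) update mentioned above needs no training sequence: it penalizes deviation of the equalizer output's squared modulus from a dispersion constant R2 known from the constellation. A real-valued, single-step sketch (tap and step-size values are illustrative):

```python
def godard_update(w, x, mu=0.01, r2=1.0):
    """One Godard p=2 (constant-modulus) equalizer tap update:
    y = w . x, error e = y * (y^2 - R2), taps w <- w - mu * e * x.
    Real-valued sketch; practical equalizers work on complex
    baseband samples."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = y * (y * y - r2)
    return [wi - mu * e * xi for wi, xi in zip(w, x)]

# single tap, output modulus below R2: the update pushes |y| upward
w = godard_update([0.5], [1.0])
print(w)  # slightly above 0.5, moving toward unit output modulus
```

Because the cost depends only on the output modulus, the update is blind, which is also why it leaves a phase ambiguity that the decision-directed stage must resolve, consistent with the combination described above.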
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
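The first performance metric above, the centered root-mean-square error, compares anomalies rather than absolute values, so a constant offset between the homogenized and true series is not penalized. A minimal sketch:

```python
import math

def centered_rmse(est, truth):
    """Centered RMSE: root-mean-square error computed after removing
    each series' own mean, so only the shape of the series (its
    anomalies) is compared, not any constant offset."""
    me, mt = sum(est) / len(est), sum(truth) / len(truth)
    return math.sqrt(sum(((e - me) - (t - mt)) ** 2
                         for e, t in zip(est, truth)) / len(est))

truth = [1.0, 2.0, 3.0, 4.0]
print(centered_rmse([t + 5.0 for t in truth], truth))  # 0.0: pure offset
```

This is why a homogenization method can score well on centered RMSE while still misestimating the network-wide trend, which is assessed separately by the trend-error metric.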
Li, Meng-Yao; Wang, Feng; Jiang, Qian; Wang, Guan-Long; Tian, Chang; Xiong, Ai-Sheng
2016-01-01
A suitable reference gene is an important prerequisite for guaranteeing accurate and reliable results in qPCR analysis. Celery is one of the representative vegetables in Apiaceae and is widely cultivated and consumed worldwide. However, no reports concerning reference genes in celery have been published previously. In this study, the expression stabilities of nine candidate reference genes in leaf blade and petiole at different development stages were evaluated using three statistical algorithms: geNorm, NormFinder, and BestKeeper. Our results showed that TUB-B, TUB-A, and UBC were the most stable reference genes among all tested samples. GAPDH showed the maximum stability for most individual samples, while UBQ displayed the minimum stability. To further validate the stability of the reference genes, the expression pattern of AgAP2-2 was calculated using the selected genes for normalization. In addition, the expression patterns of several development-related genes were studied using the selected reference genes. Our results will be beneficial for further studies on gene transcription in celery.
Guo, Jinlong; Ling, Hui; Wu, Qibin; Xu, Liping; Que, Youxiong
2014-11-13
Sugarcane (Saccharum spp. hybrids) is a worldwide cash crop for sugar and biofuel in tropical and subtropical regions and suffers serious losses in cane yield and sugar content under salinity and drought stresses. Although real-time quantitative PCR has numerous advantages for quantifying the expression of stress-related genes and elaborating the corresponding molecular mechanisms in sugarcane, the variation arising during gene expression quantification should be normalized and monitored by introducing one or several reference genes. To validate suitable reference genes or gene sets for sugarcane gene expression normalization, 13 candidate reference genes were tested across 12 NaCl- and PEG-treated sugarcane samples from four sugarcane genotypes using four commonly used statistical algorithms: geNorm, BestKeeper, NormFinder, and the deltaCt method. The results demonstrated that glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and eukaryotic elongation factor 1-alpha (eEF-1a) were suitable reference genes for gene expression normalization under salinity/drought treatment in sugarcane. Moreover, the expression analyses of SuSK and 6PGDH further validated that a combination of clathrin adaptor complex (CAC) and cullin (CUL) serves as a better reference for gene expression normalization. These results can facilitate future research on gene expression in sugarcane under salinity and drought stresses.
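The comparative deltaCt method cited in this and several of the surrounding studies ranks candidates by how variable each gene's Ct difference is against every other candidate. A minimal sketch under invented Ct values (the gene names and numbers are illustrative, not data from any of these studies):

```python
from statistics import stdev

# Hypothetical Ct values (cycles) across five samples for three
# candidate reference genes; names and numbers are illustrative only.
ct = {
    "RPL13": [18.1, 18.3, 18.0, 18.2, 18.4],
    "GAPDH": [18.0, 19.1, 17.5, 18.8, 20.0],
    "ACT":   [20.2, 20.4, 20.1, 20.3, 20.5],
}

def delta_ct_stability(ct):
    """Mean SD of pairwise delta-Ct values per gene (lower = more stable)."""
    scores = {}
    for g, vals in ct.items():
        sds = [stdev(a - b for a, b in zip(vals, other))
               for h, other in ct.items() if h != g]
        scores[g] = sum(sds) / len(sds)
    return scores

scores = delta_ct_stability(ct)
ranking = sorted(scores, key=scores.get)  # most stable first
```

Genes whose Ct offsets to the other candidates stay constant across samples score low and rank as the most stable.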
Identification of reliable reference genes for qRT-PCR studies of the developing mouse mammary gland
van de Moosdijk, Anoeska Agatha Alida; van Amerongen, Renée
2016-01-01
Cell growth and differentiation are often driven by subtle changes in gene expression. Many challenges still exist in detecting these changes, particularly in the context of a complex, developing tissue. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) allows relatively high-throughput evaluation of multiple genes and developmental time points. Proper quantification of gene expression levels by qRT-PCR requires normalization to one or more reference genes. Traditionally, these genes have been selected based on their presumed “housekeeping” function, with the implicit assumption that they are stably expressed over the entire experimental set. However, this is rarely tested empirically. Here we describe the identification of novel reference genes for the mouse mammary gland based on their stable expression in published microarray datasets. We compared eight novel candidate reference genes (Arpc3, Clock, Ctbp1, Phf7, Prdx1, Sugp2, Taf11 and Usp7) to eight traditional ones (18S, Actb, Gapdh, Hmbs, Hprt, Rpl13a, Sdha and Tbp) and analysed all genes for stable expression in the mouse mammary gland from pre-puberty to adulthood using four different algorithms (GeNorm, DeltaCt, BestKeeper and NormFinder). Prdx1, Phf7 and Ctbp1 were validated as novel and reliable, tissue-specific reference genes that outperform traditional reference genes in qRT-PCR studies of postnatal mammary gland development. PMID:27752147
Kianianmomeni, Arash; Hallmann, Armin
2013-12-01
Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) is a sensitive technique for analysis of gene expression under a wide diversity of biological conditions. However, the identification of suitable reference genes is a critical factor for analysis of gene expression data. To determine potential reference genes for normalization of qRT-PCR data in the green alga Volvox carteri, the transcript levels of ten candidate reference genes were measured by qRT-PCR in three experimental sample pools containing different developmental stages, cell types and stress treatments. The expression stability of the candidate reference genes was then calculated using the algorithms geNorm, NormFinder and BestKeeper. The genes for 18S ribosomal RNA (18S) and eukaryotic translation elongation factor 1α2 (eef1) turned out to have the most stable expression levels among the samples both from different developmental stages and different stress treatments. The genes for the ribosomal protein L23 (rpl23) and the TATA-box binding protein (tbpA) showed equivalent transcript levels in the comparison of different cell types, and therefore, can be used as reference genes for cell-type specific gene expression analysis. Our results indicate that more than one reference gene is required for accurate normalization of qRT-PCRs in V. carteri. The reference genes in our study show a much better performance than the housekeeping genes used as a reference in previous studies.
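The geNorm algorithm used in this and most of the surrounding studies assigns each candidate a stability measure M, the average standard deviation of its log2 expression ratios against all other candidates. A minimal sketch with invented expression values (the gene names echo the abstract, the numbers do not come from it):

```python
from math import log2
from statistics import stdev

# Illustrative relative expression quantities for three candidates
# across five samples; values are made up for demonstration.
q = {
    "eef1": [1.00, 1.05, 0.98, 1.02, 1.01],
    "18S":  [1.00, 0.97, 1.03, 0.99, 1.04],
    "tbpA": [1.00, 1.60, 0.70, 1.30, 0.60],
}

def genorm_m(q):
    """geNorm stability M: mean SD of log2 pairwise expression ratios."""
    m = {}
    for g, vals in q.items():
        sds = [stdev(log2(a / b) for a, b in zip(vals, other))
               for h, other in q.items() if h != g]
        m[g] = sum(sds) / len(sds)
    return m

m = genorm_m(q)  # lower M means a more stable candidate
```

geNorm proper then iteratively excludes the highest-M gene and recomputes, which is why it reports the best genes as a pair.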
Liu, Yong; Zhou, Xuguo
2015-01-01
Quantitative real-time PCR (qRT-PCR) is a reliable and reproducible technique for measuring mRNA expression. To facilitate gene expression studies and obtain more accurate qRT-PCR analysis, normalization relative to stable housekeeping genes is mandatory. In this study, ten housekeeping genes, including beta-actin (Actin), elongation factor 1α (EF1A), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal protein L13 (RPL13), ribosomal protein 49 (RP49), α-tubulin (Tubulin), vacuolar-type H+-ATPase (v-ATPase), succinate dehydrogenase subunit A (SDHA), 28S ribosomal RNA (28S), and 18S ribosomal RNA (18S) from the two-spotted spider mite, Tetranychus urticae, were selected as candidate reference genes. Four algorithms, geNorm, NormFinder, BestKeeper, and the ΔCt method, were used to evaluate the performance of these candidates as endogenous controls across different developmental stages. In addition, RefFinder, which integrates the above-mentioned software tools, provided the overall ranking of the stability/suitability of these candidate reference genes. Among them, RPL13 and v-ATPase were the two most stable housekeeping genes across different developmental stages. This work is the first step toward establishing a standardized qRT-PCR analysis in T. urticae following the MIQE guidelines. With the recent release of the T. urticae genome, results from this study provide a critical piece for subsequent genomics and functional genomics research in this emerging model system. PMID:25822495
Sun, Bo-Guang; Hu, Yong-Hua
2015-06-01
Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) has been used extensively for studying gene expression in diverse organisms including fish. In this study, with an aim to identify reliable reference genes for qRT-PCR in red drum (Sciaenops ocellatus), an economic fish species, we determined the expression stability of seven housekeeping genes in healthy and bacterium-infected red drum. Each of the selected candidate genes was amplified by qRT-PCR from the brain, gill, heart, intestine, kidney, liver, muscle, and spleen of red drum challenged or not with a bacterial pathogen for 12 and 48 h. The mRNA levels of the genes were analyzed with the geNorm and NormFinder algorithms. The results showed that in the absence of bacterial infection, translation initiation factor 3, NADH dehydrogenase 1, and QM-like protein may be used together as internal references across the eight examined tissues. Bacterial infection caused variations in the rankings of the most stable genes in a tissue-dependent manner. For all tissues, two genes sufficed for reliable normalization at both 12 and 48 h post-infection. However, the optimal gene pairs differed among tissues and, for four of the eight examined tissues, between infection points. These results indicate that when studying gene expression in red drum under conditions of bacterial infection, the optimal reference genes should be selected on the basis of tissue type and, for accurate normalization, infection stage.
Bao, Wenlong; Qu, Yanli; Shan, Xiaoyi; Wan, Yinglang
2016-01-01
Cunninghamia lanceolata (Chinese fir) is a fast-growing and commercially important conifer of the Cupressaceae family. Owing to the unavailability of complete genome sequences and the relatively poor genetic background information for the Chinese fir, it is necessary to identify and analyze the expression levels of suitable housekeeping genes (HKGs) as internal references for precise analysis. Based on the results of database analysis and transcriptome sequencing, we chose five candidate HKGs (Actin, GAPDH, EF1a, 18S rRNA, and UBQ) with conserved sequences in the Chinese fir and related species for quantitative analysis. The expression levels of these HKGs in roots and cotyledons under five different abiotic stresses at different time intervals were measured by qRT-PCR. The data were statistically analyzed using the following algorithms: NormFinder, BestKeeper, and geNorm. Finally, RankAggreg was applied to merge the rankings generated by the three programs into a consensus ranking. The expression levels of these HKGs showed variable stability under different abiotic stresses. Among them, Actin was the most stable internal control in roots, and GAPDH was the most stable housekeeping gene in cotyledons. We also describe an experimental procedure for selecting HKGs based on the de novo sequencing databases of other non-model plants. PMID:27483238
A Versatile Panel of Reference Gene Assays for the Measurement of Chicken mRNA by Quantitative PCR
Maier, Helena J.; Van Borm, Steven; Young, John R.; Fife, Mark
2016-01-01
Quantitative real-time PCR assays are widely used for the quantification of mRNA within avian experimental samples. Multiple stably-expressed reference genes, selected for the lowest variation in representative samples, can be used to control random technical variation. Reference gene assays must be reliable, have high amplification specificity and efficiency, and not produce signals from contaminating DNA. Whilst recent research papers identify specific genes that are stable in particular tissues and experimental treatments, here we describe a panel of ten avian gene primer and probe sets that can be used to identify suitable reference genes in many experimental contexts. The panel was tested with TaqMan and SYBR Green systems in two experimental scenarios: a tissue collection and virus infection of cultured fibroblasts. GeNorm and NormFinder algorithms were able to select appropriate reference gene sets in each case. We show the effects of using the selected genes on the detection of statistically significant differences in expression. The results are compared with those obtained using 28S ribosomal RNA, presently the most widely accepted reference gene in chicken work, identifying circumstances where its use might provide misleading results. Methods for eliminating DNA contamination of RNA reduced, but did not completely remove, detectable DNA. We therefore attached special importance to testing each qPCR assay for absence of signal using DNA template. The assays and analyses developed here provide a useful resource for selecting reference genes for investigations of avian biology. PMID:27537060
Selection of Reference Genes for Expression Studies of Xenobiotic Adaptation in Tetranychus urticae
Morales, Mariany Ashanty; Mendoza, Bianca Marie; Lavine, Laura Corley; Lavine, Mark Daniel; Walsh, Douglas Bruce; Zhu, Fang
2016-01-01
Quantitative real-time PCR (qRT-PCR) is an extensively used, high-throughput method to analyze transcriptional expression of genes of interest. An appropriate normalization strategy with reliable reference genes is required for calculating gene expression across diverse experimental conditions. In this study, we aimed to identify the most stable reference genes for expression studies of xenobiotic adaptation in Tetranychus urticae, an extremely polyphagous herbivore causing significant yield reductions in agriculture. We chose eight commonly used housekeeping genes as candidates. The qRT-PCR expression data for these genes were evaluated from seven populations: a susceptible and three acaricide-resistant populations feeding on lima beans, and three other susceptible populations whose host had been shifted from lima beans to three other plant species. The stability of the candidate reference genes was then assessed using four different algorithms (comparative ΔCt method, geNorm, NormFinder, and BestKeeper). Additionally, we used an online web-based tool (RefFinder) to assign an overall final rank to each candidate gene. Our study found that CycA and Rp49 are best for investigating gene expression in acaricide-susceptible and -resistant populations. GAPDH, Rp49, and Rpl18 are best for host plant shift studies, and GAPDH and Rp49 were the most stable reference genes when investigating gene expression under changes in both experimental conditions. These results will facilitate research revealing the molecular mechanisms underlying the xenobiotic adaptation of this notorious agricultural pest. PMID:27570487
Dzaki, Najat; Ramli, Karima N; Azlan, Azali; Ishak, Intan H; Azzam, Ghows
2017-03-16
The mosquito Aedes aegypti (Ae. aegypti) is the most notorious vector of illness-causing viruses such as Dengue, Chikungunya, and Zika. Although numerous genetic expression studies utilizing quantitative real-time PCR (qPCR) have been conducted on Ae. aegypti, a panel of genes suitable as references for expression-level normalization within this epidemiologically important insect is presently lacking. Here, the usability of seven widely-utilized reference genes, i.e. actin (ACT), eukaryotic elongation factor 1 alpha (eEF1α), alpha tubulin (α-tubulin), ribosomal proteins L8, L32 and S17 (RPL8, RPL32 and RPS17), and glyceraldehyde 3-phosphate dehydrogenase (GAPDH), was investigated. Expression patterns of the reference genes were observed in sixteen pre-determined developmental stages and in cell culture. Gene stability was inferred from qPCR data through three freely available algorithms, i.e. BestKeeper, geNorm, and NormFinder. The consensus rankings generated from stability values provided by these programs suggest a combination of at least two genes for normalization. ACT and RPS17 are the most dependably expressed reference genes and therefore, we propose an ACT/RPS17 combination for normalization in all Ae. aegypti derived samples. GAPDH performed least desirably, and is thus not a recommended reference gene. This study emphasizes the importance of validating reference genes in Ae. aegypti for qPCR based research.
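The multi-gene normalization these consensus rankings feed into is typically a geNorm-style normalization factor: the geometric mean of the selected reference genes' relative quantities. A minimal sketch assuming perfect amplification efficiency and invented Ct values (the ACT/RPS17 pairing echoes the abstract's recommendation; the numbers are not from the study):

```python
from statistics import geometric_mean

# One sample vs. a calibrator sample; the Ct values and the perfect
# efficiency of 2.0 are illustrative assumptions.
def rel_quantity(ct_sample, ct_calibrator, efficiency=2.0):
    """Relative quantity: efficiency ** (Ct_calibrator - Ct_sample)."""
    return efficiency ** (ct_calibrator - ct_sample)

target_q = rel_quantity(24.0, 26.0)        # gene of interest
ref_qs = [rel_quantity(18.0, 18.5),        # reference 1 (e.g. ACT)
          rel_quantity(20.0, 20.2)]        # reference 2 (e.g. RPS17)

# Normalization factor = geometric mean of reference-gene quantities;
# the target quantity is divided by it for the normalized result.
nf = geometric_mean(ref_qs)
normalized = target_q / nf
```

Using the geometric rather than arithmetic mean keeps one highly expressed reference gene from dominating the normalization factor.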
Wang, Peihong; Xiong, Aisheng; Gao, Zhihong; Yu, Xinyi; Li, Man; Hou, Yingjun; Sun, Chao; Qu, Shenchun
2016-01-01
The success of quantitative real-time reverse transcription polymerase chain reaction (RT-qPCR) in quantifying gene expression depends on the stability of the reference genes used for data normalization. To date, systematic screening for reference genes in persimmon (Diospyros kaki Thunb) has never been reported. In this study, 13 candidate reference genes were cloned from 'Nantongxiaofangshi' using information available in the transcriptome database. Their expression stability was assessed by the geNorm and NormFinder algorithms under abiotic stress and hormone stimulation. Our results showed that the most suitable reference genes across all samples were UBC and GAPDH, and not the commonly used persimmon reference gene ACT. In addition, UBC combined with RPII or TUA was found to be appropriate for the "abiotic stress" group, and α-TUB combined with PP2A was found to be appropriate for the "hormone stimuli" group. For further validation, the transcript level of the DkDREB2C homologue under heat stress was studied with the selected genes (CYP, GAPDH, TUA, UBC, α-TUB, and EF1-α). The results suggested that it is necessary to choose appropriate reference genes according to the test materials or experimental conditions. Our study will be useful for future studies on gene expression in persimmon.
Sheng, X G; Zhao, Z Q; Yu, H F; Wang, J S; Zheng, C F; Gu, H H
2016-07-15
Quantitative reverse-transcription PCR (qRT-PCR) is a versatile technique for the analysis of gene expression. The selection of stable reference genes is essential for the application of this technique. Cauliflower (Brassica oleracea L. var. botrytis) is a commonly consumed vegetable that is rich in vitamins, calcium, and iron. Thus far, to our knowledge, there have been no reports on the validation of suitable reference genes for the data normalization of qRT-PCR in cauliflower. In the present study, we analyzed 12 candidate housekeeping genes in cauliflower subjected to different abiotic stresses, hormone treatment conditions, and accessions. The geNorm and NormFinder algorithms were used to assess the expression stability of these genes. ACT2 and TIP41 were selected as suitable reference genes across all experimental samples in this study. When different accessions were compared, ACT2 and UNK3 were found to be the most suitable reference genes. In the hormone and abiotic stress treatments, ACT2, TIP41, and UNK2 were the most stably expressed. Our study also provides guidelines for selecting the best reference genes under various experimental conditions.
Schaeck, M.; De Spiegelaere, W.; De Craene, J.; Van den Broeck, W.; De Spiegeleer, B.; Burvenich, C.; Haesebrouck, F.; Decostere, A.
2016-01-01
The increasing demand for a sustainable larviculture has promoted research regarding environmental parameters, diseases and nutrition, intersecting at the mucosal surface of the gastrointestinal tract of fish larvae. The combination of laser capture microdissection (LCM) and gene expression experiments allows cell-specific expression profiling. This study aimed at optimizing an LCM protocol for intestinal tissue of sea bass larvae. Furthermore, a 3′/5′ integrity assay was developed for LCM samples of fish tissue, which contain low RNA concentrations. In addition, reliable reference genes for performing qPCR in larval sea bass gene expression studies were identified, as data normalization is critical in gene expression experiments using RT-qPCR. We demonstrate that a careful optimization of the LCM procedure allows recovery of high quality mRNA from defined cell populations in complex intestinal tissues. According to the geNorm and NormFinder algorithms, ef1a, rpl13a, rps18 and faua were the most stable genes to be implemented as reference genes for an appropriate normalization of intestinal tissue from sea bass across a range of experimental settings. The methodology developed here offers a rapid and valuable approach to characterize cells/tissues in the intestinal tissue of fish larvae and their changes following pathogen exposure, nutritional/environmental changes, probiotic supplementation or a combination thereof. PMID:26883391
Petriccione, Milena; Mastrobuoni, Francesco; Zampella, Luigi; Scortichini, Marco
2015-01-01
Normalization of data, by choosing the appropriate reference genes (RGs), is fundamental for obtaining reliable results in reverse transcription-quantitative PCR (RT-qPCR). In this study, we assessed Actinidia deliciosa leaves inoculated with two doses of Pseudomonas syringae pv. actinidiae over a period of 13 days for the expression profiles of nine candidate RGs. Their expression stability was calculated using four algorithms: geNorm, NormFinder, BestKeeper and the deltaCt method. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and protein phosphatase 2A (PP2A) were the most stable genes, while β-tubulin and 7s-globulin were the least stable. Expression analysis of three target genes chosen for RG validation, encoding the reactive oxygen species scavenging enzymes ascorbate peroxidase (APX), superoxide dismutase (SOD) and catalase (CAT), indicated that a combination of stable RGs, such as GAPDH and PP2A, can lead to an accurate quantification of the expression levels of such target genes. The APX level varied over the experimental time course and according to the inoculum dose, whereas both SOD and CAT were down-regulated during the first four days and up-regulated afterwards, irrespective of inoculum dose. These results can be useful for better elucidating the molecular interactions in the A. deliciosa/P. s. pv. actinidiae pathosystem and for RG selection in bacteria-plant pathosystems. PMID:26581656
Ma, Yue-jiao; Sun, Xiao-hong; Xu, Xiao-yan; Zhao, Yong; Pan, Ying-jie; Hwang, Cheng-An; Wu, Vivian C. H.
2015-01-01
Vibrio parahaemolyticus is a significant human pathogen capable of causing foodborne gastroenteritis associated with the consumption of contaminated raw or undercooked seafood. Quantitative RT-PCR (qRT-PCR) is a useful tool for studying gene expression in V. parahaemolyticus to characterize its virulence factors and understand the effect of environmental conditions on its pathogenicity. However, no stable gene in V. parahaemolyticus has been identified for use as a reference gene for qRT-PCR. This study evaluated the stability of 6 reference genes (16S rRNA, recA, rpoS, pvsA, pvuA, and gapdh) in 5 V. parahaemolyticus strains (O3:K6-clinical strain-tdh+, ATCC33846-tdh+, ATCC33847-tdh+, ATCC17802-trh+, and F13-environmental strain-tdh+) cultured at 4 different temperatures (15, 25, 37 and 42°C). Stability values were calculated using the GeNorm, NormFinder, BestKeeper, and Delta CT algorithms. The results indicated that recA was the most stably expressed gene in the V. parahaemolyticus strains cultured at different temperatures. Because this study examined multiple V. parahaemolyticus strains and growth temperatures, the findings provide stronger evidence that recA can be used as a reference gene for gene expression studies in V. parahaemolyticus. PMID:26659406
Qi, Shuai; Yang, Liwen; Wen, Xiaohui; Hong, Yan; Song, Xuebin; Zhang, Mengmeng; Dai, Silan
2016-01-01
Quantitative real-time PCR (qPCR) is a popular and powerful tool for understanding the molecular mechanisms of flower development. However, the accuracy of this approach depends on the stability of reference genes. The capitulum of chrysanthemums is very special, consisting of ray florets and disc florets. There are obvious differences between the two types of florets in symmetry, gender, histological structure, and function. Furthermore, the ray florets have various shapes. The objective of the present study was to identify stable reference genes in Chrysanthemum morifolium and Chrysanthemum lavandulifolium during flower development. In this study, nine candidate reference genes were selected and evaluated for their expression stability across samples during flower development, and their stability was validated by four different algorithms (BestKeeper, NormFinder, geNorm, and RefFinder). SAND (SAND family protein) was found to be the most stably expressed gene in all samples and different tissues during C. lavandulifolium development. Both SAND and PGK (phosphoglycerate kinase) were the most stable in Chinese large-flowered chrysanthemum cultivars, and PGK was the best in potted chrysanthemums. The best reference genes differed among varieties, as their genetic backgrounds are complex. These findings provide guidance for selecting reference genes for analyzing the expression patterns of floral development genes in chrysanthemums. PMID:27014310
Sagri, Efthimia; Koskinioti, Panagiota; Gregoriou, Maria-Eleni; Tsoumani, Konstantina T.; Bassiakos, Yiannis C.; Mathiopoulos, Kostas D.
2017-01-01
Real-time quantitative PCR has been an invaluable tool for gene expression analyses. The reaction, however, needs proper normalization with housekeeping genes (HKGs), whose expression remains stable throughout the experimental conditions. Often, the combination of several genes is required for accurate normalization. Most importantly, there are no universal HKGs, since their expression varies among different organisms, tissues and experimental conditions. In the present study, nine common HKGs (RPL19, tbp, ubx, GAPDH, α-TUB, β-TUB, 14-3-3zeta, RPE and actin3) were evaluated in thirteen different body parts, developmental stages and reproductive and olfactory tissues of two insects of agricultural importance, the medfly and the olive fly. Three software programs based on different algorithms were used (geNorm, NormFinder and BestKeeper) and gave different rankings of HKG stability. This confirms once again that the stability of common HKGs should not be taken for granted and demonstrates the caution needed in choosing appropriate HKGs. Finally, by averaging a standard score of the stability values produced by the three programs, we were able to provide a useful consensus key for choosing the best HKG combination in various tissues of the two insects. PMID:28368031
Gu, Chun-Sun; Liu, Liang-qin; Xu, Chen; Zhao, Yan-hai; Zhu, Xu-dong; Huang, Su-Zhen
2014-01-01
Quantitative real-time PCR (RT-qPCR) has emerged as an accurate and sensitive method to measure gene expression. However, obtaining reliable results depends on the selection of reference genes that normalize differences among samples. In this study, we assessed the expression stability of seven reference genes, namely ubiquitin-protein ligase UBC9 (UBC), tubulin alpha-5 (TUBULIN), eukaryotic translation initiation factor (EIF-5A), translation elongation factor EF1A (EF1α), translation elongation factor EF1B (EF1b), actin11 (ACTIN), and histone H3 (HIS), in Iris lactea var. chinensis roots when the plants were subjected to cadmium (Cd), lead (Pb), and salt stress conditions. All seven reference genes showed a relatively wide range of threshold cycle (Ct) values in different samples. The GeNorm and NormFinder algorithms were used to assess suitable reference genes. The results from the two programs showed that EIF-5A and UBC were the most stable reference genes across all tested samples, while TUBULIN was unsuitable as an internal control. I. lactea var. chinensis is tolerant to Cd, Pb, and salt. Our results will benefit future research on gene expression in response to these three abiotic stresses.
Lü, Zhi-Chuang; Liu, Wan-Xue; Wan, Fang-Hao
2017-01-01
The Bemisia tabaci Mediterranean (MED) cryptic species has rapidly invaded most parts of the world owing to its strong ecological adaptability and is considered a model insect for stress tolerance studies under rapidly changing environments. Selection of a suitable reference gene for quantitative stress-responsive gene expression analysis based on qRT-PCR is critical for elaborating the molecular mechanisms of thermotolerance. To obtain accurate and reliable normalization data in MED, eight candidate reference genes (β-act, GAPDH, β-tub, EF1-α, GST, 18S, RPL13A and α-tub) were examined under various thermal stresses for varied time periods using the geNorm, NormFinder and BestKeeper algorithms, respectively. Our results revealed that β-tub and EF1-α were the best reference genes across all sample sets. On the other hand, 18S and GAPDH showed the least stability across all the samples studied. β-act proved to be highly stable only under short-term thermal stresses. To our knowledge this is the first comprehensive report on validation of reference genes under varying temperature stresses in MED. The study could expedite the discovery of thermotolerance genes in MED. Further, the present results can form the basis of further research on suitable reference genes in this invasive insect and will facilitate transcript profiling in other invasive insects. PMID:28323834
Chao, Jinquan; Yang, Shuguang; Chen, Yueyi; Tian, Wei-Min
2016-01-01
Latex flow caused by latex exploitation is effective in enhancing latex regeneration in the laticifer cells of rubber tree, making it suitable for screening appropriate reference genes for analyzing the expression of latex regeneration-related genes by quantitative real-time PCR (qRT-PCR). In the present study, the expression stability of 23 candidate reference genes was evaluated on the basis of latex flow using the geNorm and NormFinder algorithms. Ubiquitin-protein ligase 2a (UBC2a) and ubiquitin-protein ligase 2b (UBC2b) were the two most stable genes among the selected candidates in rubber tree clones with differential durations of latex flow. The two genes also ranked highly in previous reference gene screens across different tissues and experimental conditions. By contrast, the transcripts of latex regeneration-related genes fluctuated significantly during latex flow. The results suggest that screening reference genes during latex flow is an efficient and effective approach to selecting reference genes for qRT-PCR.
Shivhare, Radha; Lata, Charu
2016-03-14
Pearl millet [Pennisetum glaucum (L.) R. Br.], a widely used grain and forage crop, is grown in areas frequented by one or more abiotic stresses; it has superior drought and heat tolerance and is considered a model crop for stress tolerance studies. Selection of suitable reference genes for quantifying target stress-responsive gene expression through quantitative real-time (qRT-)PCR is important for elucidating the molecular mechanisms of improved stress tolerance. For precise normalization of gene expression data in pearl millet, ten candidate reference genes were examined in various developmental tissues as well as under different individual abiotic stresses and their combinations at 1 h (early) and 24 h (late) of stress using the geNorm, NormFinder and RefFinder algorithms. Our results revealed EF-1α and UBC-E2 as the best reference genes across all samples, which was confirmed by assessing the relative expression of a PgAP2-like ERF gene, suggesting that these two reference genes are sufficient for accurate transcript normalization under different stress conditions. To our knowledge this is the first report on validation of reference genes under different individual and multiple abiotic stresses in pearl millet. The study can further facilitate the discovery of stress-tolerance genes in this important stress-tolerant crop.
Chen, Lei; Zhong, Hai-ying; Kuang, Jian-fei; Li, Jian-guo; Lu, Wang-jin; Chen, Jian-ye
2011-08-01
Reverse transcription quantitative real-time PCR (RT-qPCR) is a sensitive technique for quantifying gene expression, but its success depends on the stability of the reference gene(s) used for data normalization. Only a few studies on validation of reference genes have been conducted in fruit trees and none yet in banana. In the present work, 20 candidate reference genes were selected, and their expression stability in 144 banana samples was evaluated and analyzed using two algorithms, geNorm and NormFinder. The samples consisted of eight sample sets collected under different experimental conditions, including various tissues, developmental stages, postharvest ripening, stresses (chilling, high temperature, and pathogen), and hormone treatments. Our results showed that different suitable reference gene(s) or combinations of reference genes should be selected for normalization depending on the experimental conditions. The RPS2 and UBQ2 genes were validated as the most suitable reference genes across all tested samples. More importantly, our data further showed that the widely used reference genes, ACT and GAPDH, were not the most suitable reference genes in many banana sample sets. In addition, the expression of MaEBF1, a gene of interest that plays an important role in regulating fruit ripening, under different experimental conditions was used to further confirm the validated reference genes. Taken together, our results provide guidelines for reference gene selection under different experimental conditions and a foundation for more accurate and widespread use of RT-qPCR in banana.
Schaeck, M; De Spiegelaere, W; De Craene, J; Van den Broeck, W; De Spiegeleer, B; Burvenich, C; Haesebrouck, F; Decostere, A
2016-02-17
The increasing demand for a sustainable larviculture has promoted research regarding environmental parameters, diseases and nutrition, intersecting at the mucosal surface of the gastrointestinal tract of fish larvae. The combination of laser capture microdissection (LCM) and gene expression experiments allows cell-specific expression profiling. This study aimed at optimizing an LCM protocol for intestinal tissue of sea bass larvae. In addition, a 3'/5' integrity assay was developed for LCM samples of fish tissue, which contain low RNA concentrations. Finally, reliable reference genes for performing qPCR in larval sea bass gene expression studies were identified, as data normalization is critical in gene expression experiments using RT-qPCR. We demonstrate that a careful optimization of the LCM procedure allows recovery of high quality mRNA from defined cell populations in complex intestinal tissues. According to the geNorm and NormFinder algorithms, ef1a, rpl13a, rps18 and faua were the most stable genes to be implemented as reference genes for an appropriate normalization of intestinal tissue from sea bass across a range of experimental settings. The methodology developed here offers a rapid and valuable approach to characterize cells/tissues in the intestinal tissue of fish larvae and their changes following pathogen exposure, nutritional/environmental changes, probiotic supplementation or a combination thereof.
Chao, Jinquan; Yang, Shuguang; Chen, Yueyi; Tian, Wei-Min
2016-01-01
Latex flow caused by latex exploitation (tapping) is effective in enhancing latex regeneration in the laticifer cells of rubber tree, and is therefore a suitable condition for screening appropriate reference genes for analysis of the expression of latex regeneration-related genes by quantitative real-time PCR (qRT-PCR). In the present study, the expression stability of 23 candidate reference genes was evaluated over the course of latex flow by using the geNorm and NormFinder algorithms. Ubiquitin-protein ligase 2a (UBC2a) and ubiquitin-protein ligase 2b (UBC2b) were the two most stable genes among the selected candidate references in rubber tree clones with differential duration of latex flow. The two genes were also high-ranked in previous reference gene screenings across different tissues and experimental conditions. By contrast, the transcripts of latex regeneration-related genes fluctuated significantly during latex flow. The results suggest that screening during latex flow should be an efficient and effective approach for selecting reference genes for qRT-PCR. PMID:27524995
Fiallos-Jurado, Jennifer; Pollier, Jacob; Moses, Tessa; Arendt, Philipp; Barriga-Medina, Noelia; Morillo, Eduardo; Arahana, Venancio; de Lourdes Torres, Maria; Goossens, Alain; Leon-Reyes, Antonio
2016-09-01
Quinoa (Chenopodium quinoa Willd.) is a highly nutritious pseudocereal with an outstanding protein, vitamin, mineral and nutraceutical content. The leaves, flowers and seed coat of quinoa contain triterpenoid saponins, which impart bitterness to the grain and make it unpalatable without postharvest removal of the saponins. In this study, we quantified saponin content in quinoa leaves from Ecuadorian sweet and bitter genotypes and assessed the expression of saponin biosynthetic genes in leaf samples elicited with methyl jasmonate (MeJA). We found saponin accumulation in leaves after MeJA treatment in both ecotypes tested. As no reference genes were available to perform qPCR in quinoa, we mined publicly available RNA-Seq data for orthologs of 22 genes known to be stably expressed in Arabidopsis thaliana, using the geNorm, NormFinder and BestKeeper algorithms. The quinoa ortholog of At2g28390 (Monensin Sensitivity 1, MON1) was stably expressed and chosen as a suitable reference gene for qPCR analysis. Candidate saponin biosynthesis genes were screened in the quinoa RNA-Seq data and subsequent functional characterization in yeast led to the identification of CqbAS1, CqCYP716A78 and CqCYP716A79. These genes were found to be induced by MeJA, suggesting this phytohormone might also modulate saponin biosynthesis in quinoa leaves. Knowledge of saponin biosynthesis and its regulation in quinoa may aid the further development of sweet cultivars that do not require postharvest processing.
Jiang, Hucheng; Qian, Zhaojun; Lu, Wei; Ding, Huaiyu; Yu, Hongwei; Wang, Hui; Li, Jiale
2015-09-08
qRT-PCR is a widely used technique for rapid and accurate quantification of gene expression data. The use of reference genes for normalization of the expression levels is crucial for accuracy. Several studies have shown that there is no perfect reference gene that is appropriate for use in all experimental conditions, and research on suitable reference genes in red swamp crawfish (Procambarus clarkii) is particularly scarce. In this study, eight commonly used crustacean reference genes were chosen from P. clarkii transcriptome data and investigated as potential candidates for normalization of qRT-PCR data. Expression of these genes under different experimental conditions was examined by qRT-PCR, and the stability of their expression was evaluated using three commonly used statistical algorithms, geNorm, NormFinder and BestKeeper. A final comprehensive ranking determined that EIF and 18S were the optimal reference genes for expression data from different tissues, while TBP and EIF were optimal for expression data from different ovarian developmental stages. To our knowledge, this is the first systematic analysis of reference genes for normalization of qRT-PCR data in P. clarkii. These results will facilitate more accurate and reliable expression studies of this and other crustacean species.
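The "final comprehensive ranking" step mentioned above is typically computed RefFinder-style: each algorithm contributes a rank per gene, and the ranks are combined by geometric mean. A minimal sketch, with hypothetical per-algorithm orderings (invented, not the study's actual results) using gene names from the abstract:

```python
from math import prod

def comprehensive_rank(rankings):
    """Combine several algorithms' orderings into one final ranking
    via the geometric mean of per-algorithm ranks (lower = better).

    rankings maps algorithm name -> list of genes, best to worst."""
    genes = next(iter(rankings.values()))
    geo = {}
    for g in genes:
        ranks = [order.index(g) + 1 for order in rankings.values()]
        geo[g] = prod(ranks) ** (1 / len(ranks))
    return sorted(genes, key=geo.get)

# Invented orderings for three candidate genes.
rankings = {
    "geNorm":     ["EIF", "18S", "TBP"],
    "NormFinder": ["EIF", "TBP", "18S"],
    "BestKeeper": ["18S", "EIF", "TBP"],
}
final = comprehensive_rank(rankings)  # ["EIF", "18S", "TBP"]
```

The geometric mean keeps one dissenting algorithm (here BestKeeper) from overturning a gene that the others consistently rank first.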
Quantitative RT-PCR Gene Evaluation and RNA Interference in the Brown Marmorated Stink Bug
Bansal, Raman; Mittapelly, Priyanka; Chen, Yuting; Mamidala, Praveen; Zhao, Chaoyang; Michel, Andy
2016-01-01
The brown marmorated stink bug (Halyomorpha halys) has emerged as one of the most important invasive insect pests in the United States. Functional genomics in H. halys remains unexplored, as molecular resources in this insect have only recently been developed. To facilitate functional genomics research, we evaluated ten common insect housekeeping genes (RPS26, EF1A, FAU, UBE4A, ARL2, ARP8, GUS, TBP, TIF6 and RPL9) for their stability across various treatments in H. halys. Our treatments included two biotic factors (tissues and developmental stages) and two stress treatments (RNAi injection and starvation). Reference gene stability was determined using three software algorithms (geNorm, NormFinder, BestKeeper) and a web-based tool (RefFinder). The qRT-PCR results indicated that ARP8 and UBE4A exhibit the most stable expression across tissues and developmental stages, ARL2 and FAU for the dsRNA treatment, and TBP and UBE4A for the starvation treatment. Following the dsRNA treatment, all genes except GUS showed relatively stable expression. To demonstrate the utility of validated reference genes in accurate gene expression analysis and to explore gene silencing in H. halys, we performed RNAi by administering dsRNA of a target gene (catalase) through microinjection. A successful RNAi response with over 90% reduction in expression of the target gene was observed. PMID:27144586
Liu, Yong; Zhou, Xuguo
2014-01-01
To facilitate gene expression studies and obtain accurate qRT-PCR analysis, normalization relative to stably expressed housekeeping genes is required. In this study, expression profiles of 11 candidate reference genes, including actin (Actin), elongation factor 1 α (EF1A), TATA-box-binding protein (TATA), ribosomal protein L12 (RPL12), β-tubulin (Tubulin), NADH dehydrogenase (NADH), vacuolar-type H+-ATPase (v-ATPase), succinate dehydrogenase B (SDHB), 28S ribosomal RNA (28S), 16S ribosomal RNA (16S), and 18S ribosomal RNA (18S) from the pea aphid Acyrthosiphon pisum, under different developmental stages and temperature conditions, were investigated. A total of four analytical tools, geNorm, NormFinder, BestKeeper, and the ΔCt method, were used to evaluate the suitability of these genes as endogenous controls. According to RefFinder, a web-based software tool which integrates all four above-mentioned algorithms to compare and rank the reference genes, SDHB, 16S, and NADH were the three most stable housekeeping genes under different developmental stages and temperatures. This work is intended to establish a standardized qRT-PCR protocol in the pea aphid and serves as a starting point for genomics and functional genomics research in this emerging insect model. PMID:25423476
Dzaki, Najat; Ramli, Karima N.; Azlan, Azali; Ishak, Intan H.; Azzam, Ghows
2017-01-01
The mosquito Aedes aegypti (Ae. aegypti) is the most notorious vector of illness-causing viruses such as Dengue, Chikungunya, and Zika. Although numerous genetic expression studies utilizing quantitative real-time PCR (qPCR) have been conducted with regards to Ae. aegypti, a panel of genes suitable for use as references for the purpose of expression-level normalization within this epidemiologically important insect is presently lacking. Here, the usability of seven widely-utilized reference genes, i.e. actin (ACT), eukaryotic elongation factor 1 alpha (eEF1α), alpha tubulin (α-tubulin), ribosomal proteins L8, L32 and S17 (RPL8, RPL32 and RPS17), and glyceraldehyde 3-phosphate dehydrogenase (GAPDH), was investigated. Expression patterns of the reference genes were observed in sixteen pre-determined developmental stages and in cell culture. Gene stability was inferred from qPCR data through three freely available algorithms, i.e. BestKeeper, geNorm, and NormFinder. The consensus rankings generated from stability values provided by these programs suggest a combination of at least two genes for normalization. ACT and RPS17 are the most dependably expressed reference genes and therefore, we propose an ACT/RPS17 combination for normalization in all Ae. aegypti derived samples. GAPDH performed least desirably, and is thus not a recommended reference gene. This study emphasizes the importance of validating reference genes in Ae. aegypti for qPCR based research. PMID:28300076
Dai, Tian-Mei; Lü, Zhi-Chuang; Liu, Wan-Xue; Wan, Fang-Hao
2017-01-01
The Bemisia tabaci Mediterranean (MED) cryptic species has been rapidly invading most parts of the world owing to its strong ecological adaptability, and is considered a model insect for stress tolerance studies under rapidly changing environments. Selection of a suitable reference gene for quantitative stress-responsive gene expression analysis based on qRT-PCR is critical for elaborating the molecular mechanisms of thermotolerance. To obtain accurate and reliable normalization data in MED, eight candidate reference genes (β-act, GAPDH, β-tub, EF1-α, GST, 18S, RPL13A and α-tub) were examined under various thermal stresses for varied time periods, using the geNorm, NormFinder and BestKeeper algorithms, respectively. Our results revealed that β-tub and EF1-α were the best reference genes across all sample sets. On the other hand, 18S and GAPDH showed the least stability for all the samples studied. β-act proved to be highly stable only in the case of short-term thermal stresses. To our knowledge this is the first comprehensive report on validation of reference genes under varying temperature stresses in MED. The study could expedite the discovery of thermotolerance genes in MED. Further, the present results can form the basis of further research on suitable reference genes in this invasive insect and will facilitate transcript profiling in other invasive insects.
Wen, Shuxiang; Chen, Xiaoling; Xu, Fuzhou; Sun, Huiling
2016-01-01
Real-time quantitative reverse transcription PCR (qRT-PCR) offers a robust method for measurement of gene expression levels. Selection of reliable reference gene(s) for gene expression studies helps reduce variation derived from differing amounts of RNA and cDNA and from the efficiency of the reverse transcriptase or polymerase enzymes. Until now, reference genes identified for other members of the family Pasteurellaceae have not been validated for Avibacterium paragallinarum. The aim of this study was to validate nine reference genes of serovar A, B, and C strains of A. paragallinarum in different growth phases by qRT-PCR. Three of the most widely used statistical algorithms, geNorm, NormFinder and the ΔCT method, were used to evaluate the expression stability of the reference genes. Data analyzed by overall rankings showed that in the exponential and stationary phases of serovar A, the most stable reference genes were gyrA and atpD, respectively; in the exponential and stationary phases of serovar B, the most stable reference genes were atpD and recN, respectively; and in the exponential and stationary phases of serovar C, the most stable reference genes were rpoB and recN, respectively. This study provides recommendations for stable endogenous control genes for use in further studies involving measurement of gene expression levels. PMID:27942007
Panahi, Yasin; Salasar Moghaddam, Fahimeh; Ghasemi, Zahra; Hadi Jafari, Mandana; Shervin Badv, Reza; Eskandari, Mohamad Reza; Pedram, Mehrdad
2016-01-01
Childhood autism is a severe form of a complex, genetically heterogeneous, and behaviorally defined set of neurodevelopmental diseases, collectively termed autism spectrum disorders (ASD). Reverse transcriptase quantitative real-time PCR (RT-qPCR) is a highly sensitive technique for transcriptome analysis, and it has been frequently used in ASD gene expression studies. However, normalization to stably expressed reference gene(s) is necessary to validate any alteration reported at the mRNA level for target genes. The main goal of the present study was to find the most stable reference genes in the salivary transcriptome for RT-qPCR analysis in non-syndromic male childhood autism. Saliva samples were obtained from nine drug-naïve non-syndromic male children with autism and also sex-, age-, and location-matched healthy controls, using the RNA-stabilizer kit from DNA Genotek. A systematic two-phased measurement of whole saliva mRNA levels for eight common housekeeping genes (HKGs) was carried out by RT-qPCR, and the stability of expression for each candidate gene was analyzed using two specialized algorithms, geNorm and NormFinder, in parallel. Our analysis shows that while the frequently used HKG ACTB is not a suitable reference gene, the combination of GAPDH and YWHAZ could be recommended for normalization of RT-qPCR analysis of the salivary transcriptome in non-syndromic autistic male children. PMID:27754318
Simon, Á; Jávor, A; Bai, P; Oláh, J; Czeglédi, L
2017-03-15
This study was designed to investigate the stability of 10 candidate reference genes, namely ACTB, B2M, GAPDH, HMBS, LBR, POLR2B, RN18S, RPS17, TBP, and YWHAZ, for the normalization of gene expression data obtained by quantitative real-time polymerase chain reaction (qPCR) in studies related to feed intake of chicken. Samples were isolated from hypothalamus under three different nutritional conditions (ad libitum, fasted for 24 hr, fasted for 24 hr then refed for 2 hr). Five different algorithms were applied for the analysis of reference gene stability: BestKeeper, geNorm, NormFinder, the comparative ΔCt method, and a novel approach using multivariate linear mixed-effects modelling for stable reference gene selection. TBP and POLR2B were identified as the two most suitable and B2M and RN18S as the two least stable reference genes for normalization. Although our review of the current literature shows that RN18S is one of the most commonly used reference genes in chicken gene expression studies, its applicability for normalization should be evaluated before each qPCR experiment.
Ji, Chang Yoon; Park, Seyeon; Jeong, Jae cheol; Lee, Haeng-Soon; Kwak, Sang-Soo
2012-01-01
Reverse transcription quantitative real-time PCR (RT-qPCR) has become one of the most widely used methods for gene expression analysis, but its successful application depends on the stability of suitable reference genes used for data normalization. In plant studies, the choice and optimal number of reference genes must be experimentally determined for the specific conditions, plant species, and cultivars. In this study, ten candidate reference genes of sweetpotato (Ipomoea batatas) were isolated and the stability of their expression was analyzed using two algorithms, geNorm and NormFinder. The samples consisted of tissues from four sweetpotato cultivars subjected to four different environmental stress treatments, i.e., cold, drought, salt and oxidative stress. The results showed that, for sweetpotato, individual reference genes or combinations thereof should be selected for use in data normalization depending on the experimental conditions and the particular cultivar. In general, the genes ARF, UBI, COX, GAP and RPL were validated as the most suitable reference gene set for every cultivar across total tested samples. Interestingly, the genes ACT and TUB, although widely used, were not the most suitable reference genes in different sweetpotato sample sets. Taken together, these results provide guidelines for reference gene(s) selection under different experimental conditions. In addition, they serve as a foundation for the more accurate and widespread use of RT-qPCR in various sweetpotato cultivars. PMID:23251557
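The "optimal number of reference genes" referred to in this abstract is usually settled with geNorm's pairwise-variation statistic V(n/n+1): the standard deviation of the log2 ratio between normalization factors (per-sample geometric means) built from the n and the n+1 most stable genes, with V below roughly 0.15 conventionally taken to mean n genes suffice. A toy sketch under that convention, with invented expression values:

```python
import math
from statistics import stdev

def pairwise_variation(expr, ranked, n):
    """geNorm V(n/n+1): stability of the normalization factor when the
    (n+1)-th most stable gene is added.

    expr maps gene -> relative expression per sample;
    ranked lists genes from most to least stable."""
    def nf(k):
        # normalization factor: per-sample geometric mean of top-k genes
        top = ranked[:k]
        n_samples = len(expr[top[0]])
        return [math.prod(expr[g][s] for g in top) ** (1 / k)
                for s in range(n_samples)]
    nf_n, nf_n1 = nf(n), nf(n + 1)
    return stdev(math.log2(a / b) for a, b in zip(nf_n, nf_n1))

# Invented data: three ranked candidates over three samples.
expr = {
    "ARF": [1.0, 1.2, 0.9],
    "UBI": [2.0, 2.3, 1.9],
    "COX": [1.1, 1.0, 1.2],
}
v = pairwise_variation(expr, ["ARF", "UBI", "COX"], 2)
# v < 0.15 here, so two genes would suffice for this toy data
```

In practice V is computed for successive n until it falls below the 0.15 guideline, which is how studies decide between one reference gene and a combination.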
Shivhare, Radha; Lata, Charu
2016-01-01
Pearl millet [Pennisetum glaucum (L.) R. Br.], a widely used grain and forage crop, is grown in areas frequented by one or more abiotic stresses, has superior drought and heat tolerance, and is considered a model crop for stress tolerance studies. Selection of suitable reference genes for quantification of target stress-responsive gene expression through quantitative real-time (qRT)-PCR is important for elucidating the molecular mechanisms of improved stress tolerance. For precise normalization of gene expression data in pearl millet, ten candidate reference genes were examined in various developmental tissues as well as under different individual abiotic stresses and their combinations at 1 h (early) and 24 h (late) of stress, using the geNorm, NormFinder and RefFinder algorithms. Our results revealed EF-1α and UBC-E2 as the best reference genes across all samples; their suitability was confirmed by assessing the relative expression of a PgAP2-like ERF gene, which suggested that the use of these two reference genes is sufficient for accurate transcript normalization under different stress conditions. To our knowledge this is the first report on validation of reference genes under different individual and multiple abiotic stresses in pearl millet. The study can further facilitate the discovery of stress-tolerance genes in this important stress-tolerant crop. PMID:26972345
Wang, Peihong; Xiong, Aisheng; Gao, Zhihong; Yu, Xinyi; Li, Man; Hou, Yingjun; Sun, Chao; Qu, Shenchun
2016-01-01
The success of quantitative real-time reverse transcription polymerase chain reaction (RT-qPCR) in quantifying gene expression depends on the stability of the reference genes used for data normalization. To date, systematic screening for reference genes in persimmon (Diospyros kaki Thunb.) has never been reported. In this study, 13 candidate reference genes were cloned from 'Nantongxiaofangshi' using information available in the transcriptome database. Their expression stability was assessed by the geNorm and NormFinder algorithms under abiotic stress and hormone stimulation. Our results showed that the most suitable reference genes across all samples were UBC and GAPDH, and not the commonly used persimmon reference gene ACT. In addition, UBC combined with RPII or TUA was found to be appropriate for the "abiotic stress" group, and α-TUB combined with PP2A was found to be appropriate for the "hormone stimuli" group. For further validation, the transcript level of the DkDREB2C homologue under heat stress was studied with the selected genes (CYP, GAPDH, TUA, UBC, α-TUB, and EF1-α). The results suggested that it is necessary to choose appropriate reference genes according to the test materials or experimental conditions. Our study will be useful for future studies on gene expression in persimmon. PMID:27513755
Sinha, Pallavi; Saxena, Rachit K.; Singh, Vikas K.; Krishnamurthy, L.; Varshney, Rajeev K.
2015-01-01
To identify stable housekeeping genes as a reference for expression analysis under heat and salt stress conditions in pigeonpea, the relative expression variation for 10 commonly used housekeeping genes (EF1α, UBQ10, GAPDH, 18Sr RNA, 25Sr RNA, TUB6, ACT1, IF4α, UBC, and HSP90) was studied in root, stem, and leaf tissues of Asha (ICPL 87119), a leading pigeonpea variety. Three statistical algorithms, geNorm, NormFinder, and BestKeeper, were used to define the stability of the candidate genes. Under heat stress, UBC, HSP90, and GAPDH were found to be the most stable reference genes. In the case of salinity stress, GAPDH followed by UBC and HSP90 were identified as the most stable reference genes. Subsequently, the above identified genes were validated using qRT-PCR based gene expression analysis of two universal stress-responsive genes, namely uspA and uspB. The relative quantification of these two genes varied according to the internal controls (most stable, least stable, and a combination of most stable and least stable housekeeping genes), confirming the importance of the choice and validation of internal controls in such experiments. The identified and validated housekeeping genes will facilitate gene expression studies under heat and salt stress conditions in pigeonpea. PMID:27242803
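The observation that relative quantification "varied according to the internal controls" falls straight out of the Livak 2^-ΔΔCq formula: any shift in the reference gene's own Cq between conditions is absorbed into the target's apparent fold change. A hypothetical numeric sketch (Cq values invented; assumes ~100% amplification efficiency):

```python
def fold_change(ct_tgt_ctrl, ct_tgt_trt, ct_ref_ctrl, ct_ref_trt):
    """Livak 2^-ΔΔCq relative quantification (assumes ~100% efficiency)."""
    ddcq = (ct_tgt_trt - ct_ref_trt) - (ct_tgt_ctrl - ct_ref_ctrl)
    return 2 ** -ddcq

# A stress-induced target drops 3 cycles under treatment.
tgt_ctrl, tgt_trt = 28.0, 25.0

fc_stable = fold_change(tgt_ctrl, tgt_trt, 20.0, 20.1)    # ref barely moves
fc_unstable = fold_change(tgt_ctrl, tgt_trt, 20.0, 22.0)  # ref itself shifts
# With the stable reference the ~8-fold induction is recovered (≈8.6);
# with the unstable one it is inflated to exactly 32-fold.
```

A 2-cycle drift in the reference thus multiplies the reported fold change by 2² = 4, which is why validation of internal controls matters.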
Evaluation of Reference Genes for Quantitative Real-Time PCR in Songbirds
Zinzow-Kramer, Wendy M.; Horton, Brent M.; Maney, Donna L.
2014-01-01
Quantitative real-time PCR (qPCR) is becoming a popular tool for the quantification of gene expression in the brain and endocrine tissues of songbirds. Accurate analysis of qPCR data relies on the selection of appropriate reference genes for normalization, yet few papers on songbirds contain evidence of reference gene validation. Here, we evaluated the expression of ten potential reference genes (18S, ACTB, GAPDH, HMBS, HPRT, PPIA, RPL4, RPL32, TFRC, and UBC) in brain, pituitary, ovary, and testis in two species of songbird: zebra finch and white-throated sparrow. We used two algorithms, geNorm and NormFinder, to assess the stability of these reference genes in our samples. We found that the suitability of some of the most popular reference genes for target gene normalization in mammals, such as 18S, depended highly on tissue type. Thus, they are not the best choices for brain and gonad in these songbirds. In contrast, we identified alternative genes, such as HPRT, RPL4 and PPIA, that were highly stable in brain, pituitary, and gonad in these species. Our results suggest that the validation of reference genes in mammals does not necessarily extrapolate to other taxonomic groups. For researchers wishing to identify and evaluate suitable reference genes for qPCR in songbirds, our results should serve as a starting point and should help increase the power and utility of songbird models in behavioral neuroendocrinology. PMID:24780145
Gebeh, Alpha K; Marczylo, Emma L; Amoako, Akwasi A; Willets, Jonathon M; Konje, Justin C
2012-01-01
RT-qPCR is commonly employed in gene expression studies in ectopic pregnancy. Most use RN18S1, β-actin or GAPDH as internal controls without validation of their suitability as reference genes. A systematic study of the suitability of endogenous reference genes for gene expression studies in ectopic pregnancy is lacking. The aims of this study were therefore to evaluate the stability of 12 reference genes and suggest those that are stable for use as internal control genes in fallopian tubes and endometrium from ectopic pregnancy and healthy non-pregnant controls. Analysis of the results showed that the genes consistently ranked in the top six by the geNorm and NormFinder algorithms were UBC, GAPDH, CYC1 and EIF4A2 (fallopian tubes) and UBC and ATP5B (endometrium). mRNA expression of NAPE-PLD as a test gene of interest varied between the groups depending on which of the 12 reference genes was used as the internal control. This study demonstrates that arbitrary selection of reference genes for normalisation in RT-qPCR studies in ectopic pregnancy without validation risks producing inaccurate data and should therefore be discouraged.
Li, Xiuying; Yang, Qiwei; Bai, Jinping; Xuan, Yali; Wang, Yimin
2015-01-01
Normalization to a reference gene is the method of choice for quantitative reverse transcription-PCR (RT-qPCR) analysis. The stability of reference genes is critical for accurate experimental results and conclusions. We have evaluated the expression stability of eight commonly used reference genes in four different types of human mesenchymal stem cells (MSC). Using the geNorm, NormFinder and BestKeeper algorithms, we show that beta-2-microglobulin and peptidyl-prolyl isomerase A were the optimal reference genes for normalizing RT-qPCR data obtained from MSC, whereas the TATA box binding protein was not suitable due to its extensive variability in expression. Our findings emphasize the significance of validating reference genes for qPCR analyses. We offer a short list of reference genes to use for normalization and recommend some commercially-available software programs as a rapid approach to validate reference genes. We also demonstrate that two frequently used reference genes, β-actin and glyceraldehyde-3-phosphate dehydrogenase, are not suitable in many cases.
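BestKeeper, one of the three algorithms used here, starts from descriptive statistics of the raw Cq values: a candidate whose Cq standard deviation exceeds one cycle is conventionally flagged as too variable (the full tool additionally computes a BestKeeper index and pairwise correlations, omitted in this sketch). The Cq values below are invented, echoing the TBP result from the abstract:

```python
from statistics import mean, pstdev

def bestkeeper_stats(cq):
    """Mean and SD of raw Cq per candidate gene (BestKeeper's first step).

    cq maps gene -> list of Cq values; SD > 1 cycle is conventionally
    treated as unacceptable variability."""
    return {g: (mean(v), pstdev(v)) for g, v in cq.items()}

# Invented Cq values across four samples.
cq = {
    "B2M":  [18.0, 18.2, 17.9, 18.1],
    "PPIA": [21.0, 21.3, 20.8, 21.1],
    "TBP":  [26.0, 28.5, 25.1, 27.9],  # extensively variable, as reported
}
stats = bestkeeper_stats(cq)
unstable = [g for g, (_, sd) in stats.items() if sd > 1]  # ["TBP"]
```

Because the screen operates on Cq directly (a log2 scale), an SD of 1 cycle already corresponds to a roughly 2-fold spread in expression.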
Gong, Huan; Sun, Liang; Chen, Beidong; Han, Yiwen; Pang, Jing; Wu, Wei; Qi, Ruomei; Zhang, Tie-mei
2016-01-01
Reverse transcription quantitative polymerase chain reaction (RT-qPCR) is a routine method for gene expression analysis, and reliable results depend on proper normalization by stable reference genes. Caloric restriction (CR) is a robust lifestyle intervention to slow aging and delay the onset of age-associated diseases via inducing global changes in gene expression. Reliable normalization of RT-qPCR data therefore becomes crucial in CR studies. In this study, the expression stability of 12 candidate reference genes was evaluated in inguinal white adipose tissue (iWAT), skeletal muscle (Sk.M) and liver of CR mice by using three algorithms, geNorm, NormFinder, and BestKeeper. Our results showed β2m, Ppia and Hmbs as the most stable genes in iWAT, Sk.M and liver, respectively. Moreover, two reference genes were sufficient to normalize RT-qPCR data in each tissue, and the suitable pairs of reference genes were β2m-Hprt in iWAT, Ppia-Gusb in Sk.M and Hmbs-β2m in liver. By contrast, the least stable gene in iWAT and Sk.M was Gapdh, and in liver it was Pgk1. Furthermore, the expression of Leptin and Ppar-γ was profiled in these tissues to validate the selected reference genes. Our data provide a basis for gene expression analysis in future CR studies. PMID:27922100
Zhang, Qi-Lin; Zhu, Qian-Hua; Liao, Xin; Wang, Xiu-Qiang; Chen, Tao; Xu, Han-Ting; Wang, Juan; Yuan, Ming-Long; Chen, Jun-Yuan
2016-01-01
Amphioxus is the closest living proxy to the common ancestor of cephalochordates and vertebrates, and a key animal for novel understanding of the evolutionary origin of the vertebrate body plan, genome, tissues and immune system. Reliable analyses using quantitative real-time PCR (qRT-PCR) for answering these scientific questions are heavily dependent on reliable reference genes (RGs). In this study, we evaluated the stability of thirteen candidate RGs in qRT-PCR for different developmental stages and tissues of amphioxus by four independent algorithms (geNorm, NormFinder, BestKeeper and deltaCt) and one comparative algorithm (RefFinder). The results showed that the top two stable RGs were the following: (1) S20 and 18S in thirteen developmental stages, (2) EF1A and ACT in seven normal tissues, (3) S20 and L13 in both intestine and hepatic caecum challenged with lipopolysaccharide (LPS), and (4) S20 and EF1A in gill challenged with LPS. The expression profiles of two target genes (EYA and HHEX) in thirteen developmental stages were used to confirm the reliability of the chosen RGs. This study identified optimal RGs that can be used to accurately measure gene expression under these conditions, which will benefit evolutionary and functional genomics studies in amphioxus. PMID:27869224
Identification of stable housekeeping genes in response to ionizing radiation in cancer research
Iyer, Gopal; Wang, Albert R.; Brennan, Sean R.; Bourgeois, Shay; Armstrong, Eric; Shah, Pari; Harari, Paul M.
2017-01-01
Housekeeping genes (HKGs) are essential for basic maintenance of a variety of cellular processes. They ideally maintain uniform expression independent of experimental conditions. However, the effects of ionizing radiation (IR) on HKG expression are unclear. The statistical algorithms geNorm and NormFinder were used for estimating the stability of HKGs, as raw quantification cycle (Cq) values were not a reliable factor for normalization. Head and neck, non-small cell lung and pancreatic cancer cells were exposed to 2, 4 and 6 Gy IR doses, and the expression of fourteen HKGs was measured at 5 min to 48 h post-irradiation within a given tissue. Paired and single cell line analyses under these experimental conditions identified TATA-Box Binding Protein (TBP) and Importin 8 (IPO8) to be stable in non-small cell lung cancer. In addition to these two genes, Ubiquitin C (UBC) in head and neck cancer and Transferrin receptor (TFRC) and β-Glucuronidase (GUSB) in pancreatic cancer were identified to be stable as well. In summary, we present a resource of the top-ranked five stable HKGs and their transcriptional behavior in commonly used cancer model cell lines, and suggest that the use of multiple HKGs under radiation treatment conditions is a reliable metric for quantifying gene expression. PMID:28262749
Cheung, Tanya T.; Weston, Mitchell K.
2017-01-01
The development of the brain is sex-dimorphic, and as a result so are many neurological disorders. One approach for studying sex-dimorphic brain development is to measure gene expression in biological samples using RT-qPCR. However, the accuracy and consistency of this technique rely on the reference gene(s) selected. We analyzed the expression of ten reference genes in male and female samples over three stages of brain development, using the popular algorithms NormFinder, geNorm and BestKeeper. The top ranked reference genes at each time point were further used to quantify gene expression of three sex-dimorphic genes (Wnt10b, Xist and CYP7B1). When comparing gene expression between the sexes at specific time points, the best reference gene combinations are: Sdha/Pgk1 at E11.5, RpL38/Sdha at E12.5, and Actb/RpL37 at E15.5. When studying expression across time, the ideal reference gene(s) differ with sex. For XY samples, a combination of Actb/Sdha was the best choice. In contrast, when studying gene expression across developmental stages with XX samples, Sdha/Gapdh were the top reference genes. Our results identify the best combination of two reference genes when studying male and female brain development, and emphasize the importance of selecting the correct reference genes for comparisons between developmental stages. PMID:28133578
Pereira-Fantini, Prue M.; Rajapaksa, Anushi E.; Oakley, Regina; Tingay, David G.
2016-01-01
Preterm newborns often require invasive respiratory support; however, even brief periods of supported ventilation applied inappropriately to the lung can cause injury. Real-time quantitative reverse transcriptase-PCR (qPCR) has been extensively employed in studies of ventilation-induced lung injury, with the reference gene 18S ribosomal RNA (18S RNA) most commonly employed as the internal control reference gene. Whilst the results of these studies depend on the stability of the reference gene employed, the use of 18S RNA has not been validated. In this study the expression profile of five candidate reference genes (18S RNA, ACTB, GAPDH, TOP1 and RPS29) in two geographical locations was evaluated by dedicated algorithms, including geNorm, NormFinder, BestKeeper and the ΔCt method, and the overall stability of these candidate genes was determined (RefFinder). Secondary studies examined the influence of reference gene choice on the relative expression of two well-validated lung injury markers, EGR1 and IL1B. In the setting of the preterm lamb model of lung injury, RPS29 reference gene expression was influenced by tissue location; however, we determined that individual ventilation strategies influence reference gene stability. Whilst 18S RNA is the most commonly employed reference gene in preterm lamb lung studies, our results suggest that GAPDH is a more suitable candidate. PMID:27210246
Panahi, Yasin; Salasar Moghaddam, Fahimeh; Ghasemi, Zahra; Hadi Jafari, Mandana; Shervin Badv, Reza; Eskandari, Mohamad Reza; Pedram, Mehrdad
2016-10-12
Childhood autism is a severe form of a complex, genetically heterogeneous, and behaviorally defined set of neurodevelopmental diseases, collectively termed autism spectrum disorders (ASD). Reverse transcriptase quantitative real-time PCR (RT-qPCR) is a highly sensitive technique for transcriptome analysis, and it has been frequently used in ASD gene expression studies. However, normalization to stably expressed reference gene(s) is necessary to validate any alteration reported at the mRNA level for target genes. The main goal of the present study was to find the most stable reference genes in the salivary transcriptome for RT-qPCR analysis in non-syndromic male childhood autism. Saliva samples were obtained from nine drug-naïve non-syndromic male children with autism and from sex-, age-, and location-matched healthy controls using the RNA-stabilizer kit from DNA Genotek. A systematic two-phased measurement of whole-saliva mRNA levels for eight common housekeeping genes (HKGs) was carried out by RT-qPCR, and the stability of expression for each candidate gene was analyzed using two specialized algorithms, geNorm and NormFinder, in parallel. Our analysis shows that while the frequently used HKG ACTB is not a suitable reference gene, the combination of GAPDH and YWHAZ can be recommended for normalization of RT-qPCR analysis of the salivary transcriptome in non-syndromic autistic male children.
Wang, Yaolong; Liu, Juan; Wang, Xumin; Liu, Shuang; Wang, Guoliang; Zhou, Junhui; Yuan, Yuan; Chen, Tiying; Jiang, Chao; Zha, Liangping; Huang, Luqi
2016-01-01
MicroRNAs (miRNAs), which play crucial regulatory roles in plant secondary metabolism and responses to the environment, could be developed as promising biomarkers for different varieties and production areas of herbal medicines. However, limited information is available for miRNAs from Lonicera japonica, which is widely used in East Asian countries owing to various pharmaceutically active secondary metabolites. Selection of suitable reference genes for quantification of target miRNA expression through quantitative real-time (qRT)-PCR is important for elucidating the molecular mechanisms of secondary metabolic regulation in different tissues and varieties of L. japonica. For precise normalization of gene expression data in L. japonica, 16 candidate miRNAs were examined in three tissues, as well as 21 cultivated varieties collected from 16 production areas, using GeNorm, NormFinder, and RefFinder algorithms. Our results revealed the combination of u534122 and u3868172 as the best reference genes across all samples. Their specificity was confirmed by detecting the cycling threshold (Ct) value ranges in different varieties of L. japonica collected from diverse production areas, suggesting the use of these two reference miRNAs is sufficient for accurate transcript normalization with different tissues, varieties, and production areas. To our knowledge, this is the first report on validation of reference miRNAs in honeysuckle (Lonicera spp.). Results from this study can further facilitate discovery of functional regulatory miRNAs in different varieties of L. japonica. PMID:27507983
NASA Astrophysics Data System (ADS)
Zhao, Ye; Chen, Muyan; Wang, Tianming; Sun, Lina; Xu, Dongxue; Yang, Hongsheng
2014-11-01
Quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR) is a technique that is widely used for gene expression analysis, and its accuracy depends on the expression stability of the internal reference genes used as normalization factors. However, many applications of qRT-PCR used housekeeping genes as internal controls without validation. In this study, the expression stability of eight candidate reference genes in three tissues (intestine, respiratory tree, and muscle) of the sea cucumber Apostichopus japonicus was assessed during normal growth and aestivation using the geNorm, NormFinder, delta CT, and RefFinder algorithms. The results indicate that the reference genes exhibited significantly different expression patterns among the three tissues during aestivation. In general, the β-tubulin (TUBB) gene was relatively stable in the intestine and respiratory tree tissues. The optimal reference gene combination for intestine was 40S ribosomal protein S18 (RPS18), TUBB, and NADH dehydrogenase (NADH); for respiratory tree, it was β-actin (ACTB), TUBB, and succinate dehydrogenase cytochrome B small subunit (SDHC); and for muscle it was α-tubulin (TUBA) and NADH dehydrogenase [ubiquinone] 1 α subcomplex subunit 13 (NDUFA13). These combinations of internal control genes should be considered for use in further studies of gene expression in A. japonicus during aestivation.
Sang, Wen; He, Li; Wang, Xiao-Ping; Zhu-Salzman, Keyan; Lei, Chao-Liang
2015-04-01
Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) has become a widely used technique to quantify gene expression, and it is necessary to select appropriate reference genes for normalization. In the present study, we assessed the expression stability of seven candidate genes in Tribolium castaneum (Herbst) (Coleoptera: Tenebrionidae) irradiated by ultraviolet B (UVB) at different developmental stages for various irradiation time periods. The algorithms geNorm, NormFinder, and BestKeeper were applied to determine the stability of these candidate genes. The ribosomal protein genes RpS3 and RpL13A and the β-actin gene (ActB) showed the highest stability across all UVB irradiation time points, whereas expression of other commonly used reference genes, such as the β-tubulin gene (TUBB) and the E-cadherin gene (CAD), varied at different developmental stages. This study will potentially provide more suitable reference gene candidates for RT-qPCR analysis in T. castaneum subjected to environmental stresses, particularly UV irradiation.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
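The multiplicative weight updates rule referenced above is simple to state: each action keeps a weight, and after every round the weight is scaled by a factor that grows with the action's payoff; normalizing the weights gives a mixed strategy. A toy sketch with illustrative payoffs (not the paper's population-genetics setting):

```python
def mwua(payoff_row, epsilon=0.1, rounds=50):
    """Multiplicative weight updates: each action keeps a weight, scaled every
    round by (1 + epsilon * payoff). Returns the normalized mixed strategy."""
    weights = [1.0] * len(payoff_row)
    for _ in range(rounds):
        weights = [w * (1.0 + epsilon * p) for w, p in zip(weights, payoff_row)]
    total = sum(weights)
    return [w / total for w in weights]

# Action 0 consistently pays the most, so its mass should come to dominate,
# while lower-payoff actions retain small but nonzero probability.
probs = mwua([1.0, 0.2, 0.0])
```

The slow decay of the weaker actions' probabilities is the entropy side of the performance-entropy tradeoff the paper highlights.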
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
Irregular Applications: Architectures & Algorithms
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists, and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Basic cluster compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.
1980-01-01
Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
Algorithmic Complexity. Volume II.
1982-06-01
Describes how each algorithm works, gives an example, and discusses its inherent weaknesses and their causes. Topics include electrical network analysis (whose applicability Knuth mentions) and divide-and-conquer polynomial multiplication, in which products of two-coefficient polynomials are found by repeated application of a three-multiplication scheme using only scalar operations; the efficiency M(n) of the divide-and-conquer polynomial multiplication algorithm is then investigated.
ARPANET Routing Algorithm Improvements
1978-10-01
J. M. McQuillan, E. C. Rosen. This problem may persist for a very long time, causing extremely bad performance throughout the whole network; the routing algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion.
1983-10-13
Determines the solution using the Moore-Penrose inverse, and derives an expression for the mean square error [8,9]. Related papers: "An Iterative Algorithm for Finding the Minimum Eigenvalue of a Class of Symmetric Matrices," D. Fuhrmann and B. Liu, submitted to 1984 IEEE Int. Conf. Acoust., Speech, Sig. Proc.; "Approximating the Eigenvectors of a Symmetric Toeplitz Matrix," D. Fuhrmann and B. Liu, 1983 Allerton Conf.
2016-06-07
Uses the XBT's sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than... Data may be entered at the terminal in metric or English temperatures or sound speeds; the algorithm automatically determines which form each data point was entered in. Leroy's equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months...
Adaptive continuous twisting algorithm
NASA Astrophysics Data System (ADS)
Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid
2016-09-01
In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties, even in the case that the upper bound of the derivative of the perturbation exists, but it is unknown.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution straightaway through a GA without using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in this first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
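For concreteness, the kind of parameterized GA such a preprocessor would tune (population size, crossover and mutation probabilities, generations) can be sketched as a minimal bit-string GA; every parameter value below is an illustrative default, not output of the preprocessor described in the abstract:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, p_cross=0.9,
                      p_mut=0.02, generations=60, seed=1):
    """Minimal bit-string GA: tournament selection, one-point crossover,
    bitwise mutation, with the best individual ever seen retained."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:                 # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bitwise mutation: flip each bit with probability p_mut
            nxt.append([b ^ (rng.random() < p_mut) for b in p1])
            nxt.append([b ^ (rng.random() < p_mut) for b in p2])
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)
    return best, fitness(best)

# OneMax toy problem: maximize the number of 1-bits in the string.
ind, fit = genetic_algorithm(fitness=sum)
```

The preprocessor's job, in these terms, is choosing `pop_size`, `p_cross`, `p_mut`, and the encoding before this loop ever runs.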
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of hip symptoms is more difficult than in the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic and non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain may be varied. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.
Large scale tracking algorithms
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
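The triangle-based interpolation scheme can be illustrated one triangle at a time: an iso-level crosses a triangle edge wherever the endpoint values straddle the level, and linear interpolation locates the crossing. A sketch in Python rather than the original FORTRAN IV, with hypothetical data:

```python
def contour_segment(tri, level):
    """Return the (x, y) endpoints of the iso-line z = level crossing one
    triangle given as three (x, y, z) vertices, or None if no crossing.
    Crossings are located by linear interpolation along each edge."""
    pts = []
    edges = ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0]))
    for (x1, y1, z1), (x2, y2, z2) in edges:
        if (z1 - level) * (z2 - level) < 0:     # endpoints straddle the level
            t = (level - z1) / (z2 - z1)
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

# The z = 0.5 contour crosses two edges of this triangle.
seg = contour_segment([(0, 0, 0.0), (1, 0, 1.0), (0, 1, 0.0)], 0.5)
```

Chaining the segments produced for every triangle in the triangulation yields the contour lines of the plot.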
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
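Horner's method, the first of the two topics, evaluates a degree-n polynomial with only n multiplications by nesting: a_n x^n + ... + a_0 = ((a_n x + a_{n-1}) x + ...) x + a_0. A short sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, coefficients given from highest to lowest
    degree. Uses n multiplications for degree n instead of the naive count."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 2*3 - 1 = 5
value = horner([2, -6, 2, -1], 3)
```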
Greedy algorithms in disordered systems
NASA Astrophysics Data System (ADS)
Duxbury, P. M.; Dobrin, R.
1999-08-01
We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
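Dijkstra's minimal-cost-path algorithm illustrates the greedy pattern the abstract describes: always settle the frontier node with the smallest tentative distance. A compact sketch on a toy graph (the graph is illustrative, not from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Greedy minimal-cost paths: repeatedly settle the unvisited node with
    the smallest tentative distance. graph: node -> {neighbor: edge cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        cost, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, {}).items():
            if cost + w < dist.get(v, float("inf")):
                dist[v] = cost + w
                heapq.heappush(heap, (cost + w, v))
    return dist

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}, "d": {}}
shortest = dijkstra(g, "a")   # {"a": 0, "b": 1, "c": 3, "d": 6}
```

Prim's spanning-tree algorithm has the same extremal structure, which is what links these greedy methods to extremal dynamics in disordered systems.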
Grammar Rules as Computer Algorithms.
ERIC Educational Resources Information Center
Rieber, Lloyd
1992-01-01
One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
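The half-interval (bisection) search works by repeatedly halving an interval whose endpoints give function values of opposite sign. A Python sketch in place of the article's original program listing:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Half-interval search: given f(lo) and f(hi) of opposite sign, halve
    the interval repeatedly, keeping the half that still brackets a root."""
    if f(lo) * f(hi) >= 0:
        raise ValueError("root must be bracketed by [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid          # root lies in the left half
        else:
            lo = mid          # root lies in the right half
    return (lo + hi) / 2

# Root of x^2 - 2 on [1, 2] is sqrt(2).
root = bisect_root(lambda x: x * x - 2, 1.0, 2.0)
```

Verification is straightforward: the sign invariant f(lo)*f(hi) <= 0 holds at every iteration, and the interval width halves each pass, so the loop terminates with a root bracketed to within tol.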
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
NASA Astrophysics Data System (ADS)
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
Algorithm performance evaluation
NASA Astrophysics Data System (ADS)
Smith, Richard N.; Greci, Anthony M.; Bradley, Philip A.
1995-03-01
Traditionally, the performance of adaptive antenna systems is measured using automated antenna array pattern measuring equipment. This measurement equipment produces a plot of the receive gain of the antenna array as a function of angle. However, communications system users more readily accept and understand bit error rate (BER) as a performance measure. The work reported on here was conducted to characterize adaptive antenna receiver performance in terms of overall communications system performance using BER as a performance measure. The adaptive antenna system selected for this work featured a linear array, least mean square (LMS) adaptive algorithm and a high speed phase shift keyed (PSK) communications modem.
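The LMS adaptive algorithm used in that system updates filter weights along the instantaneous gradient of the squared error between the filter output and a desired signal. A minimal sketch identifying a known two-tap system (synthetic data and step size chosen for illustration, not the antenna-array setup of the study):

```python
import random

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: weights move along the
    instantaneous gradient of the squared error e = d - w.x per sample."""
    w = [0.0] * n_taps
    errors = []
    for n in range(n_taps - 1, len(x)):
        frame = x[n - n_taps + 1: n + 1][::-1]        # [x[n], x[n-1], ...]
        y = sum(wi * xi for wi, xi in zip(w, frame))  # filter output
        e = d[n] - y                                  # error vs. desired signal
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
        errors.append(e)
    return w, errors

# Identify a known 2-tap system d[n] = 0.5*x[n] + 0.25*x[n-1] from white input.
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(2000)]
d = [0.0] + [0.5 * x[n] + 0.25 * x[n - 1] for n in range(1, len(x))]
w, errs = lms_filter(x, d, n_taps=2, mu=0.1)
```

In an adaptive array the same update runs per antenna element, and the residual error sequence is what ultimately drives the BER measured in the study.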
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
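The onshore/offshore binarization d(i,j;n) can be sketched for a single time step. This is a toy sketch: the 45-135 degree onshore sector is an invented convention for a coastline with the sea to the east, not the convention used by CEM.

```python
import numpy as np

def binarize_wind(direction_deg):
    """Binarize gridded wind direction: 1 = onshore, 0 = offshore.

    Hypothetical convention: the coast runs north-south with the sea to
    the east, so winds blowing FROM the east (45..135 degrees in
    meteorological convention) are treated as onshore.
    """
    d = np.asarray(direction_deg)
    return ((d >= 45) & (d <= 135)).astype(np.uint8)

# d(i, j; n) at one 5-minute time step n, on a 1.25-km grid
# (i runs west-to-east, j runs south-to-north).
obs = np.array([[90.0, 270.0],
                [100.0, 10.0]])
d_grid = binarize_wind(obs)
```

The forecast field D(i,j;n) would be binarized the same way, after which the two binary grids can be compared contour-by-contour.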
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013 ; Wang, Khardon, Pechyony, & Jones, 2012 ), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.
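The flavor of an online pairwise least-squares update in an RKHS can be sketched with a Gaussian kernel. This is an illustrative toy, not OPERA itself: it pairs each new example only with its immediate predecessor and uses invented step-size constants, whereas the paper's algorithm and analysis involve all previous examples.

```python
import numpy as np

def gauss_kernel(x, z, sigma=1.0):
    return np.exp(-(x - z) ** 2 / (2 * sigma ** 2))

def online_pairwise_sketch(xs, ys, eta0=0.5, theta=0.75):
    """Online pairwise least squares in an RKHS (illustrative sketch).

    At step t, the new example (x_t, y_t) is paired with the previous
    one and a gradient step is taken on the pairwise loss
    (f(x_t) - f(x_s) - (y_t - y_s))^2 with polynomially decaying step
    size eta0 * t^(-theta). The iterate lives in the unconstrained RKHS
    and is stored as kernel-expansion coefficients.
    """
    pts, alphas = [xs[0]], [0.0]

    def f(x):
        return sum(a * gauss_kernel(p, x) for p, a in zip(pts, alphas))

    for t in range(1, len(xs)):
        xt, xp = xs[t], xs[t - 1]
        grad = 2 * (f(xt) - f(xp) - (ys[t] - ys[t - 1]))
        eta = eta0 * t ** (-theta)
        # the gradient direction in the RKHS is K(xt, .) - K(xp, .)
        pts += [xt, xp]
        alphas += [-eta * grad, eta * grad]
    return pts, alphas, f

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.25, 1.0, 2.25, 4.0]   # y = x^2 (toy data)
pts, alphas, f = online_pairwise_sketch(xs, ys)
```

Note the iterate is never projected onto a bounded domain, which mirrors the unconstrained setting the letter analyzes.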
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
Fighting Censorship with Algorithms
NASA Astrophysics Data System (ADS)
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
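The linear-system core of kriging, whose coefficient matrix is symmetric but not necessarily positive-definite, can be sketched with a dense solver. This is a toy sketch: the exponential covariance model, its parameters, and the sample data are invented, and the faster methods in the article (SYMMLQ, tapering, FMM) replace the dense solve used here.

```python
import numpy as np

def ordinary_kriging(coords, values, query, sill=1.0, corr_len=2.0):
    """Ordinary kriging sketch with an (assumed) exponential covariance.

    Solves the symmetric kriging system
        [C  1] [w ]   [c0]
        [1' 0] [mu] = [1 ]
    where C holds sample-to-sample covariances and c0 the sample-to-query
    covariances. The Lagrange row forces the weights to sum to 1, making
    the estimator unbiased; the system matrix is symmetric but not
    positive-definite, which is why SYMMLQ-type solvers apply.
    """
    def cov(h):
        return sill * np.exp(-h / corr_len)

    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - query, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values), w

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
est, w = ordinary_kriging(coords, values, np.array([0.1, 0.1]))
```

For large scattered data sets the dense matrix is the bottleneck, which motivates the tapering, FMM, and nearest-neighbor variants described above.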
One improved LSB steganography algorithm
NASA Astrophysics Data System (ADS)
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in a digital image with the plain LSB algorithm is easily detected, with high accuracy, by chi-square (X2) and RS steganalysis. We improved the LSB algorithm by reselecting the embedding locations and modifying the embedding method, combining a sub-affine transformation with matrix coding, and propose a new LSB algorithm. Experimental results show that the improved algorithm resists chi-square and RS steganalysis effectively.
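For reference, the baseline sequential LSB embedding that chi-square/RS steganalysis detects looks like this. This is a minimal sketch; the paper's improvements, permuting embedding locations via a sub-affine transform and using matrix coding to change fewer bits, are not reproduced.

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Plain sequential LSB embedding (the detectable baseline).

    Replaces the least-significant bit of the first len(bits) pixels in
    raster order, which is exactly the regular pattern that chi-square
    and RS steganalysis exploit.
    """
    out = pixels.copy()
    flat = out.ravel()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return out

def lsb_extract(pixels, n):
    """Read the message back from the first n least-significant bits."""
    return [int(p & 1) for p in pixels.ravel()[:n]]

img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
msg = [1, 0, 1, 1]
stego = lsb_embed(img, msg)
```

Each pixel changes by at most one gray level, so the image looks unchanged to the eye even though the statistics of the LSB plane are disturbed.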
Algorithm Diversity for Resilient Systems
2016-06-27
Grant number N000141512208. The project investigates changes to a program's state during execution. Specifically, it aims to develop techniques to introduce algorithm-level diversity, in contrast to existing work on execution-level diversity. Algorithm-level diversity can introduce larger differences between variants than execution-level diversity.
Fuentes, Alejandra; Ortiz, Javier; Saavedra, Nicolás; Salazar, Luis A; Meneses, Claudio; Arriagada, Cesar
2016-04-01
The gene expression stability of candidate reference genes in the roots and leaves of Solanum lycopersicum inoculated with arbuscular mycorrhizal fungi was investigated. Eight candidate reference genes including elongation factor 1 α (EF1), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphoglycerate kinase (PGK), protein phosphatase 2A (PP2Acs), ribosomal protein L2 (RPL2), β-tubulin (TUB), ubiquitin (UBI) and actin (ACT) were selected, and their expression stability was assessed to determine the most stable internal reference for quantitative PCR normalization in S. lycopersicum inoculated with the arbuscular mycorrhizal fungus Rhizophagus irregularis. The stability of each gene was analysed in leaves and roots, together and separately, using the geNorm and NormFinder algorithms. Differences were detected between leaves and roots, with the best-ranked genes varying depending on the algorithm used and the tissue analysed. PGK, TUB and EF1 showed higher stability in roots, while EF1 and UBI had higher stability in leaves. Statistical algorithms indicated that the GAPDH gene was the least stable under the experimental conditions assayed. We then analysed the expression levels of the LePT4 gene, a phosphate transporter whose expression is induced by fungal colonization in host plant roots. No differences were observed when the most stable genes were used as reference genes. However, when GAPDH was used as the reference gene, we observed an overestimation of LePT4 expression. In summary, our results revealed that candidate reference genes present variable stability in S. lycopersicum arbuscular mycorrhizal symbiosis depending on the algorithm and tissue analysed. Thus, reference gene selection is an important issue for obtaining reliable results in gene expression quantification.
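geNorm's expression-stability measure M, used in several of the studies above to rank candidate reference genes, can be sketched directly from its definition. This is an illustrative reimplementation with invented toy numbers, not the published software.

```python
import numpy as np

def genorm_m_values(expr):
    """geNorm stability measure M (sketch of the published definition).

    `expr` is a (samples x genes) array of relative expression
    quantities. For each gene j, M_j is the mean, over all other genes
    k, of the standard deviation across samples of log2(expr_j/expr_k).
    Lower M means more stably expressed.
    """
    logs = np.log2(expr)
    n_genes = logs.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

# toy data (samples x genes): genes 0 and 1 co-vary exactly, gene 2 is noisy
expr = np.array([[1.0, 2.0, 1.0],
                 [2.0, 4.0, 8.0],
                 [4.0, 8.0, 2.0]])
M = genorm_m_values(expr)
```

Genes whose expression ratios stay constant across samples get low M, so the two co-varying genes rank ahead of the noisy one, which is the behavior the abstracts rely on when selecting reference genes.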
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of order no higher than κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequence. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
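The 2-bits-per-base floor that DNABIT Compress undercuts with its repeat codes can be illustrated with a plain packing routine. This is a generic sketch, not the DNABIT code; the repeat-detection stage that earns the sub-2-bit ratios is omitted.

```python
def pack_dna(seq):
    """Pack a DNA string at exactly 2 bits/base.

    This is the information-theoretic floor for non-repetitive sequence
    over a 4-letter alphabet; DNABIT Compress goes below it by giving
    shorter codes to exact and reverse repeats (not shown here).
    """
    code = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    out, buf, nbits = bytearray(), 0, 0
    for base in seq:
        buf = (buf << 2) | code[base]
        nbits += 2
        if nbits == 8:           # flush a full byte (4 bases)
            out.append(buf)
            buf, nbits = 0, 0
    if nbits:                    # left-align any trailing partial byte
        out.append(buf << (8 - nbits))
    return bytes(out), len(seq)

def unpack_dna(packed, n):
    """Recover the first n bases from the packed form."""
    bases = "ACGT"
    return "".join(bases[(packed[i // 4] >> (6 - 2 * (i % 4))) & 0b11]
                   for i in range(n))

packed, n = pack_dna("ACGTTGCA")
```

Eight bases fit in two bytes, a 4x reduction over one-byte-per-character text storage; any further gain must come from exploiting repeats, which is DNABIT's contribution.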
Quantum algorithm for data fitting.
Wiebe, Nathan; Braun, Daniel; Lloyd, Seth
2012-08-03
We provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon an algorithm for solving systems of linear equations efficiently [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)]. In many cases, our algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data are pure quantum states, the algorithm can be used to provide an efficient parametric estimation of the quantum state and therefore can be applied as an alternative to full quantum-state tomography given a fault tolerant quantum computer.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a randomized search and optimization method based on natural selection and the genetic mechanisms of living organisms. In recent years, because of its potential for solving complex problems and its successful application in industrial engineering, the genetic algorithm has received wide attention from scholars at home and abroad. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. Experimental simulation results show that the algorithm finds better routes in less time with a more balanced network load, which improves the search ratio, the availability of network resources, and the quality of service.
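The GA machinery such a routing algorithm builds on can be sketched as a minimal generational loop. This is illustrative only: it is shown on the OneMax toy fitness with invented parameter values; a routing GA would encode candidate paths and use a path-cost fitness instead of bitstrings.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=60,
                      p_cx=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA: binary tournament selection, one-point
    crossover, and bit-flip mutation over fixed-length bitstrings.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def pick():
        a, b = rng.sample(pop, 2)            # binary tournament
        return list(a if fitness(a) >= fitness(b) else b)

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < p_cx:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):      # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # OneMax: fitness = number of 1-bits
```

Swapping the bitstring encoding for a path encoding (and `sum` for a delay/load cost) turns the same loop into a routing-selection search.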
Evaluation of housekeeping genes for quantitative gene expression analysis in the equine kidney
AZARPEYKAN, Sara; DITTMER, Keren E.
2016-01-01
Housekeeping genes (HKGs) are used as internal controls for normalising and calculating the relative expression of target genes in RT-qPCR experiments. There is no unique universal HKG, and HKGs vary among organisms and tissues, so this study aimed to determine the most stably expressed HKGs in the equine kidney. The evaluated HKGs included 18S ribosomal RNA (18S), 28S ribosomal RNA (28S), ribosomal protein L32 (RPL32), β-2-microglobulin (B2M), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), succinate dehydrogenase complex (SDHA), zeta polypeptide (YWHAZ), and hypoxanthine phosphoribosyltransferase 1 (HPRT1). The HKG expression stability data were analysed with two software packages, geNorm and NormFinder. The lowest geNorm stability values suggest that YWHAZ and HPRT1 are the most stable (M=0.31 and 0.32, respectively). Further, these two genes had the best pairwise stability value (geNorm V=0.085). Therefore, these two genes were considered the most useful for RT-qPCR studies in the equine kidney. PMID:27974876
Using DFX for Algorithm Evaluation
Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.
1998-10-20
Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features and a Scheme-language template for algorithm testing.
Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed
NASA Astrophysics Data System (ADS)
Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.
1995-07-01
Target acquisition in a high clutter environment in all-weather at any time of day represents a much needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical interface driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1 is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of the Toffoli and σz^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
Search for New Quantum Algorithms
2006-05-01
Topological computing for beginners (slide presentation), Lecture Notes for Chapter 9, Physics 219: Quantum Computation. The report's sections include a QHS algorithm for Feynman integrals (II.A.8) and non-abelian QHS algorithms (II.A.9). The idea is that NOT all environmentally entangling transformations are equally likely, particularly for spatially separated physical quantum systems.
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
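The underflow/overflow problem and the log-space remedy can be sketched as follows. This is a sketch of the general technique (carrying log-terms and accumulating with log-sum-exp), not the CUMPOIS code itself.

```python
import math

def cumulative_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), computed entirely in log space.

    The naive sum of exp(-lam) * lam**n / n! underflows for large lam
    (exp(-lam) -> 0) and overflows for large n (lam**n, n!). Working
    with log-terms and a running log-sum-exp avoids both failure modes.
    """
    log_term = -lam                      # log of the n = 0 term
    log_sum = log_term
    for n in range(1, k + 1):
        log_term += math.log(lam / n)    # ratio of consecutive terms
        hi, lo = max(log_sum, log_term), min(log_sum, log_term)
        log_sum = hi + math.log1p(math.exp(lo - hi))
    return math.exp(log_sum)

p_small = cumulative_poisson(2, 1.0)
p_large = cumulative_poisson(1000, 1000.0)  # naive lam**n / n! would overflow
```

Only the final result is exponentiated, so intermediate quantities stay in a representable range even when individual terms are far below the underflow threshold.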
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
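One reading of the constant-ratio landing law (speed held proportional to height, assuming that ratio is the quantity optical flow supplies) can be simulated in a few lines. The gain, time step, and initial height below are invented for illustration.

```python
def simulate_landing(h0=10.0, k=0.5, dt=0.01, t_end=20.0):
    """Toy landing law: command descent speed proportional to height.

    With v = k * h (the constant ratio k = speed/height is what
    time-to-collision from optical flow estimates), the height obeys
    dh/dt = -k * h, so h(t) = h0 * exp(-k t): speed decays together
    with height, and touchdown is smooth rather than an impact.
    """
    h, t, speeds = h0, 0.0, []
    while t < t_end:
        v = k * h          # commanded descent speed from the flow ratio
        h -= v * dt        # Euler integration of dh/dt = -v
        t += dt
        speeds.append(v)
    return h, speeds

h_final, speeds = simulate_landing()
```

Because speed never exceeds k times the current height, the final approach is arbitrarily slow; in practice the residual touchdown velocity is what the cushioning or leg system absorbs.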
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
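The sampling-based stopping idea can be sketched as a generic confidence-interval stopping rule. This is an illustrative sketch with an invented toy gap distribution, not the paper's specific rules or their asymptotic-validity conditions.

```python
import math
import random

def sample_until_confident(sample_gap, tol=0.05, z=1.96,
                           batch=100, max_samples=20000, seed=0):
    """Draw i.i.d. estimates of the optimality gap (upper bound minus
    lower bound) and stop once the half-width of the normal-approximation
    confidence interval for the mean gap falls below `tol`.

    `sample_gap` is a user-supplied function returning one unbiased
    observation of the gap.
    """
    rng = random.Random(seed)
    obs = []
    while len(obs) < max_samples:
        obs.extend(sample_gap(rng) for _ in range(batch))
        n = len(obs)
        mean = sum(obs) / n
        var = sum((x - mean) ** 2 for x in obs) / (n - 1)
        half = z * math.sqrt(var / n)    # CI half-width for the mean gap
        if half < tol:
            return mean, half, n
    return mean, half, n

# hypothetical gap distribution: true mean 0.2, noise sd 0.5
mean, half, n = sample_until_confident(lambda r: 0.2 + r.gauss(0.0, 0.5))
```

The sample size is chosen adaptively: noisier gap observations force more samples before the interval statement about solution quality can be made.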
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution: allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
Cui, Bintao; Smooker, Peter M; Rouch, Duncan A; Deighton, Margaret A
2016-08-01
Accurate and reproducible measurement of gene transcription requires appropriate reference genes, which are stably expressed under different experimental conditions to provide normalization. Staphylococcus capitis is a human pathogen that produces biofilm under stress, such as imposed by antimicrobial agents. In this study, a set of five commonly used staphylococcal reference genes (gyrB, sodA, recA, tuf and rpoB) were systematically evaluated in two clinical isolates of Staphylococcus capitis (S. capitis subspecies urealyticus and capitis, respectively) under erythromycin stress in mid-log and stationary phases. Two public software programs (geNorm and NormFinder) and two manual calculation methods, reference residue normalization (RRN) and relative quantitative (RQ), were applied. The potential reference genes selected by the four algorithms were further validated by comparing the expression of a well-studied biofilm gene (icaA) with phenotypic biofilm formation in S. capitis under four different experimental conditions. The four methods differed considerably in their ability to predict the most suitable reference gene or gene combination for comparing icaA expression under different conditions. Under the conditions used here, the RQ method provided better selection of reference genes than the other three algorithms; however, this finding needs to be confirmed with a larger number of isolates. This study reinforces the need to assess the stability of reference genes for analysis of target gene expression under different conditions and the use of more than one algorithm in such studies. Although this work was conducted using a specific human pathogen, it emphasizes the importance of selecting suitable reference genes for accurate normalization of gene expression more generally.
Zhang, Ming-Fang
2016-01-01
Normalization to reference genes is the most common method to avoid bias in real-time quantitative PCR (qPCR), which has been widely used for quantification of gene expression. Despite several studies on gene expression, Lilium, and particularly L. regale, has not been fully investigated regarding the evaluation of reference genes suitable for normalization. In this study, nine putative reference genes, namely 18S rRNA, ACT, BHLH, CLA, CYP, EF1, GAPDH, SAND and TIP41, were analyzed for accurate quantitative PCR normalization at different developmental stages and under different stress conditions, including biotic (Botrytis elliptica), drought, salinity, cold and heat stress. All these genes showed a wide variation in their Cq (quantification cycle) values, and their stabilities were calculated by geNorm, NormFinder and BestKeeper. When the results from the three algorithms were combined, BHLH was superior to the other candidates when all the experimental treatments were analyzed together; CLA and EF1 were also recommended by two of the three algorithms. As for specific conditions, EF1 under various developmental stages, SAND under biotic stress, CYP/GAPDH under drought stress, and TIP41 under salinity stress were generally considered suitable. All the algorithms agreed on the stability of SAND and GAPDH under cold stress, while only CYP was selected under heat stress by all of them. Additionally, the selection of optimal reference genes under biotic stress was further verified by analyzing the expression level of LrLOX in leaves inoculated with B. elliptica. This study should benefit future work on gene expression and molecular breeding of Lilium. PMID:27019788
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
A universal symmetry detection algorithm.
Maurer, Peter M
2015-01-01
Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.
Review of jet reconstruction algorithms
NASA Astrophysics Data System (ADS)
Atkin, Ryan
2015-10-01
Accurate jet reconstruction is necessary for understanding the link between the unobserved partons and the jets of observed collimated colourless particles the partons hadronise into. Understanding this link sheds light on the properties of these partons. A review of various common jet algorithms is presented, namely the Kt, Anti-Kt, Cambridge/Aachen, iterative cone and SIScone algorithms, highlighting their strengths and weaknesses. If one is interested in studying jets, the Anti-Kt algorithm is the best choice; however, if one's interest is in the jet substructure, then the Cambridge/Aachen algorithm would be the best option.
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Do You Understand Your Algorithms?
ERIC Educational Resources Information Center
Pickreign, Jamar; Rogers, Robert
2006-01-01
This article discusses relationships between the development of an understanding of algorithms and algebraic thinking. It also provides some sample activities for middle school teachers of mathematics to help promote students' algebraic thinking. (Contains 11 figures.)
Fibonacci Numbers and Computer Algorithms.
ERIC Educational Resources Information Center
Atkins, John; Geist, Robert
1987-01-01
The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
APL simulation of Grover's algorithm
NASA Astrophysics Data System (ADS)
Lipovaca, Samir
2012-02-01
Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations. Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, to experiment with Grover's algorithm we will simulate it using the APL programming language. The APL programming language is especially suited for this task. For example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need only iterate a single line of code N-1 times. Initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches 0.999 values, with slight variations at higher decimal places.
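The amplitude behaviour the abstract reports can be reproduced without matrices at all: starting from the uniform superposition (the result of the Walsh-Hadamard transform on |0...0>), each Grover iteration is an oracle phase flip followed by inversion about the mean. A sketch in Python rather than APL, with the qubit count and marked index as illustrative choices:

```python
import math

def grover_amplitudes(n_qubits, marked):
    """Simulate Grover's algorithm on the state vector directly."""
    N = 2 ** n_qubits
    state = [1.0 / math.sqrt(N)] * N                 # uniform superposition
    for _ in range(int(math.pi / 4 * math.sqrt(N))):  # ~ (pi/4) * sqrt(N) iterations
        state[marked] = -state[marked]                # oracle: phase-flip the target
        mean = sum(state) / N
        state = [2.0 * mean - a for a in state]       # diffusion: invert about mean
    return state

amps = grover_amplitudes(8, marked=5)
# the marked amplitude approaches 1 (0.999...), matching the abstract's observation
```

Here the search space has N = 256 states, and the marked amplitude after the optimal number of iterations indeed sits above 0.999, with all other amplitudes strongly suppressed.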
NASA Astrophysics Data System (ADS)
Rao, Sailesh K.; Kailath, T.
1986-07-01
In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite, and Karp, Miller and Winograd to the present-day concern of systolic array design.
Programming the gradient projection algorithm
NASA Technical Reports Server (NTRS)
Hargrove, A.
1983-01-01
The gradient projection method of numerical optimization, which is applied to problems having linear constraints but nonlinear objective functions, is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results, a digital computer is used to simulate the algorithm.
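The core step of gradient projection, taking a gradient step and then projecting back onto the feasible set, is easy to sketch for the special case of simple bound constraints; the quadratic objective, bounds, and step size below are illustrative assumptions, not the report's test problem.

```python
def project(x, lo, hi):
    """Project a point onto the box lo <= x <= hi, coordinate by coordinate."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimize a smooth objective over a box via projected gradient steps."""
    x = project(x0, lo, hi)
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# minimize (x - 3)^2 + (y + 1)^2 subject to 0 <= x <= 2, 0 <= y <= 2;
# the unconstrained minimum (3, -1) lies outside the box
grad = lambda v: [2 * (v[0] - 3), 2 * (v[1] + 1)]
x_star = gradient_projection(grad, [1.0, 1.0], lo=[0, 0], hi=[2, 2])
# the iterates settle on the boundary point (2, 0)
```

For general linear constraints the projection itself becomes a subproblem, which is where the auxiliary methods mentioned in the abstract come in.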
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
Inversion Algorithms for Geophysical Problems
1987-12-16
Lanzano, Paolo. Inversion Algorithms for Geophysical Problems (U). Final report, NRL Memorandum Report 6138, Naval Research Laboratory, Washington, DC 20375-5000.
Label Ranking Algorithms: A Survey
NASA Astrophysics Data System (ADS)
Vembu, Shankar; Gärtner, Thomas
Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification, and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature due, in part, to this versatile nature of the problem. In this paper, we survey these algorithms.
A retrodictive stochastic simulation algorithm
Vaughan, T. G.; Drummond, P. D.; Drummond, A. J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
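The retrodictive algorithm above complements the standard predictive stochastic simulation (Gillespie) algorithm, which steps a master equation forward by drawing exponential waiting times and choosing events by propensity. A sketch of that forward algorithm for a single-species birth-death process; the rates and horizon are illustrative assumptions, not the paper's examples.

```python
import random

def gillespie(n0, birth, death, t_end, rng):
    """Predictive SSA for dN: birth at constant rate, death at rate death * N."""
    t, n = 0.0, n0
    while True:
        rates = [birth, death * n]
        total = sum(rates)
        if total == 0:
            return n
        t += rng.expovariate(total)            # exponential time to next event
        if t > t_end:
            return n
        if rng.random() * total < rates[0]:    # choose the event by propensity
            n += 1
        else:
            n -= 1

rng = random.Random(0)
final = [gillespie(n0=10, birth=2.0, death=0.1, t_end=50.0, rng=rng)
         for _ in range(200)]
# the stationary mean population is birth / death = 20
```

The retrodictive variant in the abstract runs the analogous sampling logic backward from a known final state to infer likely initial states.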
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.
Rotational Invariant Dimensionality Reduction Algorithms.
Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David
2016-06-30
A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm are proposed for linear dimensionality reduction. Since the L₂,₁-norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous L₂ norm based subspace learning algorithms.
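The L₂,₁ norm that underlies these methods has a simple form: the sum of the Euclidean norms of the matrix rows. Unlike the squared Frobenius criterion, an outlier row contributes only linearly, which is the source of the robustness claimed above. A minimal sketch with an illustrative matrix:

```python
import math

def l21_norm(M):
    """L2,1 norm of a matrix given as a list of rows:
    the sum of the Euclidean (L2) norms of the rows."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in M)

M = [
    [3.0, 4.0],    # row norm 5
    [0.0, 0.0],    # row norm 0
    [5.0, 12.0],   # row norm 13
]
total = l21_norm(M)   # 5 + 0 + 13 = 18
```

In the squared-L₂ criterion the last row would contribute 169 rather than 13, so a single corrupted row can dominate the objective; under the L₂,₁ norm its influence grows only linearly.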
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternating between these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
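The conventional SA loop described in the abstract can be sketched compactly: propose a nearby configuration, always accept improvements, accept worse moves with a Boltzmann probability, and cool the temperature. The 1-D multimodal objective, proposal width, and cooling schedule below are illustrative assumptions, not the RBSA implementation.

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cool=0.995, steps=5000, rng=random):
    """Minimize f over the reals with a conventional SA loop."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)          # propose a nearby configuration
        fc = f(cand)
        # accept improvements always; accept worse moves with probability
        # exp(-(fc - fx) / temp), in analogy to annealing in metals
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cool                                # lower the annealing temperature
    return best, fbest

rng = random.Random(42)
# multimodal toy objective with several local minima
best, fbest = simulated_annealing(lambda x: x * x + 2 * math.sin(5 * x), 4.0, rng=rng)
```

RBSA's contribution is to wrap a recursive-branching structure around this basic loop so several such searches explore shrinking regions of the parameter space in parallel.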
2012-01-01
Background Expression levels for genes of interest must be normalized with an appropriate reference, or housekeeping gene, to make accurate comparisons of quantitative real-time PCR results. The purpose of this study was to identify the most stable housekeeping genes in porcine articular cartilage subjected to a mechanical injury from a panel of 10 candidate genes. Results Ten candidate housekeeping genes were evaluated in three different treatment groups of mechanically impacted porcine articular cartilage. The genes evaluated were: beta actin, beta-2-microglobulin, glyceraldehyde-3-phosphate dehydrogenase, hydroxymethylbilane synthase, hypoxanthine phosphoribosyl transferase, peptidylprolyl isomerase A (cyclophilin A), ribosomal protein L4, succinate dehydrogenase flavoprotein subunit A, TATA box binding protein, and tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein—zeta polypeptide. The stability of the genes was measured using geNorm, BestKeeper, and NormFinder software. The four most stable genes measured via geNorm were (most to least stable) succinate dehydrogenase flavoprotein, subunit A, peptidylprolyl isomerase A, glyceraldehyde-3-phosphate dehydrogenase, beta actin; the four most stable genes measured via BestKeeper were glyceraldehyde-3-phosphate dehydrogenase, peptidylprolyl isomerase A, beta actin, succinate dehydrogenase flavoprotein, subunit A; and the four most stable genes measured via NormFinder were peptidylprolyl isomerase A, succinate dehydrogenase flavoprotein, subunit A, glyceraldehyde-3-phosphate dehydrogenase, beta actin. Conclusions BestKeeper, geNorm, and NormFinder all generated similar results for the most stable genes in porcine articular cartilage. The use of these appropriate reference genes will facilitate accurate gene expression studies of porcine articular cartilage and suggest appropriate housekeeping genes for articular cartilage studies in other species. PMID:23146128
López-Landavery, Edgar A; Portillo-López, Amelia; Gallardo-Escárate, Cristian; Del Río-Portilla, Miguel A
2014-10-10
The red abalone Haliotis rufescens is one of the most important species for aquaculture in Baja California, México, and despite this, few gene expression studies have been done in tissues such as gill, head and gonad. For this purpose, reverse transcription and quantitative real time PCR (RT-qPCR) is a powerful tool for gene expression evaluation. For a reliable analysis, however, it is necessary to select and validate housekeeping genes that allow proper transcription quantification. Stability of nine housekeeping genes (ACTB, BGLU, TUBB, CY, GAPDH, HPRTI, RPL5, SDHA and UBC) was evaluated in different tissues of red abalone (gill, head and gonad/digestive gland). Four-fold serial dilutions of cDNA (from 25 ng μL⁻¹ to 0.39 ng μL⁻¹) were used to prepare the standard curve, which showed gene efficiencies between 0.95 and 0.99, with R² = 0.99. geNorm and NormFinder analysis showed that RPL5 and CY were the most stable genes considering all tissues, whereas in gill HPRTI and BGLU were most stable. In gonad/digestive gland, RPL5 and TUBB were the most stable genes with geNorm, while SDHA and HPRTI were the best using NormFinder. Similarly, in head the best genes were RPL5 and UBC with geNorm, and GAPDH and CY with NormFinder. The technical variability analysis with RPL5 and abalone gonad/digestive gland tissue indicated high repeatability, with a variation coefficient within groups ≤ 0.56% and between groups ≤ 1.89%. These results will support further research in reproduction, thermoregulation and endocrinology in red abalone.
Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei
2015-01-01
Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were finally selected and validated under different experimental treatments, at different seed development stages and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was executed by the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software in all experimental samples. To verify the validation of RGs selected by the two programs, the expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was executed using different RGs in RT-qPCR experiments for normalization. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used. However, the differences were detected using the most unstable reference genes. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
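Of the techniques reviewed above, certainty factors have the most compact combination rule: the classical MYCIN-style formula for merging two pieces of evidence about the same assertion. A sketch, shown as one concrete instance of an approximate-reasoning rule (the example values are illustrative):

```python
def combine_cf(a, b):
    """Classical MYCIN-style combination of two certainty factors in [-1, 1]."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)                   # reinforcing positive evidence
    if a <= 0 and b <= 0:
        return a + b * (1 + a)                   # reinforcing negative evidence
    return (a + b) / (1 - min(abs(a), abs(b)))   # conflicting evidence

# two independent pieces of positive evidence reinforce each other:
combined = combine_cf(0.6, 0.5)   # 0.6 + 0.5 * 0.4 = 0.8
```

Note the rule is commutative and keeps results in [-1, 1], but it implicitly assumes the evidence sources are independent, which is exactly the kind of condition the mathematically rigorous algorithms above make explicit.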
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it possible to reduce the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
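The contrast between the two scheduling styles can be sketched directly: a conflict-aware pass places every request and records the overlaps, and a conflict-free schedule then falls out by dropping the lower-priority member of each conflict. The request records, antenna names, and priority scheme below are illustrative assumptions, not the DSN system's data model.

```python
def overlaps(a, b):
    """True if two time windows [start, end) overlap."""
    return a["start"] < b["end"] and b["start"] < a["end"]

def conflict_aware(requests):
    """Schedule every request; return all of them plus the conflicting pairs."""
    conflicts = []
    for i, a in enumerate(requests):
        for b in requests[i + 1:]:
            if a["antenna"] == b["antenna"] and overlaps(a, b):
                conflicts.append((a["id"], b["id"]))
    return requests, conflicts

def conflict_free(requests):
    """Derive a conflict-free schedule by dropping the lower-priority request
    from each identified conflict."""
    scheduled, conflicts = conflict_aware(requests)
    losers = set()
    for x, y in conflicts:
        lo = min((r for r in scheduled if r["id"] in (x, y)),
                 key=lambda r: r["priority"])
        losers.add(lo["id"])
    return [r for r in scheduled if r["id"] not in losers]

reqs = [
    {"id": "m1", "antenna": "DSS-14", "start": 0, "end": 4, "priority": 2},
    {"id": "m2", "antenna": "DSS-14", "start": 3, "end": 6, "priority": 1},
    {"id": "m3", "antenna": "DSS-43", "start": 0, "end": 5, "priority": 1},
]
scheduled, conflicts = conflict_aware(reqs)
```

Keeping the conflict list, rather than silently unscheduling m2, is what gives the missions something concrete to negotiate over.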
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
Metal detector depth estimation algorithms
NASA Astrophysics Data System (ADS)
Marble, Jay; McMichael, Ian
2009-05-01
This paper looks at depth estimation techniques using electromagnetic induction (EMI) metal detectors. Four algorithms are considered. The first utilizes a vertical gradient sensor configuration. The second is a dual-frequency approach. The third makes use of dipole and quadrupole receiver configurations. The fourth looks at coils of different sizes. Each algorithm is described along with its associated sensor. Two figures of merit ultimately define algorithm/sensor performance. The first is the depth of penetration obtainable. (That is, the maximum detection depth obtainable.) This describes the performance of the method to achieve detection of deep targets. The second is the achievable statistical depth resolution. This resolution describes the precision with which depth can be estimated. In this paper depth of penetration and statistical depth resolution are qualitatively determined for each sensor/algorithm. A scientific method is used to make these assessments. A field test was conducted using 2 lanes with emplaced UXO. The first lane contains 155 shells at increasing depths from 0" to 48". The second is more realistic, containing objects of varying size. The first lane is used for algorithm training purposes, while the second is used for testing. The metal detectors used in this study are the: Geonics EM61, Geophex GEM5, Minelab STMR II, and the Vallon VMV16.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
Parallel job-scheduling algorithms
Rodger, S.H.
1989-01-01
In this thesis, we consider solving job scheduling problems on the CREW PRAM model. We show how to adapt Cole's pipeline merge technique to yield several efficient parallel algorithms for a number of job scheduling problems and one optimal parallel algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and processing times, find a schedule that minimizes the maximum lateness of the jobs and allows preemption when the jobs are scheduled to run on one machine. In addition, we present the first NC algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and unit processing times, determine if there is a schedule of jobs on one machine, and calculate the schedule if it exists. We identify the notion of a canonical schedule, which is the type of schedule our algorithm computes if there is a schedule. Our algorithm runs in O((log n)^2) time and uses O(n^2 k^2) processors, where k is the minimum number of distinct offsets of release times or deadlines.
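On a single machine, the scheduling problem the abstract describes (release times, deadlines, preemption, minimize maximum lateness) is solved optimally by preemptive earliest-deadline-first. The following Python sketch is our sequential illustration of that problem, not the thesis's parallel CREW PRAM algorithm:

```python
import heapq

def edf_max_lateness(jobs):
    """Preemptive earliest-deadline-first schedule on one machine.

    jobs: list of (release_time, deadline, processing_time).
    Returns the maximum lateness (completion time minus deadline);
    preemptive EDF minimizes this quantity in the single-machine setting.
    """
    jobs = sorted(jobs)                      # order by release time
    ready = []                               # heap of (deadline, remaining work)
    t, i, max_late = 0, 0, float("-inf")
    while i < len(jobs) or ready:
        if not ready:                        # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            r, d, p = jobs[i]
            heapq.heappush(ready, (d, p))
            i += 1
        d, rem = heapq.heappop(ready)        # run the most urgent job
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)          # until it finishes or a release preempts it
        t += run
        if rem - run > 0:
            heapq.heappush(ready, (d, rem - run))
        else:
            max_late = max(max_late, t - d)
    return max_late
```

For example, with jobs `[(0, 10, 3), (1, 4, 2), (2, 6, 2)]` the first job is preempted twice by tighter deadlines and the maximum lateness is -1 (every job finishes early).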
Chapman, Joanne R; Helin, Anu S; Wille, Michelle; Atterby, Clara; Järhult, Josef D; Fridlund, Jimmy S; Waldenström, Jonas
2016-01-01
Determining which reference genes have the highest stability, and are therefore appropriate for normalising data, is a crucial step in the design of real-time quantitative PCR (qPCR) gene expression studies. This is particularly warranted in non-model and ecologically important species for which appropriate reference genes are lacking, such as the mallard--a key reservoir of many diseases with relevance for human and livestock health. Previous studies assessing gene expression changes as a consequence of infection in mallards have nearly universally used β-actin and/or GAPDH as reference genes without confirming their suitability as normalisers. The use of reference genes at random, without regard for stability of expression across treatment groups, can result in erroneous interpretation of data. Here, eleven putative reference genes for use in gene expression studies of the mallard were evaluated, across six different tissues, using a low pathogenic avian influenza A virus infection model. Tissue type influenced the selection of reference genes, whereby different genes were stable in blood, spleen, lung, gastrointestinal tract and colon. β-actin and GAPDH generally displayed low stability and are therefore inappropriate reference genes in many cases. The use of different algorithms (GeNorm and NormFinder) affected stability rankings, but for both algorithms it was possible to find a combination of two stable reference genes with which to normalise qPCR data in mallards. These results highlight the importance of validating the choice of normalising reference genes before conducting gene expression studies in ducks. The fact that nearly all previous studies of the influence of pathogen infection on mallard gene expression have used a single, non-validated reference gene is problematic. The toolkit of putative reference genes provided here offers a solid foundation for future studies of gene expression in mallards and other waterfowl.
Lacerda, Ana L. M.; Fonseca, Leonardo N.; Blawid, Rosana; Boiteux, Leonardo S.; Ribeiro, Simone G.; Brasileiro, Ana C. M.
2015-01-01
Quantitative Polymerase Chain Reaction (qPCR) is currently the most sensitive technique used for absolute and relative quantification of a target gene transcript, requiring the use of appropriated reference genes for data normalization. To accurately estimate the relative expression of target tomato (Solanum lycopersicum L.) genes responsive to several virus species in reverse transcription qPCR analysis, the identification of reliable reference genes is mandatory. In the present study, ten reference genes were analyzed across a set of eight samples: two tomato contrasting genotypes (‘Santa Clara’, susceptible, and its near-isogenic line ‘LAM 157’, resistant); subjected to two treatments (inoculation with Tomato chlorotic mottle virus (ToCMoV) and its mock-inoculated control) and in two distinct times after inoculation (early and late). Reference genes stability was estimated by three statistical programs (geNorm, NormFinder and BestKeeper). To validate the results over broader experimental conditions, a set of ten samples, corresponding to additional three tomato-virus pathosystems that included tospovirus, crinivirus and tymovirus + tobamovirus, was analyzed together with the tomato-ToCMoV pathosystem dataset, using the same algorithms. Taking into account the combined analyses of the ranking order outputs from the three algorithms, TIP41 and EF1 were identified as the most stable genes for tomato-ToCMoV pathosystem, and TIP41 and EXP for the four pathosystems together, and selected to be used as reference in the forthcoming expression qPCR analysis of target genes in experimental conditions involving the aforementioned tomato-virus pathosystems. PMID:26317870
Using Alternative Multiplication Algorithms to "Offload" Cognition
ERIC Educational Resources Information Center
Jazby, Dan; Pearn, Cath
2015-01-01
When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…
Seamless Merging of Hypertext and Algorithm Animation
ERIC Educational Resources Information Center
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
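The activity-selection problem named in the abstract has a classic greedy solution whose correctness rests on exactly the kind of dominance argument described: the compatible activity finishing earliest dominates every alternative choice. A minimal Python sketch of that greedy (our illustration of the problem, not the authors' synthesis framework):

```python
def activity_selection(intervals):
    """Pick a maximum-size set of pairwise non-overlapping intervals by
    always taking the compatible activity that finishes earliest; a
    dominance relation shows no other choice can lead to a larger set."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:     # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```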
Next Generation Suspension Dynamics Algorithms
Schunk, Peter Randall; Higdon, Jonathon; Chen, Steven
2014-12-01
This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.
Optimizing connected component labeling algorithms
NASA Astrophysics Data System (ADS)
Wu, Kesheng; Otoo, Ekow; Shoshani, Arie
2005-04-01
This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among them to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer based rooted trees speeds up the connected component labeling algorithms by a factor of 5 to 100 in our tests on random binary images.
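The second strategy can be illustrated with a small two-pass labeling routine. The following Python sketch (ours, simplified to 4-connectivity rather than the paper's 8-connectivity) keeps the label-equivalence classes in a flat array indexed by label instead of pointer-based rooted trees, and renumbers the final labels consecutively:

```python
def label_components(img):
    """Two-pass 4-connected component labeling of a binary image.
    Label equivalences live in the flat array `parent` (a rooted
    forest stored by index), not in pointer-based tree nodes."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = [0]                              # parent[l] = representative of label l

    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]     # path halving keeps trees shallow
            l = parent[l]
        return l

    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if not img[r][c]:
                continue
            up = labels[r - 1][c] if r else 0
            left = labels[r][c - 1] if c else 0
            if not up and not left:           # new provisional label
                parent.append(next_label)
                labels[r][c] = next_label
                next_label += 1
            else:                             # adopt smallest root, merge the rest
                cands = [l for l in (up, left) if l]
                root = min(find(l) for l in cands)
                labels[r][c] = root
                for l in cands:
                    parent[find(l)] = root
    # second pass: flatten equivalences and emit consecutive final labels
    final = {}
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                root = find(labels[r][c])
                labels[r][c] = final.setdefault(root, len(final) + 1)
    return labels
```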
Learning with the ratchet algorithm.
Hush, D. R.; Scovel, James C.
2003-01-01
This paper presents a randomized algorithm called Ratchet that asymptotically minimizes (with probability 1) functions that satisfy a positive-linear-dependent (PLD) property. We establish the PLD property and a corresponding realization of Ratchet for a generalized loss criterion for both linear machines and linear classifiers. We describe several learning criteria that can be obtained as special cases of this generalized loss criterion, e.g. classification error, classification loss and weighted classification error. We also establish the PLD property and a corresponding realization of Ratchet for the Neyman-Pearson criterion for linear classifiers. Finally we show how, for linear classifiers, the Ratchet algorithm can be derived as a modification of the Pocket algorithm.
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Two Algorithms for Processing Electronic Nose Data
NASA Technical Reports Server (NTRS)
Young, Rebecca; Linnell, Bruce
2007-01-01
Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
Adaptive-feedback control algorithm.
Huang, Debin
2006-06-01
This paper gives detailed proofs of, and some interesting remarks on, the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves in detail the rigor of this algorithm from a mathematical viewpoint, and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, which shows that some techniques proposed in the literature are less rigorous and ineffective in some cases.
ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.
Claire, Robert W.
1984-01-01
An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
Deceptiveness and genetic algorithm dynamics
Liepins, G.E.; Vose, M.D.
1990-01-01
We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection and recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.
An algorithm for haplotype analysis
Lin, Shili; Speed, T.P.
1997-12-01
This paper proposes an algorithm for haplotype analysis based on a Monte Carlo method. Haplotype configurations are generated according to the distribution of joint haplotypes of individuals in a pedigree given their phenotype data, via a Markov chain Monte Carlo algorithm. The haplotype configuration which maximizes this conditional probability distribution can thus be estimated. In addition, the set of haplotype configurations with relatively high probabilities can also be estimated as possible alternatives to the most probable one. This flexibility enables geneticists to choose the haplotype configurations which are most reasonable to them, allowing them to include their knowledge of the data under analysis. 18 refs., 2 figs., 1 tab.
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
Gossip algorithms in quantum networks
NASA Astrophysics Data System (ADS)
Siomau, Michael
2017-01-01
"Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This speeds up the quantum information dissemination, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.
Cognitive Algorithms for Signal Processing
2011-03-18
AFRL-RY-HS-TR-2011-0013. Abstract (only a fragment survives the garbled extraction): Processes in the mind: perception, cognition...
Coagulation algorithms with size binning
NASA Technical Reports Server (NTRS)
Statton, David M.; Gans, Jason; Williams, Eric
1994-01-01
The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
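The "standard form" referred to above, with sizes restricted to integer multiples of the monomer, can be advanced with a simple explicit Euler step. The Python sketch below is our illustration of that baseline form, not the paper's geometric-binning algorithms; the kernel `K` and time step are placeholders:

```python
def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation.
    n[k] is the concentration of (k+1)-mers; K(i, j) is the coagulation
    kernel for an i-mer meeting a j-mer. Gain terms carry the usual 1/2
    factor because each (i, j) collision is visited twice in the loop."""
    kmax = len(n)
    dn = [0.0] * kmax
    for i in range(kmax):
        for j in range(kmax):
            rate = K(i + 1, j + 1) * n[i] * n[j]
            dn[i] -= rate                      # the i-mer is consumed
            if i + j + 1 < kmax:
                dn[i + j + 1] += 0.5 * rate    # an (i+j+2)-sized particle forms
    return [ni + dt * d for ni, d in zip(n, dn)]
```

With a constant kernel and a pure-monomer initial condition, one step moves mass from monomers into dimers while conserving total mass (up to the truncation at `kmax`).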
Adaptive protection algorithm and system
Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
Genetic Algorithms: A gentle introduction
Jong, K.D.
1994-12-31
Information is presented on genetic algorithms in outline form. The following topics are discussed: how are new samples generated, a genotypic viewpoint, a phenotypic viewpoint, an optimization viewpoint, an intuitive view, parameter optimization problems, evolving production rates, genetic programming, GAs and NNs, formal analysis, Lemmas and theorems, discrete Walsh transforms, deceptive problems, Markov chain analysis, and PAC learning analysis.
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return mission was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.
Algorithms, complexity, and the sciences
Papadimitriou, Christos
2014-01-01
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
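The MWU rule mentioned above is simple to state: each option's weight is multiplied by a factor that shrinks with its loss, so weight concentrates on low-cumulative-loss options. A minimal Python sketch of the standard update (our illustration, not the paper's population-genetics equivalence):

```python
def mwu(losses, eta=0.5):
    """Multiplicative weights update over a fixed set of options.
    losses[t][i] is the loss of option i in round t (assumed in [0, 1]);
    returns the final normalized weight vector, which concentrates on
    options with low cumulative loss."""
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * (1.0 - eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

For example, after three rounds in which option 0 never loses and option 1 always loses, the weights are [8/9, 1/9] with `eta=0.5`.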
The minimal time detection algorithm
NASA Technical Reports Server (NTRS)
Kim, Sungwan
1995-01-01
An aerospace vehicle may operate throughout a wide range of flight environmental conditions that affect its dynamic characteristics. Even when the control design incorporates a degree of robustness, system parameters may drift enough to cause its performance to degrade below an acceptable level. The object of this paper is to develop a change detection algorithm so that we can build a highly adaptive control system applicable to aircraft systems. The idea is to detect system changes with minimal time delay. The algorithm developed is called the Minimal Time-Change Detection Algorithm (MT-CDA), which detects the instant of change as quickly as possible with false-alarm probability below a certain specified level. Simulation results for the aircraft lateral motion with a known or unknown change in control gain matrices, in the presence of a doublet input, indicate that the algorithm works fairly well as theory indicates, though there is a difficulty in deciding the exact amount of change in some situations. One of MT-CDA's distinguishing properties is that its detection delay is superior to that of the Whiteness Test.
Fission Reaction Event Yield Algorithm
Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona; Roundrup, Jorgen
2016-05-31
FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).
Associative Algorithms for Computational Creativity
ERIC Educational Resources Information Center
Varshney, Lav R.; Wang, Jun; Varshney, Kush R.
2016-01-01
Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…
Key Concepts in Informatics: Algorithm
ERIC Educational Resources Information Center
Szlávi, Péter; Zsakó, László
2014-01-01
"The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.
Understanding Algorithms in Different Presentations
ERIC Educational Resources Information Center
Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János
2015-01-01
Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…
Listless zerotree image compression algorithm
NASA Astrophysics Data System (ADS)
Lian, Jing; Wang, Ke
2006-09-01
In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image qualities. Moreover, the lists in SPIHT are replaced by flag maps, and lifting scheme is adopted to realize wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.
An Algorithm for Suffix Stripping
ERIC Educational Resources Information Center
Porter, M. F.
2006-01-01
Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…
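The flavor of suffix stripping can be conveyed with a toy rule table tried longest-suffix-first. The Python sketch below is our simplified stand-in, not Porter's actual algorithm: the real stemmer conditions each rule on a measure of vowel-consonant sequences in the stem, for which a plain length check substitutes here:

```python
def strip_suffix(word):
    """Toy suffix stripper in the spirit of Porter's algorithm: try
    rules longest-suffix-first, applying a rule only when the stem
    left behind is long enough to plausibly be a word."""
    rules = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]
    for suffix, replacement in rules:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + replacement
    return word
```

The length guard is what keeps "sing" intact while "hopping" loses its "ing"; Porter's measure-based conditions play this role far more carefully.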
Threshold extended ID3 algorithm
NASA Astrophysics Data System (ADS)
Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.
2012-04-01
Providing authentication and confidentiality for information exchanged over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Linear Bregman algorithm implemented in parallel GPU
NASA Astrophysics Data System (ADS)
Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping
2015-08-01
At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run in real time on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
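The linear(ized) Bregman iteration referred to above really does consist of nothing but matrix-vector products plus a soft-thresholding step, which is what makes it so GPU-friendly; a serial NumPy sketch (step size and iteration count are illustrative assumptions, not values from the paper):

```python
import numpy as np

def shrink(x, mu):
    """Soft thresholding: the only nonlinear operation in the iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, b, mu=0.5, iters=500):
    """Recover a sparse u with A @ u ~= b via linearized Bregman iteration."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below spectral bound
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        u = delta * shrink(v, mu)
        v += A.T @ (b - A @ u)                # only mat-vec products
    return delta * shrink(v, mu)
```

On a GPU, the two matrix-vector products and the elementwise `shrink` parallelize directly across vector components.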
Algorithms, modelling and VO₂ kinetics.
Capelli, Carlo; Cautero, Michela; Pogliaghi, Silvia
2011-03-01
This article summarises the pros and cons of different algorithms developed for estimating breath-by-breath (B-by-B) alveolar O2 transfer (VO2A) in humans. VO2A is the difference between O2 uptake at the mouth and the change in alveolar O2 stores (ΔVO2s), which, for any given breath, is equal to the alveolar volume change at constant O2 fraction (FA,iO2 · ΔVA,i) plus the alveolar O2 fraction change at constant volume (VA,i−1 · (FA,iO2 − FA,i−1O2)), where VA,i−1 is the alveolar volume at the beginning of a breath. Therefore, VO2A can be determined B-by-B provided that VA,i−1 is: (a) set equal to the subject's functional residual capacity (algorithm of Auchincloss, A) or to zero; (b) measured (optoelectronic plethysmography, OEP); or (c) selected according to a procedure that minimises B-by-B variability (algorithm of Busso and Robbins, BR). Alternatively, the respiratory cycle can be redefined as the time between equal FO2 in two subsequent breaths (algorithm of Grønlund, G), making any assumption about VA,i−1 unnecessary. All the above methods allow an unbiased estimate of VO2 at steady state, albeit with different precision. Yet the algorithms per se affect the parameters describing the B-by-B kinetics during exercise transitions. Among these approaches, BR and G, by increasing the signal-to-noise ratio of the measurements, reduce the number of exercise repetitions necessary to study VO2 kinetics compared with the A approach. OEP and G (though technically challenging and conceptually still debated), thanks to their ability to track ΔVO2s changes during the early phase of exercise transitions, appear rather promising for investigating B-by-B gas exchange.
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and for the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
Space complexity of estimation of distribution algorithms.
Gao, Yong; Culberson, Joseph
2005-01-01
In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure.
Higher-order force gradient symplectic algorithms
NASA Astrophysics Data System (ADS)
Chin, Siu A.; Kidwell, Donald W.
2000-12-01
We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are better by approximately a factor of 10^3, 10^4, 10^4, and 10^5, respectively.
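For reference, the standard Forest-Ruth scheme that these force-gradient algorithms are compared against can be written in a few lines; the sketch below integrates a harmonic oscillator and exhibits the bounded energy error typical of symplectic methods (the test problem and step size are my own choices, not the paper's):

```python
# Forest-Ruth composition constant for the 4th-order scheme
THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))

def forest_ruth_step(x, v, h, accel):
    """One 4th-order Forest-Ruth step for x'' = accel(x)."""
    x += THETA * h / 2 * v
    v += THETA * h * accel(x)
    x += (1 - THETA) * h / 2 * v
    v += (1 - 2 * THETA) * h * accel(x)
    x += (1 - THETA) * h / 2 * v
    v += THETA * h * accel(x)
    x += THETA * h / 2 * v
    return x, v

# Harmonic oscillator: for x0 = 1, v0 = 0 the energy should stay near 0.5.
x, v = 1.0, 0.0
for _ in range(10000):
    x, v = forest_ruth_step(x, v, 0.01, lambda q: -q)
energy = 0.5 * (x * x + v * v)
```

Note the middle substep has a negative coefficient (1 − 2θ < 0); the force-gradient schemes discussed in the abstract are partly motivated by avoiding such backward substeps.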
Why is Boris Algorithm So Good?
Qin, Hong; et al.
2013-03-03
Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
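A minimal non-relativistic Boris push (the notation and step structure are the standard ones, sketched generically rather than taken from the letter). The half-kick / rotation / half-kick structure is what yields the phase-space-volume conservation discussed above; with E = 0 the magnetic rotation preserves the speed exactly.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Advance a charged particle's position and velocity by one time step."""
    v_minus = v + 0.5 * q_over_m * dt * E        # first half electric kick
    t = 0.5 * q_over_m * dt * B                  # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)     # magnetic rotation...
    v_plus = v_minus + np.cross(v_prime, s)      # ...completed; |v| preserved
    v_new = v_plus + 0.5 * q_over_m * dt * E     # second half electric kick
    return x + dt * v_new, v_new
```

Because the rotation is exactly norm-preserving, a particle gyrating in a pure magnetic field keeps its kinetic energy to round-off over arbitrarily many steps, which is the long-term behavior the letter sets out to explain.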
Treatment algorithms in refractory partial epilepsy.
Jobst, Barbara C
2009-09-01
An algorithm is a "step-by-step procedure for solving a problem or accomplishing some end....in a finite number of steps." (Merriam-Webster, 2009). Medical algorithms are decision trees to help with diagnostic and therapeutic decisions. For the treatment of epilepsy there is no generally accepted treatment algorithm, as individual epilepsy centers follow different diagnostic and therapeutic guidelines. This article presents two algorithms to guide decisions in the treatment of refractory partial epilepsy. The treatment algorithm describes a stepwise diagnostic and therapeutic approach to intractable medial temporal and neocortical epilepsy. The surgical algorithm guides decisions in the surgical treatment of neocortical epilepsy.
Algorithmic methods in diffraction microscopy
NASA Astrophysics Data System (ADS)
Thibault, Pierre
Recent diffraction imaging techniques use properties of coherent sources (most notably x-rays and electrons) to transfer a portion of the imaging task to computer algorithms. "Diffraction microscopy" is a method which consists in reconstructing the image of a specimen from its diffraction pattern. Because only the amplitude of a wavefield incident on a detector is measured, reconstruction of the image entails recovering the lost phases. This extension of the "phase problem" commonly met in crystallography is solved only if additional information is available. The main topic of this thesis is the development of algorithmic techniques in diffraction microscopy. In addition to introducing new methods, it is meant to be a review of the algorithmic aspects of the field of diffractive imaging. An overview of the scattering approximations used in the interpretation of diffraction datasets is first given, as well as a numerical propagation tool useful in conditions where known approximations fail. Concepts central to diffraction microscopy---such as oversampling---are then introduced and other similar imaging techniques described. A complete description of iterative reconstruction algorithms follows, with a special emphasis on the difference map, the algorithm used in this thesis. The formalism, based on constraint sets and projection onto these sets, is then defined and explained. Simple projections commonly used in diffraction imaging are then described. The various ways experimental realities can affect reconstruction methods are then enumerated. Among the diverse sources of algorithmic difficulties, one finds that noise, missing data and partial coherence are typically the most important. Other related difficulties discussed are the detrimental effects of crystalline domains in a specimen, and the convergence problems occurring when the support of a complex-valued specimen is not well known. The last part of this thesis presents reconstruction results; an
Molecular beacon sequence design algorithm.
Monroe, W Todd; Haselton, Frederick R
2003-01-01
A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance is based on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Algorithms for intravenous insulin delivery.
Braithwaite, Susan S; Clement, Stephen
2008-08-01
This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to
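As a toy illustration of the "IR"-type design described above (an incremental adjustment to the previous rate, increasing in both BG(current) and the rate-of-change of BG), with entirely made-up parameter values and function name, and no clinical validity:

```python
def ir_next(bg_current, bg_previous, minutes_between_tests, ir_previous,
            target=120.0, k_bg=0.01, k_rate=0.3):
    """Hypothetical IR-type rule: next infusion rate from BG level and trend.

    All constants are illustrative placeholders, not clinical values."""
    rate_of_change = (bg_current - bg_previous) / minutes_between_tests
    adjustment = k_bg * (bg_current - target) + k_rate * rate_of_change
    return max(0.0, ir_previous + adjustment)    # rates cannot be negative
```

An MR-type algorithm would instead use IR(previous) and the BG trend to select an iso-MR curve (a whole function of BG), then read IR(next) off that curve.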
MUSIC algorithms for rebar detection
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela
2013-12-01
The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
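A single-stage MUSIC pseudospectrum for a uniform linear array, sketched in NumPy (array geometry and scan grid are illustrative; the paper's two-stage refinement for weak scatterers is not shown). The steering vector's projection onto the noise subspace vanishes at source locations, so the reciprocal spectrum peaks there.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5):
    """MUSIC pseudospectrum over arrival angle for a uniform linear array.

    X: (n_sensors, n_snapshots) complex data; d: spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    _, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n - n_sources]               # noise-subspace eigenvectors
    angles = np.arange(-90.0, 90.0, 0.5)
    # Steering matrix: one column per candidate angle.
    a = np.exp(-2j * np.pi * d *
               np.outer(np.arange(n), np.sin(np.deg2rad(angles))))
    return angles, 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
```

The two-stage idea in the abstract would first run this with the strong scatterers, then project them out before scanning again for the weak ones.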
Innovations in Lattice QCD Algorithms
Konstantinos Orginos
2006-06-25
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.
A fast meteor detection algorithm
NASA Astrophysics Data System (ADS)
Gural, P.
2016-01-01
A low latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line-detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trades made for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
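The Maximum Temporal Pixel compression mentioned above collapses a stack of video frames to a single frame by keeping each pixel's maximum over the time window, so a moving meteor leaves a bright streak that a simple threshold can pick out; a minimal sketch (the details of Gural's implementation, including the threshold rule, are assumptions here):

```python
import numpy as np

def mtp_compress(frames):
    """Collapse a (T, H, W) frame stack to per-pixel temporal maxima."""
    return frames.max(axis=0)

def detect_streak(frames, sigma=2.0):
    """Threshold the MTP frame at mean + sigma * std as a crude detector."""
    mtp = mtp_compress(frames)
    return mtp > mtp.mean() + sigma * mtp.std()
```

A real pipeline would follow the thresholding with the clustering and line-detection stages described in the abstract.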
Introductory Students, Conceptual Understanding, and Algorithmic Success.
ERIC Educational Resources Information Center
Pushkin, David B.
1998-01-01
Addresses the distinction between conceptual and algorithmic learning and the clarification of what is meant by a second-tier student. Explores why novice learners in chemistry and physics are able to apply algorithms without significant conceptual understanding. (DDR)
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a coherence-optimized SAR imaging algorithm based on existing SAR imaging algorithms. The basic idea of SAR imaging processing is that the output signal can achieve the maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. A traditional imaging algorithm can acquire the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and finally an interferogram of high quality is obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed in experiments with this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
Teaching Multiplication Algorithms from Other Cultures
ERIC Educational Resources Information Center
Lin, Cheng-Yao
2007-01-01
This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
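One of the algorithms commonly taught in this context is the Russian peasant method, which multiplies two integers using only halving, doubling, and addition; a small sketch:

```python
def russian_peasant(a, b):
    """Multiply two non-negative integers by halving a and doubling b."""
    total = 0
    while a > 0:
        if a % 2 == 1:     # rows where a is odd contribute their b value
            total += b
        a //= 2            # halve a, discarding the remainder
        b *= 2             # double b
    return total
```

Correctness follows from writing `a` in binary: each odd step corresponds to a set bit, so the method sums `b` shifted by each set bit's position, which is exactly `a * b`.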
Algorithms Could Automate Cancer Diagnosis
NASA Technical Reports Server (NTRS)
Baky, A. A.; Winkler, D. G.
1982-01-01
Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.
Algorithms for Automated DNA Assembly
2010-01-01
…correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and… …to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with…
Algorithmic deformation of matrix factorisations
NASA Astrophysics Data System (ADS)
Carqueville, Nils; Dowdy, Laura; Recknagel, Andreas
2012-04-01
Branes and defects in topological Landau-Ginzburg models are described by matrix factorisations. We revisit the problem of deforming them and discuss various deformation methods as well as their relations. We have implemented these algorithms and apply them to several examples. Apart from explicit results in concrete cases, this leads to a novel way to generate new matrix factorisations via nilpotent substitutions, and to criteria whether boundary obstructions can be lifted by bulk deformations.
Consensus Algorithms Over Fading Channels
2010-10-01
…studying the effect of fading and collisions on the performance of wireless consensus gossiping and in comparing its cost (measured in terms of number of…) …not assumed to be symmetric under A2. III. RELATED WORK: There has been a resurgence of interest in characterizing consensus and gossip algorithms… …tree, and then distribute the consensus value, with a finite number of exchanges. The price paid is clearly that of finding the appropriate routing…
Numerical Algorithms and Parallel Tasking.
1984-07-01
Principal Investigator: Virginia Klema; Research Staff: George Cybenko and Elizabeth Ducot. During the period May 15, 1983 through May 14, 1984… …Virginia Klema and Elizabeth Ducot have been supported for four months, and George Cybenko has been supported for one month. During this time system… …algorithms or applications is the responsibility of the user. Virginia Klema and Elizabeth Ducot presented a description of the concurrent computing…
Network Games and Approximation Algorithms
2008-01-03
I also spent time during the last three years writing a textbook on Algorithm Design (with Jon Kleinberg) that has now been adopted by a number of… …Minimum-Size Bounded-Capacity Cut (MSBCC) problem, in which we are given a graph with an identified source and seek to find a cut minimizing the number… …Distributed Computing (Special Issue PODC 05), Volume 19, Number 4, 2007, 255-266.
QCCM Center for Quantum Algorithms
2008-10-17
…and A. Ekert, C. Macchiavello and M. Mosca, quant-ph/0609160v1. "Phase map decompositions for unitaries," Niel de Beaudrap, Vincent Danos, Elham… "Quantum Algorithms and Complexity," M. Mosca, Proceedings of NATO ASI Quantum Computation and Information 2005, Chania, Crete, Greece, IOS Press (2006), in press. "Quantum Cellular Automata and Single Spin Measurement," C. Perez, D. Cheung, M. Mosca, P. Cappellaro, D. Cory, Proceedings of Asian Conference on…
Parallel Algorithms for Image Analysis.
1982-06-01
Technical report TR-1180, "Parallel Algorithms for Image Analysis," by Azriel Rosenfeld, under grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.
Halftoning and Image Processing Algorithms
1999-02-01
…screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone… …image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to… …image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our…
Principles for Developing Algorithmic Instruction.
1978-12-01
…information-processing theories to test their applicability with instruction directed by learning algorithms. A version of a logical, or familiar, and a… …the intent of our research was to borrow from information-processing theory factors which are known to affect learning in a predictable manner and to apply… …learning studies where processing theories are tested by minute performance or latency differences. It is not surprising that differences are seldom found…
Global Positioning System Navigation Algorithms
1977-05-01
Historical Remarks on Navigation: In Greek mythology, Odysseus sailed safely by the Sirens only to encounter the monsters Scylla and Charybdis… Bibliography: 1. Pinsent, John. Greek Mythology. Paul Hamlyn, London, 1969. 2. Kline, Morris. Mathematical Thought from Ancient to… Abstract: The Global Positioning System (GPS) will be a constellation of…
Efficient GPS Position Determination Algorithms
2007-06-01
…Geometric Dilution of Precision (GDOP) conditions. The novel differential GPS algorithm for a network of users that has been developed in this research uses a… …performance is achieved, even under high Geometric Dilution of Precision (GDOP) conditions. The second part of this research investigates a… …with respect to the receiver produces high Geometric Dilution of Precision (GDOP), which can adversely affect GPS position solutions [1]. Four…
Algorithms for optimal redundancy allocation
Vandenkieboom, J.; Youngblood, R.
1993-01-01
Heuristic and exact methods for solving the redundancy allocation problem are compared to an approach based on genetic algorithms. The various methods are applied to the bridge problem, which has been used as a benchmark in earlier work on optimization methods. Comparisons are presented in terms of the best configuration found by each method and the computational effort necessary to find it.
An algorithm for segmenting range imagery
Roberts, R.S.
1997-03-01
This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.
Algorithms and Requirements for Measuring Network Bandwidth
Jin, Guojun
2002-12-08
This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic (thus estimating the physical bandwidth); it provides mathematical proof that the algorithms are accurate, and it addresses conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.
Efficient Algorithm for Rectangular Spiral Search
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Breckenridge, William
2008-01-01
An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
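The pattern can be sketched as a generator that walks an expanding square spiral from the grid origin. Unlike the NASA algorithm, which computes each point directly without remembering previous coordinates, this sketch keeps a little state, but it produces the same kind of indefinitely expanding outward coverage:

```python
from itertools import islice

def spiral_coords():
    """Yield grid coordinates in an outward square spiral from (0, 0)."""
    x = y = 0
    yield (x, y)
    step = 1
    while True:
        for _ in range(step):          # move right
            x += 1
            yield (x, y)
        for _ in range(step):          # move up
            y += 1
            yield (x, y)
        step += 1
        for _ in range(step):          # move left
            x -= 1
            yield (x, y)
        for _ in range(step):          # move down
            y -= 1
            yield (x, y)
        step += 1
```

The first 9 points cover the 3x3 block around the origin, the first 25 cover the 5x5 block, and so on, so the search can simply consume points until the target is found.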
The Cartan algorithm in five dimensions
NASA Astrophysics Data System (ADS)
McNutt, D. D.; Coley, A. A.; Forget, A.
2017-03-01
In this paper, we introduce an algorithm to determine the equivalence of five dimensional spacetimes, which generalizes the Karlhede algorithm for four dimensional general relativity. As an alternative to the Petrov type classification, we employ the alignment classification to algebraically classify the Weyl tensor. To illustrate the algorithm, we discuss three examples: the singly rotating Myers-Perry solution, the Kerr (Anti-) de Sitter solution, and the rotating black ring solution. We briefly discuss some applications of the Cartan algorithm in five dimensions.
Improved LMS algorithm for adaptive beamforming
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
Two adaptive algorithms which make use of all the available samples to estimate the required gradient are proposed and studied. The first algorithm is referred to as the recursive LMS (least mean squares) and is applicable to a general array. The second algorithm is referred to as the improved LMS algorithm and exploits the Toeplitz structure of the ACM (array correlation matrix); it can be used only for an equispaced linear array.
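The ordinary LMS recursion that both proposed variants extend updates the weight vector from the instantaneous error alone; a minimal system-identification sketch (the signal model and step size are illustrative, not from the report):

```python
import numpy as np

def lms_identify(w_true, mu=0.05, n_iter=5000, seed=0):
    """Track unknown weights w_true with the LMS rule w += mu * e * x."""
    rng = np.random.default_rng(seed)
    w = np.zeros_like(w_true)
    for _ in range(n_iter):
        x = rng.standard_normal(w_true.size)   # input snapshot
        d = w_true @ x                         # desired (reference) signal
        e = d - w @ x                          # instantaneous error
        w += mu * e * x                        # stochastic gradient step
    return w
```

The recursive and improved variants in the abstract differ in how the gradient is estimated (using all available samples, and exploiting the Toeplitz structure of the array correlation matrix), not in this basic update form.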
Screening Reliable Reference Genes for RT-qPCR Analysis of Gene Expression in Moringa oleifera
Deng, Li-Ting; Wu, Yu-Ling; Li, Jun-Cheng; OuYang, Kun-Xi; Ding, Mei-Mei; Zhang, Jun-Jie; Li, Shu-Qi; Lin, Meng-Fei; Chen, Han-Bin; Hu, Xin-Sheng; Chen, Xiao-Yang
2016-01-01
Moringa oleifera is a promising plant species for oil and forage, but its genetic improvement is limited. Our current breeding program in this species focuses on exploiting the functional genes associated with important agronomical traits. Here, we screened reliable reference genes for accurately quantifying the expression of target genes using the technique of real-time quantitative polymerase chain reaction (RT-qPCR) in M. oleifera. Eighteen candidate reference genes were selected from a transcriptome database, and their expression stabilities were examined in 90 samples collected from the pods in different developmental stages, various tissues, and the roots and leaves under different conditions (low or high temperature, sodium chloride (NaCl)- or polyethylene glycol (PEG)-simulated water stress). Analyses with the geNorm, NormFinder and BestKeeper algorithms revealed that the reliable reference genes differed across sample designs and that ribosomal protein L1 (RPL1) and acyl carrier protein 2 (ACP2) were the most suitable reference genes in all tested samples. The experimental results demonstrated the significance of using properly validated reference genes and suggested the use of more than one reference gene to achieve reliable expression profiles. In addition, we applied three isotypes of the superoxide dismutase (SOD) gene that are associated with plant adaptation to abiotic stress to confirm the efficacy of the validated reference genes under NaCl and PEG water stresses. Our results provide a valuable reference for future studies on identifying important functional genes from their transcriptional expressions via the RT-qPCR technique in M. oleifera. PMID:27541138
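The geNorm stability measure used in studies like this one, M, is the average pairwise variation of a gene's log-ratio with every other candidate across samples; lower M means more stable. A compact sketch of the measure (geNorm then iteratively drops the gene with the highest M, a loop omitted here):

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability measure M for each candidate reference gene.

    expr: (n_samples, n_genes) array of relative expression quantities."""
    log_q = np.log2(expr)
    n_genes = expr.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        # Std-dev of the pairwise log-ratio across samples, vs. each other gene.
        variations = [np.std(log_q[:, j] - log_q[:, k], ddof=1)
                      for k in range(n_genes) if k != j]
        M[j] = np.mean(variations)
    return M
```

Two genes whose expression rises and falls together have a near-constant log-ratio, hence low M, which is why the method recommends the geometric mean of the top two or three genes as the normalization factor.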
Gharbi, Sedigheh; Shamsara, Mehdi; Khateri, Shahriar; Soroush, Mohammad Reza; Ghorbanmehr, Nassim; Tavallaei, Mahmood; Nourani, Mohammad Reza; Mowla, Seyed Javad
2015-01-01
Objective In spite of accumulating information about pathological aspects of sulfur mustard (SM), the precise mechanism responsible for its effects is not well understood. Circulating microRNAs (miRNAs) are promising biomarkers for disease diagnosis and prognosis. Accurate normalization using appropriate reference genes is a critical step in miRNA expression studies. In this study, we aimed to identify appropriate reference genes for microRNA quantification in serum samples of SM victims. Materials and Methods In this case-control experimental study, using quantitative real-time polymerase chain reaction (qRT-PCR), we evaluated the suitability of a panel of small RNAs including SNORD38B, SNORD49A, U6, 5S rRNA, miR-423-3p, miR-191, miR-16 and miR-103 in sera of 28 SM-exposed veterans of the Iran-Iraq war (1980-1988) and 15 matched control volunteers. Different statistical algorithms including geNorm, NormFinder, BestKeeper and the comparative delta-quantification cycle (Cq) method were employed to find the least variable reference gene. Results miR-423-3p was identified as the most stably expressed reference gene, and miR-103 and miR-16 ranked after it. Conclusion We demonstrate that non-miRNA reference genes have the least stability in serum samples and that some house-keeping miRNAs may be used as more reliable reference genes for miRNAs in serum. In addition, using the geometric mean of two reference genes could increase the reliability of the normalizers. PMID:26464821
V. Patankar, Himanshu; M. Assaha, Dekoum V.; Al-Yahyai, Rashid; Sunkar, Ramanjulu
2016-01-01
Date palm is an important crop plant in the arid and semi-arid regions supporting human populations in the Middle East and North Africa. These areas have been largely affected by drought and salinity due to insufficient rainfall and improper irrigation practices. Date palm is a relatively salt- and drought-tolerant plant, and more recently efforts have been directed to identifying genes and pathways that confer stress tolerance in this species. Quantitative real-time PCR (qPCR) is a promising technique for the analysis of stress-induced differential gene expression, which involves the use of stable reference genes for normalizing gene expression. In an attempt to find the best reference genes for date palm drought and salinity research, we evaluated the stability of the 12 most commonly used reference genes using the geNorm, NormFinder and BestKeeper statistical algorithms and the comparative ΔCT method. The comprehensive results revealed that HEAT SHOCK PROTEIN (HSP), UBIQUITIN (UBQ) and YTH domain-containing family protein (YT521) were stable in drought-stressed leaves whereas GLYCERALDEHYDE-3-PHOSPHATE DEHYDROGENASE (GAPDH), ACTIN and TUBULIN were stable in drought-stressed roots. On the other hand, SMALL SUBUNIT RIBOSOMAL RNA (25S), YT521 and 18S ribosomal RNA (18S); and UBQ, ACTIN and ELONGATION FACTOR 1-ALPHA (eEF1a) were stable in leaves and roots, respectively, under salt stress. The stability of these reference genes was verified by using the abiotic stress-responsive CYTOSOLIC Cu/Zn SUPEROXIDE DISMUTASE (Cyt-Cu/Zn SOD), ABA RECEPTOR, and PROLINE TRANSPORTER 2 (PRO) genes. A combination of the top 2 or 3 stable reference genes was found to be suitable for normalization of the target gene expression and will facilitate gene expression analysis studies aimed at identifying functional genes associated with drought and salinity tolerance in date palm. PMID:27824922
Niu, Longjian; Tao, Yan-Bin; Chen, Mao-Sheng; Fu, Qiantang; Li, Chaoqiong; Dong, Yuling; Wang, Xiulan; He, Huiying; Xu, Zeng-Fu
2015-06-03
Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A22 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi.
Ji, Nanjing; Li, Ling; Lin, Lingxiao; Lin, Senjie
2015-01-01
The raphidophyte Heterosigma akashiwo is a globally distributed harmful alga that has been associated with fish kills in coastal waters. To understand the mechanisms of H. akashiwo bloom formation, gene expression analysis is often required. To accurately characterize the expression levels of a gene of interest, proper reference genes are essential. In this study, we assessed ten of the previously reported algal candidate genes (rpL17-2, rpL23, cox2, cal, tua, tub, ef1, 18S, gapdh, and mdh) for their suitability as reference genes in this species. We used qRT-PCR to quantify the expression levels of these genes in H. akashiwo grown under different temperatures, light intensities, nutrient concentrations, and time points over a diel cycle. The expression stability of these genes was evaluated using the geNorm and NormFinder algorithms. Although none of these genes exhibited invariable expression levels, cal, tub, rpL17-2 and rpL23 expression levels were the most stable across the different conditions tested. For further validation, these selected genes were used to normalize the expression levels of ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (HrbcL) over a diel cycle. Results showed that the expression of HrbcL normalized against each of these reference genes was the highest at midday and lowest at midnight, similar to the diel patterns typically documented for this gene in algae. While the validated reference genes will be useful for future gene expression studies on H. akashiwo, we expect that the procedure used in this study may be helpful to future efforts to screen reference genes for other algae.
Gong, Lei; Yang, Yajun; Chen, Yuchao; Shi, Jing; Song, Yuxia; Zhang, Hongxia
2016-01-01
For quantitative real-time PCR (qRT-PCR) analysis, the key prerequisite that determines result accuracy is the selection of appropriate reference gene(s). Goji (Lycium barbarum L.) is a multi-branched shrub belonging to the Solanaceae family. To date, no systematic screening or evaluation of reference gene(s) in Goji has been performed. In this work, we identified 18 candidate reference genes from the transcriptomic sequencing data of 14 samples of Goji at different developmental stages and under drought stress condition. The expression stability of these candidate genes was rigorously analyzed using qRT-PCR and four different statistical algorithms: geNorm, BestKeeper, NormFinder and RefFinder. Two novel reference genes LbCML38 and LbRH52 showed the most stable expression, whereas the traditionally used reference genes such as LbGAPDH, LbHSP90 and LbTUB showed unstable expression in the tested samples. Expression of a target gene LbMYB1 was also tested and compared using optimal reference genes LbCML38 and LbRH52, mediocre reference gene LbActin7, and poor reference gene LbHSP90 as normalization standards, respectively. As expected, calculation of the target gene expression by normalization against LbCML38, LbActin7 or LbHSP90 showed significant differences. Our findings suggest that LbCML38 and LbRH52 can be used as reference genes for gene expression analysis in Goji. PMID:27841319
Riemer, Angelika B; Keskin, Derin B; Reinherz, Ellis L
2012-08-01
Based on the exquisite sensitivity, reproducibility and wide dynamic range of quantitative reverse-transcription real-time polymerase chain reaction (qRT-PCR), it is currently the gold standard for gene expression studies. Target gene expression is calculated relative to a stably expressed reference gene. An ideal reference should be uniformly expressed during all experimental conditions within the given experimental system. However, no commonly applicable 'best' reference gene has been identified. Thus, endogenous controls must be determined for every experimental system. As no appropriate reference genes have been reported for immunological studies in keratinocytes, we aimed at identifying and validating a set of endogenous controls for these settings. An extensive validation of sixteen possible endogenous controls in a panel of 8 normal and transformed keratinocyte cell lines in experimental conditions with and without interferon-γ was performed. RNA and cDNA quality was stringently controlled. Candidate reference genes were assessed by TaqMan® qRT-PCR. Two different statistical algorithms were used to determine the most stably and reproducibly expressed housekeeping genes. mRNA abundance was compared, and reference genes with widely different ranges of expression than possible target genes were excluded. Subsequent geNorm and NormFinder analyses identified GAPDH, PGK1, IPO8 and PPIA as the most stably expressed genes in the keratinocyte panel under the given experimental conditions. We conclude that the geometric mean of the expression values of these four genes represents a robust normalization factor for qRT-PCR analyses in interferon-γ-dependent gene expression studies in keratinocytes. The methodology and results herein may help other researchers by facilitating their choice of reference genes.
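The multi-gene normalization factor described in this entry is, in geNorm fashion, simply the geometric mean of the reference genes' relative quantities. A minimal sketch follows; the gene names match the abstract, but all Cq values, the calibrator sample, and the assumed 100% PCR efficiency are invented for illustration:

```python
import math

def relative_quantity(cq, calibrator_cq, efficiency=2.0):
    """Convert a quantification cycle (Cq) into a quantity relative to a
    calibrator sample via efficiency**ΔCq (E = 2 assumes 100% efficiency)."""
    return efficiency ** (calibrator_cq - cq)

def normalization_factor(ref_quantities):
    """Geometric mean of the relative quantities of the reference genes."""
    return math.prod(ref_quantities) ** (1.0 / len(ref_quantities))

# Invented Cq values for one treated sample vs. a calibrator sample:
refs =        {"GAPDH": 18.0, "PGK1": 19.5, "IPO8": 24.0, "PPIA": 17.5}
calibrators = {"GAPDH": 18.5, "PGK1": 20.0, "IPO8": 24.5, "PPIA": 18.0}
nf = normalization_factor(
    [relative_quantity(refs[g], calibrators[g]) for g in refs])
target = relative_quantity(28.0, 29.0)  # target gene, same sample pair
normalized_expression = target / nf
```

Averaging in log space (which is what a geometric mean does for exponentially amplified quantities) keeps any single drifting reference gene from dominating the factor, which is why several entries in this list recommend two to four reference genes rather than one.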
Barragán, M; Martínez, A; Llonch, S; Pujol, A; Vernaeve, V; Vassena, R
2015-07-01
Although the male gamete participates in a significant proportion of infertility cases, there are currently no proven molecular markers of sperm quality. The search for significant gene expression markers is partially hindered by the lack of a recognized set of reference genes (RGs) to normalize reverse transcription quantitative PCR (RT-qPCR) data across studies. The aim of this study is to define a set of RGs in assisted reproduction patients undergoing different sample collection and RNA isolation methods. Twenty-two normozoospermic men were included in the study. From each man, semen was either cryopreserved by slow freezing or analyzed fresh, and, for each, RNA was extracted with either phenol-free or phenol-based methods. In two cases, both methods were used to isolate RNA. Twenty putative RGs were analyzed and their mRNA abundance across samples was estimated by RT-qPCR. To determine the genes whose steady-state mRNA abundance remains unchanged, three different algorithms (geNorm, BestKeeper and NormFinder) were applied to the qPCR data. We found that RGs such as GAPDH or ACTB, useful in other biological contexts, cannot be used as reference for human spermatozoa. It is possible to compare gene expression from fresh and cryopreserved sperm samples using the same isolation method, while the mRNA abundance of expressed genes becomes different depending on the RNA isolation technique employed. In our conditions, the most appropriate RGs for RT-qPCR analysis were RPLP1, RPL13A, and RPLP2. Published discrepancies in gene expression studies in human spermatozoa may be due in part to inappropriate RGs selection, suggesting a possible different interpretation of PCR data in several reports, which were normalized using unstable RGs.
Wu, Zhi-Jun; Tian, Chang; Jiang, Qian; Li, Xing-Hui; Zhuang, Jing
2016-01-01
Tea plant (Camellia sinensis) leaf is an important non-alcoholic beverage resource. The application of quantitative real-time polymerase chain reaction (qRT-PCR) is of profound significance for gene expression studies of tea plant, especially when applied to tea leaf development and metabolism. In this study, nine candidate reference genes (i.e., CsACT7, CsEF-1α, CseIF-4α, CsGAPDH, CsPP2A, CsSAND, CsTBP, CsTIP41, and CsTUB) of C. sinensis were cloned. The quantitative expression data of these genes were investigated in five tea leaf developmental stages (i.e., 1st, 2nd, 3rd, 4th, and older leaves) and normal growth tea leaves subjected to five hormonal stimuli (i.e., ABA, GA, IAA, MeJA, and SA), and gene expression stability was calculated using three common statistical algorithms, namely, geNorm, NormFinder, and BestKeeper. Results indicated that CsTBP and CsTIP41 were the most stable genes in tea leaf development and CsTBP was the best gene under hormonal stimuli; by contrast, the CsGAPDH and CsTUB genes showed the least stability. The gene expression profile of the CsNAM gene was analyzed to confirm the validity of the reference genes in this study. Our data provide a basis for the selection of reference genes for future biological research into the leaf development and hormonal stimuli of C. sinensis. PMID:26813576
Tumor suppressor microRNAs are downregulated in myelodysplastic syndrome with spliceosome mutations
Aslan, Derya; Garde, Christian; Nygaard, Mette Katrine; Helbo, Alexandra Søgaard; Dimopoulos, Konstantinos; Hansen, Jakob Werner; Severinsen, Marianne Tang; Treppendahl, Marianne Bach; Sjø, Lene Dissing; Grønbæk, Kirsten; Kristensen, Lasse Sommer
2016-01-01
Spliceosome mutations are frequently observed in patients with myelodysplastic syndromes (MDS). However, it is largely unknown how these mutations contribute to the disease. MicroRNAs (miRNAs) are small noncoding RNAs, which have been implicated in most human cancers due to their role in post-transcriptional gene regulation. The aim of this study was to analyze the impact of spliceosome mutations on the expression of miRNAs in a cohort of 34 MDS patients. In total, the expression of 76 miRNAs, including mirtrons and splice site overlapping miRNAs, was accurately quantified using reverse transcriptase quantitative PCR. The majority of the studied miRNAs have previously been implicated in MDS. Stably expressed miRNA genes for normalization of the data were identified using the GeNorm and NormFinder algorithms. High-resolution melting assays covering all mutational hotspots within SF3B1, SRSF2, and U2AF1 (U2AF35) were developed, and all detected mutations were confirmed by Sanger sequencing. Overall, canonical miRNAs were downregulated in spliceosome mutated samples compared to wild-type (P = 0.002), and samples from spliceosome mutated patients clustered together in hierarchical cluster analyses. Among the most downregulated miRNAs were several tumor-suppressor miRNAs, including several let-7 family members, miR-423, and miR-103a. Finally, we observed that the predicted targets of the most downregulated miRNAs were involved in apoptosis, hematopoiesis, and acute myeloid leukemia, among other cancer- and metabolic pathways. Our data indicate that spliceosome mutations may play an important role in MDS pathophysiology by affecting the expression of tumor suppressor miRNA genes involved in the development and progression of MDS. PMID:26848861
Zhang, YuanYuan; Hua, Chaoju; Wang, Zishuai; Li, Kui
2016-01-01
The selection of suitable reference genes is crucial to accurately evaluate and normalize the relative expression level of target genes for gene function analysis. However, commonly used reference genes have variable expression levels in developing skeletal muscle. There are few reports that systematically evaluate the expression stability of reference genes across prenatal and postnatal developing skeletal muscle in mammals. Here, we used quantitative PCR to examine the expression levels of 15 candidate reference genes (ACTB, GAPDH, RNF7, RHOA, RPS18, RPL32, PPIA, H3F3, API5, B2M, AP1S1, DRAP1, TBP, WSB, and VAPB) in porcine skeletal muscle at 26 different developmental stages (15 prenatal and 11 postnatal periods). We evaluated gene expression stability using the computer algorithms geNorm, NormFinder, and BestKeeper. Our results indicated that GAPDH and ACTB had the greatest variability among the candidate genes across prenatal and postnatal stages of skeletal muscle development. RPS18, API5, and VAPB had stable expression levels in prenatal stages, whereas API5, RPS18, RPL32, and H3F3 had stable expression levels in postnatal stages. API5 and H3F3 expression levels had the greatest stability in all tested prenatal and postnatal stages, and were the most appropriate reference genes for gene expression normalization in developing skeletal muscle. Our data provide valuable information for gene expression analysis during different stages of skeletal muscle development in mammals. This information can provide a valuable guide for the analysis of human diseases. PMID:27994956
Li, Xiaoshuang; Zhang, Daoyuan; Li, Haiyan; Gao, Bei; Yang, Honglan; Zhang, Yuanming; Wood, Andrew J.
2015-01-01
Syntrichia caninervis is the dominant bryophyte of the biological soil crusts found in the Gurbantunggut desert. The extreme desert environment is characterized by prolonged drought, temperature extremes, high radiation and frequent cycles of hydration and dehydration. S. caninervis is an ideal organism for the identification and characterization of genes related to abiotic stress tolerance. Reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) expression analysis is a powerful analytical technique that requires the use of stable reference genes. Using available S. caninervis transcriptome data, we selected 15 candidate reference genes and analyzed their relative expression stabilities in S. caninervis gametophores exposed to a range of abiotic stresses or a hydration-desiccation-rehydration cycle. The programs geNorm, NormFinder, and RefFinder were used to assess and rank the expression stability of the 15 candidate genes. The stability ranking results of reference genes under each specific experimental condition showed high consistency using different algorithms. For abiotic stress treatments, the combination of two genes (α-TUB2 and CDPK) were sufficient for accurate normalization. For the hydration-desiccation-rehydration process, the combination of two genes (α-TUB1 and CDPK) were sufficient for accurate normalization. 18S was among the least stable genes in all of the experimental sets and was unsuitable as reference gene in S. caninervis. This is the first systematic investigation and comparison of reference gene selection for RT-qPCR work in S. caninervis. This research will facilitate gene expression studies in S. caninervis, related moss species from the Syntrichia complex and other mosses. PMID:25699066
Lemma, Silvia; Avnet, Sofia; Salerno, Manuela; Chano, Tokuhiro; Baldini, Nicola
2016-01-01
The characterization of the cancer stem cell (CSC) subpopulation, through the comparison of its gene expression signature with respect to the native cancer cells, is particularly important for the identification of novel and more effective anticancer strategies. However, CSC have peculiar characteristics in terms of adhesion, growth, and metabolism that possibly imply a different modulation of the expression of the most commonly used housekeeping genes (HKG), like β-actin (ACTB). Although it is crucial to identify which are the most stable HKG to normalize the data derived from quantitative Real-Time PCR analysis to obtain robust and consistent results, an exhaustive validation of reference genes in CSC is still missing. Here, we isolated CSC spheres from different musculoskeletal sarcomas and carcinomas as a model to investigate the stability of the mRNA expression of 15 commonly used HKG, with respect to the native cells. The selected genes were analysed for the coefficient of variation and compared using the popular algorithms NormFinder and geNorm to evaluate stability ranking. As a result, we found that: 1) TATA-binding protein (TBP), Tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein zeta polypeptide (YWHAZ), Peptidylprolyl isomerase A (PPIA), and Hydroxymethylbilane synthase (HMBS) are the most stable HKG for the comparison between CSC and native cells; 2) at least four reference genes should be considered for robust results; 3) the use of ACTB is not recommended; and 4) specific HKG should be considered for studies that are focused only on a specific tumor type, like sarcoma or carcinoma. Our results should be taken into consideration for all studies of gene expression analysis of CSC, and will substantially contribute to future investigations aimed at identifying novel anticancer therapies based on CSC targeting.
Zhang, Jian; Hu, Yong-hua; Sun, Bo-guang; Xiao, Zhi-zhong; Sun, Li
2013-04-15
Disease outbreaks caused by iridoviruses are known to affect farmed flounder (Paralichthys olivaceus) and turbot (Scophthalmus maximus). To facilitate quantitative real time RT-PCR (qRT-PCR) analysis of gene expression in flounder and turbot during viral infection, in this study we examined the potential of 9 housekeeping genes of flounder and turbot as internal references for qRT-PCR under conditions of experimental infection with megalocytivirus, a member of the Iridoviridae family. The mRNA levels of the 9 housekeeping genes in the brain, gill, heart, intestine, kidney, liver, muscle, and spleen of flounder and turbot were determined by qRT-PCR at 24h and 72h post-viral infection, and the expression stabilities of the genes were determined with the geNorm and NormFinder algorithms. The results showed that (i) viral infection induced significant changes in the mRNA levels of all the examined genes in a manner that was dependent on both tissue type and infection stage; (ii) for a given time point of infection, stability predictions made by the two algorithms were highly consistent for most tissues; (iii) the optimum reference genes differed at different infection time points at least in some tissues; (iv) at both examined time points, no common reference genes were identified across all tissue types. These results indicate that when studying gene expression in flounder and turbot in relation to viral infection, different internal references may have to be used not only for different tissues but also for different infection stages.
Identification of reference genes for qPCR analysis during hASC long culture maintenance
Palombella, Silvia; Pirrone, Cristina; Cherubino, Mario; Valdatta, Luigi; Bernardini, Giovanni
2017-01-01
To date, quantitative PCR-based assays are the most common method for characterizing or confirming gene expression patterns and comparing mRNA levels in different sample populations. Since this technique is relatively easy and low cost compared to other methods of characterization, e.g. flow cytometry, we used it to typify human adipose-derived stem cells (hASCs). hASCs possess several characteristics that make them attractive for scientific research and clinical applications. Accurate normalization of gene expression relies on good selection of reference genes, and the best way to choose them appropriately is to follow the common rule of the “Best 3”: at least three reference genes, three different validation software tools and three sample replicates. Analysis was performed on hASCs cultivated until the eleventh cell confluence using twelve candidate reference genes, initially selected from the literature, whose stability was evaluated by the algorithms NormFinder, BestKeeper, RefFinder and IdealRef, a home-made version of GeNorm. The best gene panel (RPL13A, RPS18, GAPDH, B2M, PPIA and ACTB), determined in one patient by IdealRef calculation, was then investigated in four other donors. Although the patients demonstrated a certain gene expression variability, we can assert that ACTB is the most unreliable gene, whereas the ribosomal protein genes (RPL13A and RPS18) show the least variability in their mRNA expression. This work underlines the importance of validating reference genes before conducting each experiment and proposes a free software tool as an alternative to those existing. PMID:28182697
Reddy, Dumbala Srinivas; Bhatnagar-Mathur, Pooja; Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar
2016-01-01
Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes, including traditional and new generation reference genes, were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of the UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While the PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR-based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of the aquaporin genes PIP1;4 and TIP3;1 in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses.
Reference gene alternatives to Gapdh in rodent and human heart failure gene expression studies
2010-01-01
Background Quantitative real-time RT-PCR (RT-qPCR) is a highly sensitive method for mRNA quantification, but requires invariant expression of the chosen reference gene(s). In pathological myocardium, there is limited information on suitable reference genes other than the commonly used Gapdh mRNA and 18S ribosomal RNA. Our aim was to evaluate and identify suitable reference genes in human failing myocardium, in rat and mouse post-myocardial infarction (post-MI) heart failure and across developmental stages in fetal and neonatal rat myocardium. Results The abundance of Arbp, Rpl32, Rpl4, Tbp, Polr2a, Hprt1, Pgk1, Ppia and Gapdh mRNA and 18S ribosomal RNA in myocardial samples was quantified by RT-qPCR. The expression variability of these transcripts was evaluated by the geNorm and NormFinder algorithms and by a variance component analysis method. Biological variability was a greater contributor to sample variability than either repeated reverse transcription or PCR reactions. Conclusions The most stable reference genes were Rpl32, Gapdh and Polr2a in mouse post-infarction heart failure, Polr2a, Rpl32 and Tbp in rat post-infarction heart failure and Rpl32 and Pgk1 in human heart failure (ischemic disease and cardiomyopathy). The overall most stable reference genes across all three species were Rpl32 and Polr2a. In rat myocardium, all reference genes tested showed substantial variation with developmental stage, with Rpl32 being the most stable among the tested genes. PMID:20331858
Wu, Jianyang; Zhang, Hongna; Liu, Liqin; Li, Weicai; Wei, Yongzan; Shi, Shengyou
2016-01-01
Reverse transcription quantitative PCR (RT-qPCR) is an accurate and sensitive method for gene expression analysis, but the veracity and reliability of its results depend on the selection of appropriate reference genes. To date, several reliable reference gene validations have been reported in fruit trees, but none have been done on preharvest and postharvest longan fruits. In this study, 12 candidate reference genes, namely, CYP, RPL, GAPDH, TUA, TUB, Fe-SOD, Mn-SOD, Cu/Zn-SOD, 18SrRNA, Actin, Histone H3, and EF-1a, were selected. The expression stability of these genes in 150 longan samples was evaluated and analyzed using the geNorm and NormFinder algorithms. Preharvest samples consisted of seven experimental sets, including different developmental stages, organs, hormone stimuli (NAA, 2,4-D, and ethephon) and abiotic stresses (bagging and girdling with defoliation). Postharvest samples consisted of different temperature treatments (4 and 22°C) and varieties. Our findings indicate that appropriate reference gene(s) should be selected for each experimental condition. Our data further showed that the commonly used reference gene Actin does not exhibit stable expression across experimental conditions in longan. Expression levels of the DlACO gene, a key gene involved in regulating fruit abscission under girdling with defoliation treatment, were evaluated to validate our findings. In conclusion, our data provide a useful framework for the choice of suitable reference genes across different experimental conditions for RT-qPCR analysis of preharvest and postharvest longan fruits. PMID:27375640
Zhang, Juan; Tang, Hongju; Zhang, Yuqing; Deng, Ruyuan; Shao, Li; Liu, Yun; Li, Fengying; Wang, Xiao; Zhou, Libin
2014-05-01
Quantitative reverse transcription PCR (qRT-PCR) is becoming increasingly important in the effort to gain insight into the molecular mechanisms underlying adipogenesis. However, the expression profile of a target gene may be misinterpreted due to the unstable expression of the reference genes under different experimental conditions. Therefore, in this study, we investigated the expression stability of 10 commonly used reference genes during 3T3-L1 adipocyte differentiation. The mRNA expression levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and transferrin receptor (TFRC) significantly increased during the course of 3T3-L1 adipocyte differentiation, an increase that was suppressed by berberine, an inhibitor of adipogenesis. Three popular algorithms, GeNorm, NormFinder and BestKeeper, identified 18S ribosomal RNA and hydroxymethylbilane synthase (HMBS) as the most stable reference genes, while GAPDH and TFRC were the least stable ones. Peptidylprolyl isomerase A [PPIA (cyclophilin A)], ribosomal protein, large, P0 (36-B4), beta-2-microglobulin (B2M), α1-tubulin, hypoxanthine-guanine phosphoribosyltransferase (HPRT) and β-actin showed relatively stable expression levels. The choice of reference genes with various expression stabilities exerted a profound influence on the expression profiles of 2 target genes, peroxisome proliferator-activated receptor (PPAR)γ2 and C/EBPα. In addition, western blot analysis revealed that the increased protein expression of GAPDH was markedly inhibited by berberine during adipocyte differentiation. This study highlights the importance of selecting suitable reference genes for qRT-PCR studies of gene expression during the process of adipogenesis.
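The abstract above notes that reference-gene choice profoundly changes the apparent expression profile of target genes. A short sketch of the standard 2^-ΔΔCt calculation (Livak and Schmittgen, 2001) shows the mechanism: a reference that itself rises during the experiment, like the GAPDH reported above, makes an unchanged target look down-regulated. The Cq values are hypothetical, chosen only to illustrate the arithmetic.

```python
def fold_change_ddct(ct_target_treated, ct_target_control,
                     ct_ref_treated, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method, assuming ~100%
    amplification efficiency for both assays."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_treated - d_control)

# hypothetical Cq values: the target is unchanged (Ct 24 in both groups),
# but a reference whose Ct drops from 20 to 18 (i.e. a 4-fold increase
# in its own expression) makes the target appear 4-fold down-regulated.
stable_ref   = fold_change_ddct(24.0, 24.0, 20.0, 20.0)  # -> 1.0
unstable_ref = fold_change_ddct(24.0, 24.0, 18.0, 20.0)  # -> 0.25
```

This is exactly why the least stable candidates (GAPDH, TFRC) distort the measured profiles of PPARγ2 and C/EBPα in the study above.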
Karuppaiya, Palaniyandi; Yan, Xiao-Xue; Liao, Wang; Chen, Fang; Tang, Lin
2017-01-01
Physic nut (Jatropha curcas L.) seed oil is a natural resource for the alternative production of fossil fuel. Seed oil production depends mainly on seed yield, which is restricted by the low ratio of staminate to pistillate flowers. Furthermore, the mechanism of physic nut flower sex differentiation is not yet fully understood. Quantitative real-time polymerase chain reaction is a reliable and widely used technique to quantify gene expression patterns in biological samples. However, for accurate qRT-PCR, appropriate reference genes are required to quantify target gene levels. Hence, the present study aimed to identify stable reference genes in staminate and pistillate flowers of J. curcas. In this study, 10 candidate reference genes were selected and evaluated for their expression stability in staminate and pistillate flowers, and their stability was validated by five different algorithms (ΔCt, BestKeeper, NormFinder, GeNorm and RefFinder). TUB and EF were found to be the two most stably expressed reference genes for staminate flowers, while GAPDH1 and EF were the most stable for pistillate flowers. Finally, RT-qPCR assays of the target gene AGAMOUS using the identified most stable reference genes confirmed their reliability across different stages of flower development. AGAMOUS expression levels at different stages were further supported by gene copy number analysis. The present study therefore provides guidance for selecting appropriate reference genes for analyzing the expression patterns of floral developmental genes in staminate and pistillate flowers of J. curcas. PMID:28234941
Zhang, Yan; Zeng, Chang-Jun; He, Lian; Ding, Li; Tang, Ke-Yi; Peng, Wen-Pei
2015-03-01
It is important to select high-quality reference genes for the accurate interpretation of quantitative reverse transcription polymerase chain reaction data, in particular for certain miRNAs that may demonstrate unstable expression. Although several studies have attempted to validate reference miRNA genes in the porcine testis, spermatozoa, and other tissues, no validation studies have been carried out on cryopreserved boar spermatozoa. In this study, 15 commonly used reference miRNA genes (5S, let-7c-5p, ssc-miR-16-5p, ssc-miR-17-5p, ssc-miR-20a, ssc-miR-23a, ssc-miR-24-3p, ssc-miR-26a, ssc-miR-27a-3p, ssc-miR-92a, ssc-miR-103-3p, ssc-miR-106a, ssc-miR-107-3p, ssc-miR-186, and ssc-miR-221-3p) were selected to evaluate the expression stability of target miRNAs in boar spermatozoa under different experimental conditions and concentrations. The stability of the expression of these reference miRNAs across each sample was evaluated using geNorm, NormFinder, and BestKeeper software. The results showed that ssc-miR-186 (mean rank value = 5.00), ssc-miR-23a (5.33), and ssc-miR-27a (5.33) were the most suitable reference genes using three different statistical algorithms and comprehensive ranking. The identification of these reference miRNAs will allow for more accurate quantification of the changes in miRNA expression during cryopreservation of boar spermatozoa.
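The study above also screens candidates with BestKeeper, which, unlike geNorm and NormFinder, works directly on raw Cq values. A simplified sketch of its per-gene descriptive statistics (Pfaffl et al., 2004) follows; the full tool additionally correlates each candidate with the geometric-mean BestKeeper index, and the Cq values here are hypothetical.

```python
import statistics

def bestkeeper_stats(cq):
    """BestKeeper-style descriptive statistics on raw Cq values:
    per-gene standard deviation and coefficient of variation.
    Candidates with SD(Cq) > 1 are conventionally considered too
    variable to serve as references."""
    out = {}
    for gene, values in cq.items():
        sd = statistics.stdev(values)          # sample SD of Cq
        cv = 100 * sd / statistics.mean(values)  # CV as % of mean Cq
        out[gene] = {"sd": round(sd, 2), "cv_pct": round(cv, 2)}
    return out

cq = {  # hypothetical Cq values across six sperm samples
    "ssc-miR-186": [22.1, 22.3, 22.0, 22.2, 22.1, 22.3],
    "5S":          [18.0, 20.5, 17.2, 21.0, 19.1, 16.8],
}
stats = bestkeeper_stats(cq)
print(stats)  # the stable candidate has SD(Cq) well under 1
```

Because Cq is a log2-scale quantity, an SD(Cq) of 1 already corresponds to roughly 2-fold variation in expression, which motivates the SD > 1 exclusion rule.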
Optimal Reference Genes for Gene Expression Normalization in Trichomonas vaginalis.
dos Santos, Odelta; de Vargas Rigo, Graziela; Frasson, Amanda Piccoli; Macedo, Alexandre José; Tasca, Tiana
2015-01-01
Trichomonas vaginalis is the etiologic agent of trichomonosis, the most common non-viral sexually transmitted disease worldwide. This infection is associated with several health consequences, including cervical and prostate cancers and HIV acquisition. Gene expression analysis has been facilitated by the available genome sequence and large-scale transcriptomes of T. vaginalis, particularly using quantitative real-time polymerase chain reaction (qRT-PCR), one of the most widely used methods for molecular studies. Reference genes for normalization are crucial to ensure the accuracy of this method. However, to the best of our knowledge, a systematic validation of reference genes has not been performed for T. vaginalis. In this study, the transcripts of nine candidate reference genes were quantified using qRT-PCR under different cultivation conditions, and the stability of these genes was compared using the geNorm and NormFinder algorithms. The most stable reference genes were α-tubulin, actin and DNATopII; conversely, the widely used T. vaginalis reference genes GAPDH and β-tubulin were less stable. The PFOR gene was used to validate the reliability of these candidate reference genes. As expected, the PFOR gene was upregulated when the trophozoites were cultivated with ferrous ammonium sulfate and the DNATopII, α-tubulin and actin genes were used as normalizing genes. By contrast, the PFOR gene appeared downregulated when the GAPDH gene was used as an internal control, leading to misinterpretation of the data. These results provide an important starting point for reference gene selection and gene expression analysis in qRT-PCR studies of T. vaginalis.
Niu, Guanglin; Yang, Yalan; Zhang, YuanYuan; Hua, Chaoju; Wang, Zishuai; Tang, Zhonglin; Li, Kui
2016-01-01
The selection of suitable reference genes is crucial to accurately evaluate and normalize the relative expression level of target genes for gene function analysis. However, commonly used reference genes have variable expression levels in developing skeletal muscle. There are few reports that systematically evaluate the expression stability of reference genes across prenatal and postnatal developing skeletal muscle in mammals. Here, we used quantitative PCR to examine the expression levels of 15 candidate reference genes (ACTB, GAPDH, RNF7, RHOA, RPS18, RPL32, PPIA, H3F3, API5, B2M, AP1S1, DRAP1, TBP, WSB, and VAPB) in porcine skeletal muscle at 26 different developmental stages (15 prenatal and 11 postnatal periods). We evaluated gene expression stability using the computer algorithms geNorm, NormFinder, and BestKeeper. Our results indicated that GAPDH and ACTB had the greatest variability among the candidate genes across prenatal and postnatal stages of skeletal muscle development. RPS18, API5, and VAPB had stable expression levels in prenatal stages, whereas API5, RPS18, RPL32, and H3F3 had stable expression levels in postnatal stages. API5 and H3F3 expression levels had the greatest stability in all tested prenatal and postnatal stages, and were the most appropriate reference genes for gene expression normalization in developing skeletal muscle. Our data provide valuable information for gene expression analysis during different stages of skeletal muscle development in mammals. This information can provide a valuable guide for the analysis of human diseases.
Cassan-Wang, Hua; Soler, Marçal; Yu, Hong; Camargo, Eduardo Leal O; Carocha, Victor; Ladouce, Nathalie; Savelli, Bruno; Paiva, Jorge A P; Leplé, Jean-Charles; Grima-Pettenati, Jacqueline
2012-12-01
Interest in the genomics of Eucalyptus has skyrocketed thanks to the recent sequencing of the genome of Eucalyptus grandis and to a growing number of large-scale transcriptomic studies. Quantitative reverse transcription-PCR (RT-PCR) is the method of choice for gene expression analysis and can now also be used as a high-throughput method. The selection of appropriate internal controls is becoming of utmost importance to ensure accurate expression results in Eucalyptus. To this end, we selected 21 candidate reference genes and used high-throughput microfluidic dynamic arrays to assess their expression among a large panel of developmental and environmental conditions with a special focus on wood-forming tissues. We analyzed the expression stability of these genes by using three distinct statistical algorithms (geNorm, NormFinder and ΔCt), and used principal component analysis to compare methods and rankings. We showed that the most stable genes identified depended not only on the panel of biological samples considered but also on the statistical method used. We then developed a comprehensive integration of the rankings generated by the three methods and identified the optimal reference genes for 17 distinct experimental sets covering 13 organs and tissues, as well as various developmental and environmental conditions. The expression patterns of Eucalyptus master genes EgMYB1 and EgMYB2 experimentally validated our selection. Our findings provide an important resource for the selection of appropriate reference genes for accurate and reliable normalization of gene expression data in the organs and tissues of Eucalyptus trees grown in a range of conditions including abiotic stresses.
Müller, Gabrielle do Amaral E Silva; de Lima, Daína; Zacchi, Flávia Lucena; Piazza, Rômi Sharon; Lüchmann, Karim Hahn; Mattos, Jacó Joaquim; Schlenk, Daniel; Bainy, Afonso Celso Dias
2017-02-04
Bivalves show remarkable plasticity to environmental changes and have been proposed as sentinel organisms in biomonitoring. Studies related to transcriptional analysis using quantitative real-time PCR (qRT-PCR) in these organisms have notably increased, imposing a need to identify and validate adequate reference genes for accurate and reliable analysis. In the present study, nine reference genes were selected from transcriptome data of Crassostrea brasiliana in order to identify their suitability as qRT-PCR normalizer genes. The transcriptional patterns were analyzed in gills of oysters under three different conditions: different temperatures (18°C, 24°C or 32°C) combined with phenanthrene (PHE) exposure (100 µg L-1); different salinities (10, 25 or 35 ‰) combined with PHE exposure; and exposure to 10% diesel fuel water-accommodated fraction (diesel-WAF). Reference gene stability was calculated using five algorithms (geNorm, NormFinder, BestKeeper, ΔCt, RefFinder). Transcripts of ankyrin-like (ANK), GAPDH-like and α-tubulin-like (TUBA) genes showed minor changes in the different temperature/PHE treatments. Transcripts of ANK, β-actin-like and β-tubulin-like genes showed better stability in the salinity/PHE treatment, and ANK, TUBA and 28S ribosomal protein-like genes showed the most stable transcription pattern in oysters exposed to diesel-WAF. This study constitutes the first systematic analysis of reference gene selection for qRT-PCR normalization in C. brasiliana. These genes could be employed in studies using qRT-PCR analysis under similar experimental conditions.
Xu, Xiao-Yan; Shen, Yu-Bang; Fu, Jian-Jun; Lu, Li-Qun; Li, Jia-Le
2014-02-01
Relative quantification is the strategy of choice for processing real-time quantitative reverse transcription polymerase chain reaction (RT-qPCR) data in microRNA (miRNA) expression studies. Normalization of relative quantification data is performed by comparison to reference genes. In teleost species, such as grass carp (Ctenopharyngodon idella), the determination of reference miRNAs and the optimal number that should be used has not been widely studied. In the present study, the stability of seven miRNAs (miR-126-3p, miR-101a, miR-451, miR-22a, miR-146, miR-142a-5p and miR-192) was investigated by RT-qPCR in different tissues and at different developmental stages of grass carp. Stability values were calculated with the geNorm, NormFinder, BestKeeper and Delta CT algorithms. The results showed that tissue type is an important variability factor for miRNA expression stability. All seven miRNAs had good stability values and, therefore, could be used as reference miRNAs. When all tissues and developmental stages were considered, miR-101a was the most stable miRNA. When each tissue type was considered separately, the most stable miRNAs were miR-126-3p in blood and liver, miR-101a in the gills, miR-192 in the kidney, miR-451 in the intestine and miR-22a in the brain, head kidney, spleen, heart, muscle, skin and fin. miR-126-3p was the most stable reference miRNA during developmental stages 1-5, while miR-22a was the most stable during developmental stages 6-18. Overall, this study provides valuable information about the reference miRNAs that can be used to perform appropriate normalizations when undertaking relative quantification in RT-qPCR studies of grass carp.
Hibbeler, Sascha; Scharsack, Joern P; Becker, Sven
2008-01-01
Background In recent years, the quantification of the immune response under immunological challenges, e.g. parasitism, has been a major focus of research. In this context, the expression of immune response genes in teleost fish has been surveyed for scientific and commercial purposes. Although beta-actin has been shown in teleosts and other taxa not to be the most stably expressed housekeeping gene (HKG), depending on the tissue and experimental treatment, it has nevertheless been used as a reference gene in such studies. In the three-spined stickleback, Gasterosteus aculeatus, HKGs other than beta-actin have not been established so far. Results To establish a reliable method for measuring immune gene expression in Gasterosteus aculeatus, sequences from the now available genome database and an EST library of the same species were used to select oligonucleotide primers for HKGs for quantitative reverse-transcription (RT) PCR. The expression stability of ten candidate reference genes was evaluated in three different tissues and in five parasite treatment groups, using the three algorithms BestKeeper, geNorm and NormFinder. Our results showed that in most tissues and treatments, HKGs that could not be used previously because their sequences were unknown proved to be more stably expressed than beta-actin. Conclusion As they were the most stably expressed genes in all tissues examined, we suggest using the genes for the L13a ribosomal binding protein and ubiquitin as alternative or additional reference genes in expression analysis in Gasterosteus aculeatus. PMID:18230138
Identification of reference genes for circulating microRNA analysis in colorectal cancer
Niu, Yanqin; Wu, Yike; Huang, Jinyong; Li, Qing; Kang, Kang; Qu, Junle; Li, Furong; Gou, Deming
2016-01-01
Quantitative real-time PCR (qPCR) is the most frequently used method for measuring expression levels of microRNAs (miRNAs), and is based on normalization to endogenous references. Although circulating miRNAs have been regarded as potential non-invasive biomarkers of disease, no study has so far addressed reference miRNAs for normalization in colorectal cancer. In this study we sought to identify optimal reference miRNAs for qPCR analysis across colorectal cancer patients and healthy individuals. 485 blood-derived miRNAs were profiled in serum sample pools from both colorectal cancer patients and healthy controls. Seven candidate miRNAs chosen from the profiling results, as well as three previously reported reference miRNAs, were validated using qPCR in 30 colorectal cancer patients and 30 healthy individuals, and thereafter analyzed with the statistical algorithms BestKeeper, geNorm and NormFinder. Taken together, hsa-miR-93-5p, hsa-miR-25-3p and hsa-miR-106b-5p were recommended as a set of suitable reference genes. More interestingly, the three miRNAs validated from the 485 miRNAs are derived from a single primary transcript, indicating that the cluster may be highly conserved in colorectal cancer. However, all three miRNAs differed significantly between healthy individuals and non-small cell lung cancer or breast cancer patients and could not be used as reference genes in those two types of cancer. PMID:27759076
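The study above recommends normalizing serum miRNA qPCR data against a set of three reference miRNAs rather than one. In the geNorm convention, multiple references are combined into a per-sample normalization factor via the geometric mean of their relative quantities. A sketch of that arithmetic, with hypothetical Cq values and assuming ~100% amplification efficiency:

```python
import math

def geometric_mean_nf(ref_cqs):
    """Per-sample normalization factor: geometric mean of the relative
    quantities (2^-Cq) of several reference miRNAs, following the
    geNorm multi-reference convention."""
    quantities = [2.0 ** -cq for cq in ref_cqs]
    return math.prod(quantities) ** (1 / len(quantities))

def normalized_level(target_cq, ref_cqs):
    """Relative level of a target miRNA, normalized to the factor above."""
    return (2.0 ** -target_cq) / geometric_mean_nf(ref_cqs)

# hypothetical Cq values for the recommended set (hsa-miR-93-5p,
# hsa-miR-25-3p, hsa-miR-106b-5p) in two serum samples
patient = normalized_level(28.0, [24.0, 25.0, 26.0])
control = normalized_level(30.0, [24.0, 25.0, 26.0])
# with identical reference levels, a 2-cycle Cq shift is a 4-fold difference
print(patient / control)
```

Using the geometric rather than arithmetic mean keeps the factor symmetric on the log2 scale, so no single reference dominates the normalization.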
Yang, Qi; Zou, Bo; Ren, Weibo; Ding, Yong; Wang, Zhen; Wang, Ruigang; Wang, Kai; Hou, Xiangyang
2017-01-01
Stipa grandis P. Smirn. is a dominant plant species in the typical steppe of the Xilingole Plateau of Inner Mongolia. Selection of suitable reference genes for the quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) is important for gene expression analysis and research into the molecular mechanisms underlying the stress responses of S. grandis. In the present study, 15 candidate reference genes (EF1beta, ACT, GAPDH, SamDC, CUL4, CAP, SNF2, SKIP1, SKIP5, SKIP11, UBC2, UBC15, UBC17, UCH, and HERC2) were evaluated for their stability as potential reference genes for qRT-PCR under different stresses. Four algorithms were used: GeNorm, NormFinder, BestKeeper, and RefFinder. The results showed that the most stable reference genes were different under different stress conditions: EF1beta and UBC15 during drought and salt stresses; ACT and GAPDH under heat stress; SKIP5 and UBC17 under cold stress; UBC15 and HERC2 under high pH stress; UBC2 and UBC15 under wounding stress; EF1beta and UBC17 under jasmonic acid treatment; UBC15 and CUL4 under abscisic acid treatment; and HERC2 and UBC17 under salicylic acid treatment. EF1beta and HERC2 were the most suitable genes for the global analysis of all samples. Furthermore, six target genes, SgPOD, SgPAL, SgLEA, SgLOX, SgHSP90 and SgPR1, were selected to validate the most and least stable reference genes under different treatments. Our results provide guidelines for reference gene selection for more accurate qRT-PCR quantification and will promote studies of gene expression in S. grandis subjected to environmental stress. PMID:28056110
Selection and Validation of Reference Genes for Quantitative Real-time PCR in Gentiana macrophylla
He, Yihan; Yan, Hailing; Hua, Wenping; Huang, Yaya; Wang, Zhezhi
2016-01-01
Real-time quantitative PCR (RT-qPCR or qPCR) has been extensively applied for analyzing gene expression because of its accuracy, sensitivity, and high throughput. However, an unsuitable choice of reference gene(s) can lead to a misinterpretation of results. We evaluated the stability of 10 candidates – five traditional housekeeping genes (UBC21, GAPC2, EF-1α4, UBQ10, and UBC10) and five novel genes (SAND1, FBOX, PTB1, ARP, and Expressed1) – using the transcriptome data of Gentiana macrophylla. The common statistical algorithms ΔCt, GeNorm, NormFinder, and BestKeeper were run with samples collected from plants under various experimental conditions. For normalizing expression levels from tissues at different developmental stages, GAPC2 and UBC21 had the highest rankings. Both SAND1 and GAPC2 proved to be the optimal reference genes for roots from plants exposed to abiotic stresses, while EF-1α4 and SAND1 were optimal when examining expression data from the leaves of stressed plants. Based on a comprehensive ranking of stability under different experimental conditions, we recommend SAND1 and EF-1α4 as the most suitable overall. In addition, to find a suitable reference gene and its real-time PCR assay for G. macrophylla DNA content quantification, we evaluated three target genes, WRKY30, G10H, and SLS, through qualitative and absolute quantitative PCR with leaves under elicitor-stressed experimental conditions. Arbitrary use of reference genes without previous evaluation can lead to a misinterpretation of the data. Our results will benefit future research on the expression of genes related to secoiridoid biosynthesis in this species under different experimental conditions. PMID:27446172
Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar
2016-01-01
Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes, including traditional and new generation reference genes, were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of the UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While the PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR-based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on the normalization of the aquaporin genes PIP1;4 and TIP3;1 in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study of the stability of new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses. PMID:26863232
Barros Rodrigues, Thaís; Khajuria, Chitvan; Wang, Haichuan; Matz, Natalie; Cunha Cardoso, Danielle; Valicente, Fernando Hercos; Zhou, Xuguo; Siegfried, Blair
2014-01-01
Quantitative Real-time PCR (qRT-PCR) is a powerful technique to investigate comparative gene expression. In general, normalization of results using a highly stable housekeeping gene (HKG) as an internal control is recommended and necessary. However, there are several reports suggesting that the regulation of some HKGs is affected by different conditions. The western corn rootworm (WCR), Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is a serious pest of corn in the United States and Europe. Profiling the expression of target genes related to insecticide exposure, resistance, and RNA interference has become an important experimental technique for the study of western corn rootworms; however, the lack of information on reliable HKGs under different conditions makes the interpretation of qRT-PCR results difficult. In this study, four distinct algorithms (geNorm, NormFinder, BestKeeper and delta-CT) were used to evaluate five candidate reference genes (β-actin; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; β-tubulin; RPS9, ribosomal protein S9; EF1a, elongation factor-1α) and to determine the most reliable HKG under different experimental conditions, including exposure to dsRNA and Bt toxins and among different tissues and developmental stages. Although all the HKGs tested exhibited relatively stable expression among the different treatments, some differences were noted. Among the five candidate reference genes evaluated, β-actin exhibited highly stable expression among different life stages. RPS9 exhibited the most similar pattern of expression among dsRNA treatments, and both experiments indicated that EF1a was the second most stable gene. EF1a was also the most stable for Bt exposure and among different tissues. These results will enable researchers to use more accurate and reliable normalization of qRT-PCR data in WCR experiments. PMID:25356627
Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria
2016-05-01
Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability of eight primer pairs amplifying four genes commonly used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples as a single set did not result in the selection of any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs using the FLOWERING LOCUS T gene for normalization of gene expression in kiwifruit.
Siegfried, Blair D.; Zhou, Xuguo
2015-01-01
Reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) is a reliable, rapid, and reproducible technique for measuring and evaluating changes in gene expression. To facilitate gene expression studies and obtain more accurate RT-qPCR data, normalization relative to stable reference genes is required. In this study, expression profiles of seven candidate reference genes, including β-actin (Actin), elongation factor 1 α (EF1A), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), cyclophilin A (CypA), vacuolar-type H+-ATPase (ATPase), 28S ribosomal RNA (28S), and 18S ribosomal RNA (18S) from Hippodamia convergens were investigated. H. convergens is an abundant predatory species in the New World, and has been widely used as a biological control agent against sap-sucking insect pests, primarily aphids. A total of four analytical methods, geNorm, NormFinder, BestKeeper, and the ΔCt method, were employed to evaluate the performance of these seven genes as endogenous controls under diverse experimental conditions. Additionally, RefFinder, a comprehensive evaluation platform integrating the four above-mentioned algorithms, ranked the overall stability of these candidate genes. A suite of reference genes was specifically recommended for each experimental condition. Among them, 28S, EF1A, and CypA were the best reference genes across different developmental stages; GAPDH, 28S, and CypA were most stable in different tissues; GAPDH and CypA were most stable in female and male adults and under different photoperiod conditions; 28S and EF1A were most stable under a range of temperatures; and Actin and CypA were most stable under dietary RNAi conditions. This work establishes a standardized RT-qPCR analysis in H. convergens. Additionally, this study lays a foundation for functional genomics research in H. convergens and sheds light on the ecological risk assessment of RNAi-based biopesticides on this non-target biological control agent. PMID:25915640
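Several studies above use RefFinder to combine the orderings produced by geNorm, NormFinder, BestKeeper and the ΔCt method into one comprehensive ranking. RefFinder's overall rank is commonly described as the geometric mean of each gene's per-algorithm rank; a sketch under that assumption, with hypothetical orderings:

```python
import math

def overall_ranking(rankings):
    """RefFinder-style aggregation: geometric mean of the rank each
    algorithm assigns to a gene (lower = more stable overall).
    `rankings` maps algorithm name -> list of genes ordered from most
    to least stable."""
    genes = rankings[next(iter(rankings))]
    geo = {}
    for g in genes:
        ranks = [order.index(g) + 1 for order in rankings.values()]
        geo[g] = math.prod(ranks) ** (1 / len(ranks))
    return sorted(genes, key=lambda g: geo[g])

# hypothetical per-algorithm orderings for three candidates
rankings = {
    "geNorm":     ["28S", "EF1A", "GAPDH"],
    "NormFinder": ["EF1A", "28S", "GAPDH"],
    "BestKeeper": ["28S", "GAPDH", "EF1A"],
    "deltaCt":    ["28S", "EF1A", "GAPDH"],
}
overall = overall_ranking(rankings)
print(overall)  # consensus ordering, most stable first
```

Because it averages ranks rather than raw stability scores, this aggregation is robust to the different scales of the four underlying algorithms, but it can mask disagreements among them.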
Müller, Oliver A.; Grau, Jan; Thieme, Sabine; Prochaska, Heike; Adlung, Norman; Sorgatz, Anika; Bonas, Ulla
2015-01-01
The Gram-negative bacterium Xanthomonas campestris pv. vesicatoria (Xcv) causes bacterial spot disease of pepper and tomato by direct translocation of type III effector proteins into the plant cell cytosol. Once in the plant cell the effectors interfere with host cell processes and manipulate the plant transcriptome. Quantitative RT-PCR (qRT-PCR) is usually the method of choice to analyze transcriptional changes of selected plant genes. Reliable results depend, however, on measuring stably expressed reference genes that serve as internal normalization controls. We identified the most stably expressed tomato genes based on microarray analyses of Xcv-infected tomato leaves and evaluated the reliability of 11 genes for qRT-PCR studies in comparison to four traditionally employed reference genes. Three different statistical algorithms, geNorm, NormFinder and BestKeeper, concordantly determined the superiority of the newly identified reference genes. The most suitable reference genes encode proteins with homology to PHD finger family proteins and the U6 snRNA-associated protein LSm7. In addition, we identified pepper orthologs and validated several genes as reliable normalization controls for qRT-PCR analysis of Xcv-infected pepper plants. The newly identified reference genes will be beneficial for future qRT-PCR studies of the Xcv-tomato and Xcv-pepper pathosystems, as well as for the identification of suitable normalization controls for qRT-PCR studies of other plant-pathogen interactions, especially, if related plant species are used in combination with bacterial pathogens. PMID:26313760
Storage capacity of the Tilinglike Learning Algorithm
NASA Astrophysics Data System (ADS)
Buhot, Arnaud; Gordon, Mirta B.
2001-02-01
The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered.
A Robustly Stabilizing Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Acikmese, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
Active Processor Scheduling Using Evolutionary Algorithms
2002-12-01
Active Processor Scheduling Using Evolutionary Algorithms. I. Introduction: A distributed system offers the ability to run applications across ... calculations are made. This model is sometimes referred to as a form of the island model of evolutionary computation because each population is evolved ... Reference: Evolutionary Algorithms for Solving Multi-Objective Problems. Genetic Algorithms and Evolutionary Computation, New York: Kluwer Academic Publishers, 2002
Learning Intelligent Genetic Algorithms Using Japanese Nonograms
ERIC Educational Resources Information Center
Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen
2012-01-01
An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…
Predicting Protein Structure Using Parallel Genetic Algorithms.
1994-12-01
Predicting Protein Structure Using Parallel Genetic Algorithms. AFIT thesis AFIT/GCS/ENG/94D-03, George H. ... Approved for public release; distribution unlimited. Contents include: 1.2 Genetic Algorithms; 1.3 The Protein Folding Problem
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
An algorithm on distributed mining association rules
NASA Astrophysics Data System (ADS)
Xu, Fan
2005-12-01
With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas, and mining association rules in distributed databases is a critical task. Algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on optimizing the data partition so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems that are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm is an enhancement of the CD algorithm; however, both CD and FDM are based on a net structure and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension in parallel computation.
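To make the count-distribution (CD) idea in the abstract above concrete, here is a minimal sketch: each site counts candidate itemset supports over its own horizontal partition, and only the counts are exchanged and merged against a global threshold. The data, partition layout, and support threshold are invented for illustration; this is not the paper's implementation.

```python
from itertools import combinations

def local_counts(transactions, k):
    """Count support of all k-itemsets in one partition (one 'site')."""
    counts = {}
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] = counts.get(itemset, 0) + 1
    return counts

def count_distribution(partitions, k, min_support):
    """CD-style step: merge per-site counts, then apply the global threshold."""
    total = {}
    for part in partitions:
        for itemset, c in local_counts(part, k).items():
            total[itemset] = total.get(itemset, 0) + c
    return {i: c for i, c in total.items() if c >= min_support}

sites = [
    [{"a", "b"}, {"a", "c"}],                    # data already partitioned per site
    [{"a", "b"}, {"b", "c"}, {"a", "b"}],
]
frequent = count_distribution(sites, 2, 3)
print(frequent)  # {('a', 'b'): 3}
```

Because only count dictionaries cross site boundaries, the raw transactions never need to be gathered or repartitioned, which is the CD family's appeal for naturally distributed data.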
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations and minimising human perception error between the real and simulator driver. One of the main limitations of the classical washout filters is that they are tuned using the worst-case scenario method, which is based on trial and error and is affected by driving and programming experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software package. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
Empirical Studies of the Value of Algorithm Animation in Algorithm Understanding
1993-08-01
A series of studies is presented using algorithm animation to teach computer algorithms. These studies are organized into three components: eliciting ... lecture with experimenter-prepared data sets. This work has implications for the design and use of animated algorithms in teaching computer algorithms and
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
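The basic genetic-algorithm concepts named in the abstract above (population, fitness-based selection, crossover, mutation) can be sketched in a few lines. This toy maximizes the number of 1-bits in a string (the classic OneMax problem); it illustrates the general mechanism only and has no connection to the Splicer tool itself.

```python
import random

random.seed(0)  # reproducible toy run

def fitness(bits):            # OneMax: count of 1s in the chromosome
    return sum(bits)

def crossover(a, b):          # single-point crossover
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(bits, rate=0.02):  # flip each bit with small probability
    return [b ^ (random.random() < rate) for b in bits]

def ga(n_bits=32, pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():           # binary tournament selection
            a, b = random.sample(pop, 2)
            return max(a, b, key=fitness)
        pop = [mutate(crossover(pick(), pick())) for _ in range(pop_size)]
    return max(pop, key=fitness)

best = ga()
print(fitness(best))  # close to the optimum of 32
```

Selection pressure plus recombination drives the population toward the all-ones string; mutation keeps diversity so the search does not stall.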
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Towards General Algorithms for Grammatical Inference
NASA Astrophysics Data System (ADS)
Clark, Alexander
Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous L* algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.
An Optimal Class Association Rule Algorithm
NASA Astrophysics Data System (ADS)
Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie
Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach because it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results show that OCARA outperforms C4.5, CBA and RMR on eight UCI data sets.
Efficient demultiplexing algorithm for noncontiguous carriers
NASA Technical Reports Server (NTRS)
Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.
1992-01-01
A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, as at satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). A comparison of the transform technique used in this algorithm with the discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
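In its simplest (unweighted, natural-order) form, the generalized Walsh-Hadamard transform mentioned above reduces to the standard fast Walsh-Hadamard transform. A textbook radix-2 butterfly sketch follows; it illustrates the transform only, not the paper's polyphase demultiplexer.

```python
def fwht(x):
    """Fast Walsh-Hadamard transform, natural (Hadamard) order.
    Input length must be a power of two; cost is O(n log n), like the FFT,
    but the butterflies need only additions and subtractions."""
    x = list(x)                       # work on a copy
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))  # [4, 2, 0, -2, 0, 2, 0, 2]
```

Applying the transform twice returns the input scaled by n, since the Hadamard matrix satisfies H·H = n·I; this makes the inverse transform essentially free.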
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes bad; after a bad initialization, the k-means algorithm can easily reach a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
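A compact sketch of the underlying global k-means idea on 1-D toy data: start from one center (the data mean), then repeatedly add a center by trying every data point as the seed for the new center and keeping the k-means result with the lowest error. The singleton-elimination and MinMax modifications proposed in the paper are not included.

```python
def kmeans(points, centers, iters=50):
    """Standard Lloyd iterations on 1-D data; returns (centers, total error)."""
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # empty clusters keep their old center
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    err = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, err

def global_kmeans(points, k):
    """Add one center at a time; try every point as the new center's seed."""
    centers = [sum(points) / len(points)]          # k = 1: the global mean
    for _ in range(2, k + 1):
        best = None
        for seed in points:
            cand, err = kmeans(points, centers + [seed])
            if best is None or err < best[1]:
                best = (cand, err)
        centers = best[0]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(global_kmeans(data, 2))  # two centers near 1.0 and 10.0
```

Because every point is tried as the seed for each new center, the procedure is deterministic: no random restarts are needed, which is exactly the property the abstract contrasts with plain k-means.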
Algorithms versus architectures for computational chemistry
NASA Technical Reports Server (NTRS)
Partridge, H.; Bauschlicher, C. W., Jr.
1986-01-01
The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithm on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.
A hybrid algorithm with GA and DAEM
NASA Astrophysics Data System (ADS)
Wan, HongJie; Deng, HaoJiang; Wang, XueWei
2013-03-01
Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it has the problem of being trapped by local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve solution quality. The population-based search of the genetic algorithm produces different solutions and thus increases the search space of DAEM. Therefore, the proposed algorithm reaches better solutions than DAEM alone: it retains the properties of DAEM and obtains better solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.
MM Algorithms for Some Discrete Multivariate Distributions.
Zhou, Hua; Lange, Kenneth
2010-09-01
The MM (minorization-maximization) principle is a versatile tool for constructing optimization algorithms. Every EM algorithm is an MM algorithm but not vice versa. This article derives MM algorithms for maximum likelihood estimation with discrete multivariate distributions such as the Dirichlet-multinomial and Connor-Mosimann distributions, the Neerchal-Morel distribution, the negative-multinomial distribution, certain distributions on partitions, and zero-truncated and zero-inflated distributions. These MM algorithms increase the likelihood at each iteration and reliably converge to the maximum from well-chosen initial values. Because they involve no matrix inversion, the algorithms are especially pertinent to high-dimensional problems. To illustrate the performance of the MM algorithms, we compare them to Newton's method on data used to classify handwritten digits.
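A one-dimensional illustration of the MM principle described above: to minimize f(x) = Σ|x − a_i| (whose minimizer is the median), each |x − a_i| term is majorized by a quadratic at the current iterate, and minimizing the surrogate gives a weighted-mean update. This toy is our own, not one of the paper's discrete multivariate examples, but it shows the monotone-descent character the abstract highlights.

```python
def mm_median(a, x0=0.0, iters=100, eps=1e-9):
    """MM iteration for minimizing sum_i |x - a_i|.
    Majorizer at x_t: |x - a| <= (x - a)^2 / (2|x_t - a|) + |x_t - a| / 2,
    so each step solves a weighted least-squares problem and the objective
    never increases (the defining MM property)."""
    x = x0
    for _ in range(iters):
        w = [1.0 / max(abs(x - ai), eps) for ai in a]   # eps guards division by zero
        x = sum(wi * ai for wi, ai in zip(w, a)) / sum(w)
    return x

print(round(mm_median([1.0, 2.0, 7.0]), 3))  # ≈ 2.0, the median
```

As with the MM algorithms in the article, no matrix inversion is required; each update is a closed-form minimization of the surrogate.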
Optimal Multistage Algorithm for Adjoint Computation
Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves
2016-01-01
We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.
A compilation of jet finding algorithms
Flaugher, B.; Meier, K.
1992-12-31
Technical descriptions of jet finding algorithms currently in use in p-pbar collider experiments (CDF, UA1, UA2), e+e- experiments and Monte Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E_T and P_T of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five incarnations of this approach have been described.
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied to computational problems that incorporate path-based constraints, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and it can be extended to a broad class of shortest-path problems.
ALFA: Automated Line Fitting Algorithm
NASA Astrophysics Data System (ADS)
Wesson, R.
2015-12-01
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta
2014-10-01
An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective, and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis was developed based on the intensity changes of the fundus reflex.
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing in order to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
Sensor network algorithms and applications.
Trigoni, Niki; Krishnamachari, Bhaskar
2012-01-13
A sensor network is a collection of nodes with processing, communication and sensing capabilities deployed in an area of interest to perform a monitoring task. There has now been about a decade of very active research in the area of sensor networks, with significant accomplishments made in terms of both designing novel algorithms and building exciting new sensing applications. This Theme Issue provides a broad sampling of the central challenges and the contributions that have been made towards addressing these challenges in the field, and illustrates the pervasive and central role of sensor networks in monitoring human activities and the environment.
Cluster Algorithm Special Purpose Processor
NASA Astrophysics Data System (ADS)
Talapov, A. L.; Shchur, L. N.; Andreichenko, V. B.; Dotsenko, Vl. S.
We describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for a pure Ising model and for Ising model with random bonds. Its data also agree with the Nishimori exact results for spin glass. Only minor changes of the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.
The Complexity of Parallel Algorithms,
1985-11-01
Much of this work was done in collaboration with my advisor, Ernst Mayr. He was also supported in part by ONR contract N00014-85-C-0731. ... Helmbold and Mayr in their algorithm to compute an optimal two-processor schedule [HM2]. One of the promising developments in parallel algorithms is that ... can be solved by a fast parallel algorithm if the numbers are small. Helmbold and Mayr [HM1] have shown that if the job times are
An efficient parallel termination detection algorithm
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
Ligand Identification Scoring Algorithm (LISA)
Zheng, Zheng; Merz, Kenneth M.
2011-01-01
A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparing with other well known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. Artificial Neural Network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
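The linear-model fitting step described in the LISA abstract above can be illustrated with ordinary least squares on invented data. The two "energy term" columns and their weights below are made up and merely stand in for the many LISA interaction terms (VDW, H-bonding, desolvation, chelation); this is not the published parameterization.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_linear(X, y):
    """Least squares via the normal equations: w solves (X^T X) w = X^T y."""
    n, d = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)] for i in range(d)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
    return solve(XtX, Xty)

# Toy complexes: columns = [VDW-contact term, H-bond term]; target = binding score.
# The true weights (0.5, 1.0) are recovered exactly because the data are noise-free.
X = [[2.0, 1.0], [4.0, 0.0], [1.0, 3.0], [3.0, 2.0]]
y = [0.5 * a + 1.0 * b for a, b in X]
print(fit_linear(X, y))  # ≈ [0.5, 1.0]
```

Fitting against a training set of measured affinities, as LISA does with its 492 complexes, is exactly this least-squares problem at a larger scale, with one weight per interaction term and atom-type pair.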
The Aquarius Salinity Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David
2012-01-01
The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.
Filter selection using genetic algorithms
NASA Astrophysics Data System (ADS)
Patel, Devesh
1996-03-01
Convolution operators act as matched filters for certain types of variations found in images and have been extensively used in image analysis. However, filtering through a bank of N filters generates N filtered images, considerably increasing the amount of data. Moreover, not all of these filters have the same discriminatory capability for a given image, making the task of any classifier difficult. In this paper, we use genetic algorithms to select a subset of relevant filters. Genetic algorithms represent a class of adaptive search techniques whose processes are similar to natural selection in biological evolution. The steady-state model (GENITOR) has been used in this paper. The reduction of filters improves the performance of the classifier (which in this paper is the multi-layer perceptron neural network) and furthermore reduces the computational requirement. In this study we use the Laws filters, which were proposed for the analysis of texture images. Our aim is to recognize the different textures in the images using the reduced filter set.
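The Laws texture filter bank referenced in the abstract above is conventionally built from outer products of four classic 1-D kernels, giving a 16-mask bank from which the genetic algorithm then selects a subset. A small sketch of the mask construction (the GA-based selection itself is not shown):

```python
# Classic 5-tap Laws kernels: Level, Edge, Spot, Ripple.
KERNELS = {
    "L5": [1, 4, 6, 4, 1],      # level (local average)
    "E5": [-1, -2, 0, 2, 1],    # edge
    "S5": [-1, 0, 2, 0, -1],    # spot
    "R5": [1, -4, 6, -4, 1],    # ripple
}

def outer(u, v):
    """Outer product of two 1-D kernels -> one 5x5 convolution mask."""
    return [[a * b for b in v] for a in u]

# 16 masks in total; e.g. L5E5 pairs vertical averaging with horizontal
# edge detection.
masks = {f"{n1}{n2}": outer(k1, k2)
         for n1, k1 in KERNELS.items()
         for n2, k2 in KERNELS.items()}

print(len(masks))        # 16
print(masks["L5E5"][0])  # first row: [-1, -2, 0, 2, 1]
```

Convolving an image with each selected mask yields one texture-energy channel per mask, which is the feature set a classifier (here, a multi-layer perceptron) would consume.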
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem in order to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Ligand Identification Scoring Algorithm (LISA).
Zheng, Zheng; Merz, Kenneth M
2011-06-27
A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects, and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions, and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate the ability of LISA to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms.
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
Liu, Chenlin; Wu, Guangting; Huang, Xiaohang; Liu, Shenghao; Cong, Bailin
2012-05-01
Antarctic ice alga Chlamydomonas sp. ICE-L can endure extreme low temperature and high salinity stress under freezing conditions. To elucidate the molecular acclimation mechanisms using gene expression analysis, the expression stabilities of ten housekeeping genes of Chlamydomonas sp. ICE-L during freezing stress were analyzed. Some discrepancies were detected in the ranking of the candidate reference genes between the geNorm and NormFinder programs, but there was substantial agreement between the groups of genes with the most and the least stable expression. RPL19 was ranked as the best candidate reference gene. Pairwise variation (V) analysis indicated that the combination of two reference genes was sufficient for qRT-PCR data normalization under the experimental conditions. Considering the co-regulation between RPL19 and RPL32 (the most stable gene pair given by the geNorm program), we propose that the mean data rendered by RPL19 and GAPDH (the most stable gene pair given by the NormFinder program) be used to normalize gene expression values in Chlamydomonas sp. ICE-L more accurately. The example of FAD3 gene expression calculation demonstrated the importance of selecting an appropriate category and number of reference genes to achieve an accurate and reliable normalization of gene expression during freeze acclimation in Chlamydomonas sp. ICE-L.
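The geNorm stability measure behind such rankings can be sketched as follows: for each candidate gene, M is the average standard deviation of its pairwise log2 expression ratios against every other candidate, across samples, with lower M meaning more stable. This is a minimal reading of the published method; the gene names and values in the test are placeholders:

```python
import math
import statistics

def genorm_m(expr):
    """geNorm-style stability measure M. `expr` maps gene name to a list
    of expression values, with the same sample order for every gene."""
    m = {}
    for g, xs in expr.items():
        sds = []
        for h, ys in expr.items():
            if h == g:
                continue
            # Standard deviation of the log2 ratio across samples:
            # constant ratio (co-regulated stable pair) gives sd = 0.
            ratios = [math.log2(x / y) for x, y in zip(xs, ys)]
            sds.append(statistics.stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m
```

A gene pair whose ratio is constant across all samples (like the co-regulated RPL19/RPL32 pair discussed above) contributes zero to each other's M, which is exactly why geNorm can over-rank co-regulated pairs.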
Reuther, Sebastian; Reiter, Martina; Raabe, Annette; Dikomey, Ekkehard
2013-11-01
The aim of this study was to determine the effects of ionizing radiation on gene expression by using, for the first time, a qPCR platform specifically established for the detection of 94 DNA repair genes, and also to test the robustness of these results by using three analytical methods (global pattern recognition, ΔΔCq/NormFinder and ΔΔCq/geNorm). The study focused on these genes because DNA repair is known primarily to determine the radiation response. Six strains of normal human fibroblasts were exposed to 2 Gy, and changes in gene expression were analyzed 24 h thereafter. A significant change in gene expression was found for only a few genes, but the genes detected were mostly different for the three analytical methods used. For GPR, a significant change was found for four genes, in contrast to the eight or nine genes found when applying ΔΔCq/geNorm or ΔΔCq/NormFinder, respectively. When using all three methods, a significant change in expression was seen only for GADD45A and PCNA. These data demonstrate that (1) the genes identified as showing an altered expression upon irradiation strongly depend on the analytical method applied, and that (2) overall, GADD45A and PCNA appear to play a central role in this response, while no significant change is induced for any of the other DNA repair genes tested.
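The ΔΔCq quantity referred to above can be sketched as follows, assuming ideal (100%) amplification efficiency, i.e. a doubling of product per cycle; the Cq values in the test are illustrative:

```python
def ddcq_fold_change(cq_goi_treat, cq_ref_treat, cq_goi_ctrl, cq_ref_ctrl):
    """Relative expression by the ΔΔCq method: each ΔCq normalises the
    gene-of-interest Cq to the reference gene's Cq in the same sample,
    and the fold change is 2**-ΔΔCq (assumes ~100% PCR efficiency)."""
    d_treat = cq_goi_treat - cq_ref_treat   # ΔCq, treated (irradiated) sample
    d_ctrl = cq_goi_ctrl - cq_ref_ctrl      # ΔCq, control sample
    return 2.0 ** -(d_treat - d_ctrl)
```

Because the whole calculation hinges on the reference gene's Cq, an unstable reference gene shifts every fold change, which is why the choice of normalisation method changes which genes appear significant.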
Nakayama, T J; Rodrigues, F A; Neumaier, N; Marcelino-Guimarães, F C; Farias, J R B; de Oliveira, M C N; Borém, A; de Oliveira, A C B; Emygdio, B M; Nepomuceno, A L
2014-02-13
Quantitative real-time polymerase chain reaction (RT-qPCR) is a powerful tool used to measure gene expression. However, because of its high sensitivity, the method is strongly influenced by the quality and concentration of the template cDNA and by the amplification efficiency. Relative quantification is an effective strategy for correcting random and systematic errors by using the expression level of reference gene(s) to normalize the expression level of the genes of interest. To identify soybean reference genes for use in studies of flooding stress, we compared 5 candidate reference genes (CRGs) with the NormFinder and GeNorm programs to select the best internal control. The expression stability of the CRGs was evaluated in root tissues from soybean plants subjected to hypoxic conditions. Elongation factor 1-beta and actin-11 were identified as the most appropriate genes for RT-qPCR normalization by both the NormFinder and GeNorm analyses. The expression profiles of the genes for alcohol dehydrogenase 1, sucrose synthase 4, and ascorbate peroxidase 2 were analyzed by comparing different normalizing combinations (including no normalization) of the selected reference genes. Here, we have identified potential genes for use as references for RT-qPCR normalization in experiments with soybean roots growing in O2-depleted environments, such as flooding-stressed plants.
Validation of housekeeping genes for studying differential gene expression in the bovine myometrium.
Rekawiecki, Robert; Kowalik, Magdalena K; Kotwica, Jan
2013-12-01
The aim of this study was to determine the steady-state expression of 13 selected housekeeping genes in the myometrium of cyclic and pregnant cows. Cells taken from bovine myometrium on days 1-5, 6-10, 11-16 and 17-20 of the oestrous cycle and in weeks 3-5, 6-8 and 9-12 of pregnancy were used. Reverse transcribed RNA was amplified in real-time PCR using designed primers. Reaction efficiency was determined with the Linreg programme. The geNorm and NormFinder programmes were used to select the best housekeeping genes. Both programmes calculate an expression stability factor for each housekeeping gene, with the smallest values indicating the most stably expressed genes. According to geNorm, the most stable housekeeping genes in the myometrium were C2orf29, TBP and TUBB2B, while the least stably expressed genes were 18S RNA, HPRT1 and GAPDH. NormFinder identified the best genes in the myometrium as C2orf29, MRPL12 and TBP, while the worst genes were 18S RNA, B2M and SF3A1. Differences in stability factors between the two programmes may also indicate that the physiological status of the female, e.g. pregnancy, affects the stability of expression of housekeeping genes. The different expression stability of housekeeping genes did not affect progesterone receptor expression, but it could be important if small differences in gene expression were measured between studies.
Borowska, D; Rothwell, L; Bailey, R A; Watson, K; Kaiser, P
2016-02-01
Quantitative polymerase chain reaction (qPCR) is a powerful technique for quantification of gene expression, especially genes involved in immune responses. Although qPCR is a very efficient and sensitive tool, variations in the enzymatic efficiency, quality of RNA and the presence of inhibitors can lead to errors. Therefore, qPCR needs to be normalised to obtain reliable results and allow comparison. The most common approach is to use reference genes as internal controls in qPCR analyses. In this study, expression of seven genes, including β-actin (ACTB), β-2-microglobulin (B2M), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), β-glucuronidase (GUSB), TATA box binding protein (TBP), α-tubulin (TUBAT) and 28S ribosomal RNA (r28S), was determined in cells isolated from chicken lymphoid tissues and stimulated with three different mitogens. The stability of the genes was measured using geNorm, NormFinder and BestKeeper software. The results from both geNorm and NormFinder were that the three most stably expressed genes in this panel were TBP, GAPDH and r28S. BestKeeper did not generate clear answers because of the highly heterogeneous sample set. Based on these data we will include TBP in future qPCR normalisation. The study shows the importance of validating appropriate reference genes for normalisation in each tissue before qPCR analysis.
A Probabilistic Cell Tracking Algorithm
NASA Astrophysics Data System (ADS)
Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah
2013-04-01
The research described below was carried out during the EU-Project Lolight - development of a low-cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide a higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method minimizing a cost function is utilized, which leads to a probabilistic forecast for the movement of thunderstorm cells. In the first step, the correlation coefficients between the first and the second density field are computed. To this end, the first field is shifted by all shifting vectors which are physically allowed. The maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the difference in time between both density fields. To eliminate ambiguities in the determination of directions and velocities, the so-called Random Walker of the Monte Carlo process is used. Using this method, a grid point is selected at random. Moreover, one vector out of all predefined shifting vectors is suggested - also at random, but with a probability that is related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted. Otherwise it is discarded. This process is repeated until the change in the cost function falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted. For this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
Visualizing output for a data learning algorithm
NASA Astrophysics Data System (ADS)
Carson, Daniel; Graham, James; Ternovskiy, Igor
2016-05-01
This paper details the process we went through to visualize the output for our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm would be traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of already existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included and the initial results we obtained from our algorithm running a few of the traffic-based scenarios we designed.
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed, starting from a common specification of the algorithm.
A boundary finding algorithm and its applications
NASA Technical Reports Server (NTRS)
Gupta, J. N.; Wintz, P. A.
1975-01-01
An algorithm for locating gray level and/or texture edges in digitized pictures is presented. The algorithm is based on the concept of hypothesis testing. The digitized picture is first subdivided into subsets of picture elements, e.g., 2 x 2 arrays. The algorithm then compares the first- and second-order statistics of adjacent subsets; adjacent subsets having similar first- and/or second-order statistics are merged into blobs. By continuing this process, the entire picture is segmented into blobs such that the picture elements within each blob have similar characteristics. The boundaries between the blobs constitute the detected edges. The algorithm always generates closed boundaries. The algorithm was developed for multispectral imagery of the earth's surface. Applications of this algorithm to various image processing techniques such as efficient coding, information extraction (terrain classification), and pattern recognition (feature selection) are included.
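A minimal sketch of the merge step, assuming 2 x 2 blocks and simple tolerance tests on mean and variance in place of the paper's formal hypothesis tests, and merging blocks pairwise on their initial statistics rather than iteratively re-estimating blob statistics as the paper does:

```python
import statistics

def block_stats(image, block):
    """First- and second-order statistics (mean, variance) per block."""
    h, w = len(image), len(image[0])
    stats = {}
    for bi in range(h // block):
        for bj in range(w // block):
            vals = [image[bi * block + i][bj * block + j]
                    for i in range(block) for j in range(block)]
            stats[(bi, bj)] = (statistics.mean(vals), statistics.pvariance(vals))
    return stats

def merge_blobs(image, block=2, mean_tol=10.0, var_tol=50.0):
    """Union-find merge of 4-adjacent blocks with similar statistics;
    returns a label per block, so blob boundaries fall between labels."""
    stats = block_stats(image, block)
    parent = {b: b for b in stats}
    def find(b):
        while parent[b] != b:
            parent[b] = parent[parent[b]]   # path compression
            b = parent[b]
        return b
    for (bi, bj), (m, v) in stats.items():
        for nb in ((bi + 1, bj), (bi, bj + 1)):          # 4-adjacent neighbours
            if nb in stats:
                nm, nv = stats[nb]
                if abs(m - nm) <= mean_tol and abs(v - nv) <= var_tol:
                    parent[find(nb)] = find((bi, bj))    # merge into one blob
    return {b: find(b) for b in stats}
```

Because every block gets some blob label, the boundaries between differently labeled blobs are necessarily closed curves, mirroring the closed-boundary property noted in the abstract.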
ENAS-RIF algorithm for image restoration
NASA Astrophysics Data System (ADS)
Yang, Yang; Yang, Zhen-wen; Shen, Tian-shuang; Chen, Bo
2012-11-01
Images of objects acquired by space-based systems operating in an atmospheric turbulence environment, such as those used in astronomy and remote sensing, are seriously blurred, and restoration is required to reconstruct the turbulence-degraded images. In order to enhance the performance of image restoration, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm was presented, which was based on a reliable support region and an enhanced cost function. Firstly, the Curvelet denoising algorithm was used to weaken image noise. Secondly, reliable object support region estimation was used to accelerate the algorithm convergence. Then, the average gray level was set as the gray level of the image background pixels. Finally, an object construction limit and the logarithm function were added to enhance algorithm stability. The experimental results prove that the convergence speed of the novel ENAS-RIF algorithm is faster than that of the NAS-RIF algorithm and that it is better at image restoration.
Tilted cone beam VCT reconstruction algorithm
NASA Astrophysics Data System (ADS)
Hsieh, Jiang; Tang, Xiangyang
2005-04-01
Reconstruction algorithms for volumetric CT have been the focus of many studies. Several exact and approximate reconstruction algorithms have been proposed for step-and-shoot and helical scanning trajectories to combat cone beam related artifacts. In this paper, we present a closed form cone beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed to compensate for errors induced by the gantry tilt, none of the algorithms addresses the case in which the cone beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. Because of the rebinning process, the amount of iso-center adjustment depends not only on the projection angle and tilt angle, but also on the reconstructed pixel location. The proposed algorithm has been tested extensively on both 16 and 64 slice VCT with phantoms and clinical data. The efficacy of the algorithm is clearly demonstrated by the experiments.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
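The weighted sum scalarization mentioned above can be sketched as follows: each convex weight turns the MO problem into a single-objective one, and sweeping the weights traces an approximation of the Pareto front. For simplicity this sketch minimises over a finite candidate set rather than with a metaheuristic such as DE or PSO:

```python
def weighted_sum_front(f1, f2, candidates, n_weights=11):
    """Sweep w in [0, 1], minimise w*f1 + (1-w)*f2 over the candidates,
    and collect the distinct scalar optima as a Pareto-front estimate."""
    front = []
    for i in range(n_weights):
        w = i / (n_weights - 1)
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        if best not in front:
            front.append(best)
    return front
```

The known weakness this illustrates is structural: weighted sums only recover points on the convex part of the front, and the number of weight combinations to sweep explodes as objectives multiply, which is one of the challenges the paper's framework targets.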
Adaptive link selection algorithms for distributed estimation
NASA Astrophysics Data System (ADS)
Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent
2015-12-01
This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
Anaphora Resolution Algorithm for Sanskrit
NASA Astrophysics Data System (ADS)
Pralayankar, Pravin; Devi, Sobha Lalitha
This paper presents an algorithm which identifies different types of pronominals and their antecedents in Sanskrit, an Indo-European language. The computational grammar implemented here uses very familiar concepts such as clause, subject, object etc., which are identified with the help of morphological information and concepts such as precede and follow. It is well known that natural languages contain anaphoric expressions, gaps and elliptical constructions of various kinds and that understanding of natural languages involves assignment of interpretations to these elements. Therefore, it is only to be expected that natural language understanding systems must have the necessary mechanism to resolve the same. The method we adopt here for resolving the anaphors is to exploit the morphological richness of the language. The system gives encouraging results when tested with a small corpus.
Improved Heat-Stress Algorithm
NASA Technical Reports Server (NTRS)
Teets, Edward H., Jr.; Fehn, Steven
2007-01-01
NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
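The quantity the three thermometers feed can be sketched as below. This is the standard outdoor WBGT weighting (ISO 7243), not the article's site-specific estimation scheme, which derives the wet-bulb and globe inputs from routine weather-station measurements:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Outdoor wet-bulb globe temperature: the natural wet-bulb term
    dominates (humidity), the black-globe term captures radiant load,
    and the dry-bulb term contributes the remainder."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb
```

The article's contribution sits upstream of this formula: estimating realistic wet-bulb and globe temperatures for a remote site so that the composition can be evaluated without a dedicated monitoring instrument there.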
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.
LDRD Report: Scheduling Irregular Algorithms
Boman, Erik G.
2014-10-01
This LDRD project was a campus exec fellowship to fund (in part) Donald Nguyen’s PhD research at UT-Austin. His work has focused on parallel programming models, and scheduling irregular algorithms on shared-memory systems using the Galois framework. Galois provides a simple but powerful way for users and applications to automatically obtain good parallel performance using certain supported data containers. The naïve user can write serial code, while advanced users can optimize performance through advanced features, such as specifying the scheduling policy. Galois was used to parallelize two sparse matrix reordering schemes: RCM and Sloan. Such reordering is important in high-performance computing to obtain better data locality and thus reduce run times.
Online Planning Algorithms for POMDPs
Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim
2009-01-01
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
[A simple algorithm for anemia].
Egyed, Miklós
2014-03-09
The author presents a novel algorithm for anaemia based on the erythrocyte haemoglobin content. The scheme is based on the aberrations of erythropoiesis and not on the pathophysiology of anaemia. The haemoglobin content of one erythrocyte is between 28 and 35 picograms. Any disturbance in haemoglobin synthesis can lead to an erythrocyte haemoglobin content lower than 28 picograms, which will lead to hypochromic anaemia. By contrast, disturbances of nucleic acid metabolism will result in a haemoglobin content greater than 36 picograms, and this will result in hyperchromic anaemia. Normochromic anaemia, characterised by an erythrocyte haemoglobin content between 28 and 35 picograms, is the result of alterations in the proliferation of erythropoiesis. Based on these three categories of anaemia, a unique system can be constructed, which can be used as a model for basic laboratory investigations and the work-up of anaemic patients.
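The three categories map directly onto a threshold rule over the erythrocyte haemoglobin content. A minimal sketch follows; note the abstract leaves the 35 to 36 pg interval unassigned, and this sketch resolves that by treating everything above 35 pg that is not normochromic as hyperchromic:

```python
def classify_anaemia(mch_pg):
    """Classify anaemia by mean erythrocyte haemoglobin content in
    picograms, following the thresholds given in the abstract:
    <28 pg  hypochromic   (haemoglobin-synthesis disturbance)
    28-35   normochromic  (proliferation disturbance)
    >35 pg  hyperchromic  (nucleic acid metabolism disturbance)"""
    if mch_pg < 28:
        return "hypochromic"
    if mch_pg <= 35:
        return "normochromic"
    return "hyperchromic"
```

The appeal of the scheme is exactly this simplicity: one routinely measured red-cell index routes the laboratory work-up toward one of three mechanistic categories.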
Novel MRC algorithms using GPGPU
NASA Astrophysics Data System (ADS)
Kato, Kokoro; Taniguchi, Yoshiyuki; Inoue, Tadao; Kadota, Kazuya
2012-06-01
GPGPU (General Purpose Graphic Processor Unit) computing has been attracting many engineers and scientists who develop their own software for massive numerical computation. With hundreds of core processors and tens of thousands of threads operating concurrently, GPGPU programs can run very fast if their software architecture is well optimized. The basic program model used in GPGPU is SIMD (Single Instruction Multiple Data stream), and one must adapt one's programming model to SIMD. However, conditional branching is fundamentally not allowed in SIMD, and this limitation makes it quite challenging to apply GPGPU to photomask-related software such as MDP or MRC. In this paper, unique methods are proposed to utilize the GPU for MRC operation. We explain novel algorithms for mask layout verification by GPGPU.
Algorithms for tensor network renormalization
NASA Astrophysics Data System (ADS)
Evenbly, G.
2017-01-01
We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. First, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.
Innovative algorithm for cast detection
NASA Astrophysics Data System (ADS)
Gasparini, Francesca; Schettini, Raimondo; Gallina, Paolo
2001-12-01
The paper describes a method for detecting a color cast (i.e. a superimposed dominant color) in a digital image without any a priori knowledge of its semantic content. The color gamut of the image is first mapped into the CIELab color space. The color distribution of the whole image and of the so-called Near Neutral Objects (NNO) is then investigated using statistical tools to determine the presence of a cast. The boundaries of the near neutral objects in the color space are set adaptively by the algorithm on the basis of a preliminary analysis of the image color gamut. The method we propose has been tuned and successfully tested on a large data set of images, downloaded from personal web pages or acquired using various digital and traditional cameras.
Evolving evolutionary algorithms using linear genetic programming.
Oltean, Mihai
2005-01-01
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches for several well-known benchmarking problems.
Robustness of Tree Extraction Algorithms from LIDAR
NASA Astrophysics Data System (ADS)
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with the majority of parameter sets required by the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
MRCK_3D contact detonation algorithm
Rougier, Esteban; Munjiza, Antonio
2010-01-01
Large-scale Combined Finite-Discrete Element Method (FEM-DEM) and Discrete Element Method (DEM) simulations involving contact of a large number of separate bodies need an efficient, robust and flexible contact detection algorithm. In this work the MRCK_3D search algorithm is outlined and its main CPU performances are evaluated. One of the most important aspects of this newly developed search algorithm is that it is applicable to systems consisting of many bodies of different shapes and sizes.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Block subdivision algorithms are evaluated by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
A Traffic Motion Object Extraction Algorithm
NASA Astrophysics Data System (ADS)
Wu, Shaofei
2015-12-01
A motion object extraction algorithm based on the active contour model is proposed. First, moving areas involving shadows are segmented with the classical background difference algorithm. Second, shadow detection and coarse removal are performed, and a grid method is used to extract initial contours. Finally, the active contour model is adopted to compute the contour of the real object by iteratively tuning the parameter of the model. Experiments show the algorithm can remove the shadow and keep the integrity of a moving object.
Dynamic Shortest Path Algorithms for Hypergraphs
2012-01-01
We analyze the time complexity of the proposed algorithms and study their average performance through simulation experiments on both random geometric hypergraphs and a real social-network data set (the Enron email data set). The latter illustrates the application of the proposed algorithms in social networks for identifying ...
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
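As a concrete illustration of the scheme described above, the following sketch fits a Chebyshev series to one fitting interval and keeps only the low-order coefficients; the block size and degree are our choices for illustration, not the flight algorithm's parameters.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(samples, degree):
    # Fit a Chebyshev series over the fitting interval, mapped to [-1, 1].
    x = np.linspace(-1.0, 1.0, len(samples))
    return C.chebfit(x, samples, degree)

def decompress_block(coeffs, n_samples):
    # Reconstruct the block by evaluating the Chebyshev series.
    x = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(x, coeffs)

t = np.linspace(0.0, 1.0, 256)
block = np.sin(2 * np.pi * 3.0 * t) + 0.2 * t   # one fitting interval of data
coeffs = compress_block(block, degree=15)        # 256 samples -> 16 coefficients
rec = decompress_block(coeffs, len(block))
```

Here 256 samples are represented by 16 coefficients, a compression factor of 16, while the near-uniform distribution of the fit error over the interval reflects the equal-error property mentioned in the abstract.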
Angelic Hierarchical Planning: Optimal and Online Algorithms
2008-12-06
We describe an alternative "satisficing" algorithm, Angelic Hierarchical Satisficing Search (AHSS), which attempts to find a plan that reaches the goal with at most some pre-specified cost α. Our abstract lookahead tree (ALT) data structures support our search algorithms efficiently ... AHSS can be much more efficient than AHA*, since it can commit to a plan without first proving its optimality. At each step, AHSS (see Algorithm 3) begins by ...
Genetic Algorithms Viewed as Anticipatory Systems
NASA Astrophysics Data System (ADS)
Mocanu, Irina; Kalisz, Eugenia; Negreanu, Lorina
2010-11-01
This paper proposes a new version of genetic algorithms: the anticipatory genetic algorithm (AGA). The performance evaluation included in the paper shows that AGA is superior to the traditional genetic algorithm in terms of both speed and accuracy. The paper also presents how this algorithm can be applied to solve a complex problem: image annotation, intended to be used in content-based image retrieval systems.
Advanced spectral signature discrimination algorithm
NASA Astrophysics Data System (ADS)
Chakravarty, Sumit; Cao, Wenjie; Samat, Alim
2013-05-01
This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been widely studied, and many algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, and the binarized signatures are then compared using the Hamming distance. This framework has been extended in techniques like SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. Such structures, however, capture only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information by a technique similar to circular convolution. We consider the signature as cyclic by joining its two ends and create two copies of the spectral signature; these three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features as in SDFC can be used to study the local structural variation for each circular shift. Different measures can then be created by building histograms over the shifts and applying different information-extraction techniques to the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments show the viability of the proposed methods and their performance as compared to current binary signature coding techniques.
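One way to read the rotating-disc idea is sketched below; the per-shift similarity measure and the histogram binning here are our simplifying assumptions, not the paper's texture features.

```python
import numpy as np

def shift_histogram(signature, n_bins=16):
    # Treat the signature as cyclic and compare it with rotated copies of
    # itself, like the rotating discs of a combination lock; histogram the
    # per-shift similarity scores into a fixed-length feature vector.
    s = (signature - signature.mean()) / signature.std()
    n = len(s)
    scores = np.array([np.dot(s, np.roll(s, k)) / n for k in range(n)])
    hist, _ = np.histogram(scores, bins=n_bins, range=(-1.0, 1.0))
    return hist / hist.sum()
```

The histogram couples spectrally distant parts of the signature through the circular shifts, which is the covariation information the abstract argues purely local structures miss.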
Algorithms for Labeling Focus Regions.
Fink, M; Haunert, Jan-Henrik; Schulz, A; Spoerhase, J; Wolff, A
2012-12-01
In this paper, we investigate the problem of labeling point sites in focus regions of maps or diagrams. This problem occurs, for example, when the user of a mapping service wants to see the names of restaurants or other POIs in a crowded downtown area but keep the overview over a larger area. Our approach is to place the labels at the boundary of the focus region and connect each site with its label by a linear connection, which is called a leader. In this way, we move labels from the focus region to the less valuable context region surrounding it. In order to make the leader layout well readable, we present algorithms that rule out crossings between leaders and optimize other characteristics such as total leader length and distance between labels. This yields a new variant of the boundary labeling problem, which has been studied in the literature. Other than in traditional boundary labeling, where leaders are usually schematized polylines, we focus on leaders that are either straight-line segments or Bezier curves. Further, we present algorithms that, given the sites, find a position of the focus region that optimizes the above characteristics. We also consider a variant of the problem where we have more sites than space for labels. In this situation, we assume that the sites are prioritized by the user. Alternatively, we take a new facility-location perspective which yields a clustering of the sites. We label one representative of each cluster. If the user wishes, we apply our approach to the sites within a cluster, giving details on demand.
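For straight-line leaders, a classical observation is that an assignment of sites to boundary ports with minimum total leader length is crossing-free: two crossing leaders can always be swapped to give a total that is no longer. A small brute-force sketch of that observation (illustrative only, not the paper's algorithms):

```python
from itertools import permutations

def seg_cross(p1, p2, q1, q2):
    # Proper intersection test for two segments in general position.
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def label_sites(sites, ports):
    # Brute force over assignments: pick the straight-line leader layout of
    # minimum total length. For small instances this is feasible and the
    # result is crossing-free.
    best = min(permutations(range(len(ports))),
               key=lambda p: sum(dist(sites[i], ports[p[i]])
                                 for i in range(len(sites))))
    return [(sites[i], ports[best[i]]) for i in range(len(sites))]
```

Real instances need the polynomial-time algorithms the paper develops, but the exchange argument behind this sketch is what rules out crossings in the first place.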
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
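The flavor of the underlying GS iteration (the classic algorithm, not the authors' modified SPP or hybrid variants) can be sketched with FFTs; the error-reduction property guarantees that the Fourier-amplitude error never increases from one iteration to the next.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=50):
    # Classic GS: recover a phase consistent with known amplitudes in both
    # the object domain (source_amp) and the Fourier domain (target_amp).
    rng = np.random.default_rng(0)          # fixed random starting phase
    field = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, source_amp.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))          # impose Fourier amplitude
        field = np.fft.ifft2(F)
        field = source_amp * np.exp(1j * np.angle(field))  # impose object amplitude
    err = np.linalg.norm(np.abs(np.fft.fft2(field)) - target_amp)
    return field, err
```

Stagnation of exactly this iteration at a plateau of the error is the failure mode the paper's spatial phase perturbation and hybrid input-output combinations are designed to escape.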
Overview of an Algorithm Plugin Package (APP)
NASA Astrophysics Data System (ADS)
Linda, M.; Tilmes, C.; Fleig, A. J.
2004-12-01
Science software that runs operationally is fundamentally different from software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is largely similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.
One cutting plane algorithm using auxiliary functions
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Kazaeva, K. E.
2016-11-01
We propose an algorithm for solving a convex programming problem, belonging to the class of cutting methods. The algorithm is characterized by the construction of approximations using some auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm the admissible set and the epigraph of each auxiliary function are embedded into polyhedral sets. Consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.
Monte Carlo algorithm for free energy calculation.
Bi, Sheng; Tong, Ning-Hua
2015-07-01
We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
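The configuration-space sampling with a temperature scan can be sketched generically; the code below is a plain Metropolis sampler that carries the spin configuration through a downward scan, not the authors' free-energy estimator.

```python
import numpy as np

def temperature_scan(L, temps, sweeps, seed=1):
    # Downward temperature scan for the 2D Ising model (J = 1): the spin
    # configuration is carried from one temperature to the next, and the
    # energy per spin is recorded at each temperature.
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    energies = []
    for T in temps:
        for _ in range(sweeps):
            for _ in range(L * L):  # one Metropolis sweep
                i, j = rng.integers(0, L, size=2)
                nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                      + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                dE = 2 * s[i, j] * nb
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i, j] *= -1
        E = -np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1))
        energies.append(E / L**2)
    return energies
```

Turning such sampled energies along the scan into F(T) is the step the paper's algorithm addresses; this sketch only shows the scan machinery.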
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
A new frame-based registration algorithm
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
Solving Maximal Clique Problem through Genetic Algorithm
NASA Astrophysics Data System (ADS)
Rajawat, Shalini; Hemrajani, Naveen; Menghani, Ekta
2010-11-01
Genetic algorithm is one of the most interesting heuristic search techniques. It depends basically on three operations: selection, crossover and mutation. The outcome of the three operations is a new population for the next generation; these operations are repeated until the termination condition is reached. All the operations in the algorithm are accessible with today's molecular biotechnology. The simulations show that with this new computing algorithm it is possible to obtain a solution from a very small initial data pool, avoiding enumerating all candidate solutions. For randomly generated problems, the genetic algorithm can give a correct solution within a few cycles with high probability.
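A minimal in-silico GA skeleton for the problem (our toy illustration; the paper is concerned with molecular implementation): bitstrings encode vertex subsets, a repair step enforces cliqueness, and fitness is clique size.

```python
import random

def ga_max_clique(adj, pop_size=40, gens=60, seed=0):
    # Bitstring GA with selection, one-point crossover, mutation, and a
    # repair step that drops vertices until the selected set is a clique.
    rng = random.Random(seed)
    n = len(adj)

    def repair(bits):
        sel = [v for v in range(n) if bits[v]]
        rng.shuffle(sel)
        clique = []
        for v in sel:
            if all(adj[v][u] for u in clique):
                clique.append(v)
        return [1 if v in clique else 0 for v in range(n)]

    pop = [repair([rng.randint(0, 1) for _ in range(n)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sum, reverse=True)
        nxt = pop[:2]                                  # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < 0.3:                     # mutation
                k = rng.randrange(n)
                child[k] ^= 1
            nxt.append(repair(child))
        pop = nxt
    best = max(pop, key=sum)
    return [v for v in range(n) if best[v]]
```

The repair step guarantees every individual is a valid clique, so the GA only ever searches the feasible region, which is what lets a small initial pool avoid enumerating all candidate solutions.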
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
A simple greedy algorithm for reconstructing pedigrees.
Cowell, Robert G
2013-02-01
This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study.
Thermostat algorithm for generating target ensembles.
Bravetti, A; Tapias, D
2016-02-01
We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.
A parallel algorithm for global routing
NASA Technical Reports Server (NTRS)
Brouwer, Randall J.; Banerjee, Prithviraj
1990-01-01
A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.
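The constructive-induction idea can be made concrete with plain information gain (GALA's relative gain measure is more refined than this): on a hypothetical XOR concept, no primitive attribute carries any gain, but a constructed attribute does.

```python
import math
from itertools import product

def entropy(labels):
    n = len(labels)
    ps = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in ps)

def info_gain(rows, labels, attr):
    # attr maps a row to an attribute value; gain = H(labels) - H(labels | attr).
    total = entropy(labels)
    for v in {attr(r) for r in rows}:
        sub = [l for r, l in zip(rows, labels) if attr(r) == v]
        total -= len(sub) / len(rows) * entropy(sub)
    return total

# XOR-like concept: neither primitive attribute helps; their combination does.
rows = list(product([0, 1], repeat=2))
labels = [a ^ b for a, b in rows]
g0 = info_gain(rows, labels, lambda r: r[0])
g1 = info_gain(rows, labels, lambda r: r[1])
gx = info_gain(rows, labels, lambda r: r[0] ^ r[1])  # constructed attribute
```

A gain-guided learner would discard both primitives here; adding the constructed attribute as a preprocessing step is exactly what makes the concept learnable by standard algorithms.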
Improved algorithm for hyperspectral data dimension determination
NASA Astrophysics Data System (ADS)
CHEN, Jie; DU, Lei; LI, Jing; HAN, Yachao; GAO, Zihong
2017-02-01
The correlation between adjacent bands of hyperspectral image data is relatively strong, but signal coexists with noise. The HySime (hyperspectral signal identification by minimum error) algorithm, which is based on the principle of least squares, calculates an estimated noise value and an estimated signal correlation matrix. The algorithm is effective with an accurate noise value but ineffective with a noise estimate obtained from the spectral dimension reduction and de-correlation process. This paper proposes an improved HySime algorithm based on a noise whitening process. Instead of removing noise pixel by pixel, it first applies noise whitening to the original data, obtains an accurate estimate of the noise covariance matrix, and then uses the HySime algorithm to calculate the signal correlation matrix in order to improve the precision of the results. Experiments with both simulated and real data show that: firstly, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; secondly, its results have better consistency under different conditions compared with the classic noise subspace projection (NSP) algorithm; finally, the noise whitening process improves the adaptability to non-white image noise.
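The noise-whitening step itself can be sketched as a Cholesky transform (a generic sketch, not the paper's exact estimator): with noise covariance Σ = L Lᵀ, applying L⁻¹ to the data turns the noise covariance into the identity.

```python
import numpy as np

def whiten(data, noise_cov):
    # data: (n_pixels, n_bands); noise_cov: (n_bands, n_bands).
    # Whitening transform W = L^{-1} from the Cholesky factor
    # noise_cov = L L^T; the whitened noise covariance is the identity.
    Lf = np.linalg.cholesky(noise_cov)
    return np.linalg.solve(Lf, data.T).T
```

After whitening, white-noise assumptions built into a subsequent estimator (such as HySime's least-squares step) hold even when the original image noise is colored.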
Introduction to Cluster Monte Carlo Algorithms
NASA Astrophysics Data System (ADS)
Luijten, E.
This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
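A compact sketch of the Wolff single-cluster update mentioned above, in its standard textbook form for the 2D Ising model (lattice size and temperature in the usage below are our choices):

```python
import numpy as np

def wolff_step(s, beta, rng):
    # One Wolff single-cluster update for the 2D Ising model (J = 1):
    # grow a cluster of aligned spins with bond probability 1 - exp(-2*beta),
    # then flip the whole cluster at once.
    L = s.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(0, L, size=2)
    seed_spin = s[i, j]
    cluster = {(i, j)}
    frontier = [(i, j)]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                       (x, (y + 1) % L), (x, (y - 1) % L)):
            if ((nx, ny) not in cluster and s[nx, ny] == seed_spin
                    and rng.random() < p_add):
                cluster.add((nx, ny))
                frontier.append((nx, ny))
    for x, y in cluster:
        s[x, y] = -seed_spin
    return len(cluster)
```

Because entire correlated regions flip in a single move, successive configurations decorrelate far faster near criticality than under single-spin Metropolis updates, which is the main point of the cluster approach.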
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields; for binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is NP-hard, so in general we have to use approximate methods to construct practical learning algorithms. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we show the validity of our algorithm using numerical experiments.
NASA Astrophysics Data System (ADS)
Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji
2011-03-01
There are a number of reports on seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and place of earthquakes when investigating seismicity pattern changes before large earthquakes. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan, namely, the Hyogo-ken Nanbu earthquake in 1995 (MJMA 7.3), the Noto Hanto earthquake in 2007 (MJMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (MJMA 7.2), as test cases. Because this algorithm uses several parameters to characterize the weighted coefficients, multiparameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this moment, all surveyed parameters are empirically selected for use in the method. We have to consider the physical meaning of the "best fit" parameters, such as the relation to ACFS, among others, in future analyses.
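For orientation, a simplified form of the RTL-style weighting is sketched below; the detrending against background seismicity, the exact rupture-length scaling, and the authors' RTM modification are all omitted, and every constant here is illustrative.

```python
import math

def rtl(x, y, t, events, r0=50.0, t0=1.0):
    # Simplified RTL-style statistic (no background detrending): each past
    # event (ex, ey, et, mag) contributes through its epicentral distance r,
    # elapsed time t - et, and a magnitude-dependent size term l.
    # The scaling l = 10 ** (0.5 * mag - 1.8) (km) is an assumed empirical
    # rupture-length relation, and r0, t0 are illustrative decay constants.
    R = T = L = 0.0
    for ex, ey, et, mag in events:
        if et >= t:
            continue
        r = math.hypot(x - ex, y - ey)
        l = 10 ** (0.5 * mag - 1.8)
        R += math.exp(-r / r0)
        T += math.exp(-(t - et) / t0)
        L += l / max(r, 1.0)
    return R * T * L
```

With this weighting, a period without recent nearby events drives the time factor, and hence the product, toward zero, which is how seismic quiescence shows up as a low RTL value.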
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, classifying context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm: we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and more simplicity. PMID:24688389
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
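The nonlinear gain algorithm described above scales aircraft inputs by a third-order polynomial so that motion cues stay within the operational limits of the motion system. A minimal sketch of one such cubic soft-saturation gain follows; the boundary conditions chosen here (zero slope at the limit) and all parameter values are assumptions for illustration, not the coefficients used in the simulator study.

```python
def cubic_gain(u, u_max, y_max):
    """Third-order polynomial scaling y = a*u + b*u**3, with a and b
    chosen so that y(u_max) = y_max and dy/du = 0 at u_max (soft
    saturation at the motion limit):
        a*u_max + b*u_max**3 = y_max
        a + 3*b*u_max**2     = 0
    """
    b = -y_max / (2 * u_max ** 3)
    a = 3 * y_max / (2 * u_max)
    u = max(-u_max, min(u_max, u))  # clamp inputs beyond the design range
    return a * u + b * u ** 3

# Small inputs pass nearly linearly; large inputs flatten at the limit.
print(cubic_gain(10.0, 10.0, 5.0))  # 5.0 exactly at the limit
```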
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamentals from this perspective. This paper is a first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
Algorithm for genome contig assembly. Final report
1995-09-01
An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.
QPSO-based adaptive DNA computing algorithm.
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing that is a new computation model based on DNA molecules for information storage has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO based on adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of DNA computing algorithm are simultaneously tuned for adaptive process, (2) adaptive algorithm is performed using QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) numerical realization of DNA computing algorithm with proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate ability to provide effective optimization, considerable convergence speed, and high accuracy according to DNA computing algorithm.
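The QPSO optimizer used above to tune the DNA computing parameters can itself be sketched compactly. The following is a minimal quantum-behaved PSO in the style of Sun et al., shown here minimizing a toy sphere function; the population size, contraction-expansion coefficient, and iteration count are illustrative assumptions, not the settings from the paper.

```python
import math
import random

random.seed(1)

def qpso(f, dim=2, n=20, iters=200, beta=0.75, lo=-5.0, hi=5.0):
    """Minimal quantum-behaved PSO: particles are drawn around an
    attractor between personal best and global best, with a jump
    scaled by the distance to the mean best position ('mbest')."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        # Mean of the personal bests (the "mainstream thought point").
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi = random.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - random.random()            # u in (0, 1]
                step = beta * abs(mbest[d] - x[d]) * math.log(1 / u)
                x[d] = p + step if random.random() < 0.5 else p - step
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

sphere = lambda v: sum(t * t for t in v)
best = qpso(sphere)
print(sphere(best))  # should be very close to 0
```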
Algorithms and the Teaching of Grammar.
ERIC Educational Resources Information Center
Edwards, K. Ffoulkes
1967-01-01
The construction of algorithms to present grammatical rules is advocated on the basis of clarity and ease of memorization. Algorithmic procedure is demonstrated for the introduction of subordinate clauses by conjunctions in German, and the formation of plural nouns in English. (AF)
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Explaining the Cross-Multiplication Algorithm
ERIC Educational Resources Information Center
Handa, Yuichi
2009-01-01
Many high-school mathematics teachers have likely been asked by a student, "Why does the cross-multiplication algorithm work?" It is a commonly used algorithm when dealing with proportion problems, conversion of units, or fractional linear equations. For most teachers, the explanation usually involves the idea of finding a common denominator--one…
A Quantum Algorithm for the Moebius Function
NASA Astrophysics Data System (ADS)
Love, Peter
We give an efficient quantum algorithm for computing the Moebius function from the natural numbers to {-1, 0, 1}. The cost of the algorithm is asymptotically quadratic in log n and does not require the computation of the prime factorization of n as an intermediate step.
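The quantum algorithm itself is not reproduced here, but a classical reference implementation of the Moebius function (via trial-division factorization, which the quantum algorithm notably avoids) is handy for checking small cases:

```python
def mobius(n):
    """Classical Moebius function: 0 if n has a squared prime factor,
    otherwise (-1)**(number of distinct prime factors of n)."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    factors = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor => mu(n) = 0
                return 0
            factors += 1
        else:
            d += 1
    if n > 1:                   # one remaining prime factor
        factors += 1
    return -1 if factors % 2 else 1

print([mobius(n) for n in range(1, 11)])
# [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```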
Force-Control Algorithm for Surface Sampling
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh
2008-01-01
A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.
Genetic algorithms and the immune system
Forrest, S. (Dept. of Computer Science); Perelson, A.S.
1990-01-01
Using genetic algorithm techniques we introduce a model to examine the hypothesis that antibody and T cell receptor genes evolved so as to encode the information needed to recognize schemas that characterize common pathogens. We have implemented the algorithm on the Connection Machine for 16,384 64-bit antigens and 512 64-bit antibodies. 8 refs.
The Porter Stemming Algorithm: Then and Now
ERIC Educational Resources Information Center
Willett, Peter
2006-01-01
Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
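The full Porter stemmer comprises several rule steps; to give a flavor of how it works, here is just step 1a (plural handling), using the canonical examples from Porter's paper:

```python
def porter_step_1a(word):
    """Step 1a of the Porter stemmer, which normalizes plural forms:
       SSES -> SS, IES -> I, SS -> SS (unchanged), S -> (removed)."""
    if word.endswith("sses"):
        return word[:-2]
    if word.endswith("ies"):
        return word[:-2]
    if word.endswith("ss"):
        return word
    if word.endswith("s"):
        return word[:-1]
    return word

for w in ["caresses", "ponies", "caress", "cats"]:
    print(w, "->", porter_step_1a(w))
# caresses -> caress, ponies -> poni, caress -> caress, cats -> cat
```

The later steps apply further suffix rules conditioned on the "measure" of the remaining stem, which is what keeps short words from being over-stemmed.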
Faster Algorithms on Branch and Clique Decompositions
NASA Astrophysics Data System (ADS)
Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin
We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^((ω/2)k)) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^((ω/2)k)). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in the iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
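The NNLS subproblem mentioned above can be illustrated without the full Lawson-Hanson active-set machinery. Below is a deliberately simple projected-gradient solver for min ||Ax - b||^2 subject to x >= 0; the step size, iteration count, and the tiny test system are assumptions chosen for illustration only.

```python
def nnls_pg(A, b, step=0.01, iters=5000):
    """Nonnegative least squares by projected gradient descent:
    minimize ||Ax - b||^2 subject to x >= 0.  (The Lawson-Hanson
    active-set method used in the paper is more efficient; this is
    just a compact illustration of the NNLS problem itself.)"""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = Ax - b and gradient g = 2 A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by projection onto the nonnegative orthant.
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]
    return x

A = [[1.0, 1.0], [0.0, 1.0]]
b = [3.0, -1.0]
print([round(v, 3) for v in nnls_pg(A, b)])  # [3.0, 0.0]: x2 pinned at 0
```

The unconstrained least-squares solution here would be (4, -1); the nonnegativity constraint pins the second component at zero, which is exactly the behavior an active-set method tracks explicitly.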
Synthesis of an algorithm for interference immunity
NASA Astrophysics Data System (ADS)
Kartsan, I. N.; Tyapkin, V. N.; Dmitriev, D. D.; Goncharov, A. E.; Zelenkov, P. V.; Kovalev, I. V.
2016-11-01
This paper discusses the synthesis of an algorithm for adaptive interference nulling of an 8-element phased antenna array. An adaptive beamforming system has been built on the basis of the algorithm. The paper discusses results of experimental functioning of navigation satellite systems user equipment fitted with an adaptive phased antenna array in interference environments.
Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y(sub 1),...,Y(sub N) denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: dot-x(t) = F(x(t)), x(0) = p is an element of G. The algorithms depend upon constants c(sub i) and c(sub ij), for i = 1,...,k and j is less than i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R(N), then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c(sub i) and c(sub ij) must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
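As the abstract notes, on the abelian group R(N) the method reduces to the classical Runge-Kutta algorithm. For reference, a minimal classical RK4 integrator (the abelian special case only, not the Lie-group version developed in the paper) looks like:

```python
import math

def rk4(f, x0, t0, t1, steps):
    """Classical 4th-order Runge-Kutta for a scalar ODE x' = f(t, x),
    the abelian special case of the group integrators described above."""
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

# x' = -x with x(0) = 1 has the exact solution x(1) = e**-1.
approx = rk4(lambda t, x: -x, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.exp(-1)))
```

The group version replaces the additive updates by exponential-map steps so that the iterate never leaves the manifold, which is precisely the property the abstract emphasizes.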
The Modular Clock Algorithm for Blind Rendezvous
2009-03-26
Sections include: Theory; Strategies; Random Strategy vs. Modular Clock Algorithm; Modified Modular Clock Algorithm. Spectrum has become such a precious commodity that the auction of five blocks of 700 MHz spectrum raised $20 billion from big market players.
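The modular clock idea can be sketched in miniature: two radios that cannot communicate hop over the same channel set, and a prime modulus plus differing rates guarantees their hop sequences eventually coincide. The sketch below is a generic illustration of modular hopping under assumed parameters, not the thesis's exact algorithm or analysis.

```python
def modular_clock_hops(p, rate, start, steps):
    """Hop sequence over channels 0..p-1 (p prime): at time t the radio
    visits (start + rate * t) mod p.  Distinct rates modulo a prime
    ensure the two sequences meet within p steps."""
    return [(start + rate * t) % p for t in range(steps)]

p = 7                                   # prime >= number of channels
a = modular_clock_hops(p, rate=2, start=0, steps=p)
b = modular_clock_hops(p, rate=3, start=5, steps=p)
meetings = [t for t in range(p) if a[t] == b[t]]
print(meetings)  # rendezvous time(s) within one period
```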
New pole placement algorithm - Polynomial matrix approach
NASA Technical Reports Server (NTRS)
Shafai, B.; Keel, L. H.
1990-01-01
A simple and direct pole-placement algorithm is introduced for dynamical systems having a block companion matrix A. The algorithm utilizes well-established properties of matrix polynomials. Pole placement is achieved by appropriately assigning coefficient matrices of the corresponding matrix polynomial. This involves only matrix additions and multiplications without requiring matrix inversion. A numerical example is given for the purpose of illustration.
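The key property exploited above is that a companion matrix carries its characteristic polynomial's coefficients directly in one row, so poles are assigned by expanding the desired polynomial, using only additions and multiplications. A minimal sketch follows (scalar companion form for illustration, not the block companion case treated in the paper):

```python
def companion_from_poles(poles):
    """Place poles by building a companion matrix: expand the desired
    characteristic polynomial prod(s - p) into monic coefficients
    [1, a1, ..., an], then put them (negated) in the last row."""
    coeffs = [1.0]
    for p in poles:
        nxt = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c                  # term c * s
            nxt[i + 1] -= p * c          # term c * (-p)
        coeffs = nxt
    n = len(poles)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                # superdiagonal of 1s
    for j in range(n):
        A[n - 1][j] = -coeffs[n - j]     # last row from coefficients
    return A

# Check: for each desired pole p, v = [1, p, p^2, ...] satisfies A v = p v.
A = companion_from_poles([1.0, 2.0, -3.0])
for p in [1.0, 2.0, -3.0]:
    v = [p ** k for k in range(3)]
    Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    print(all(abs(Av[i] - p * v[i]) < 1e-9 for i in range(3)))  # True
```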
Efficient Learning Algorithms with Limited Information
ERIC Educational Resources Information Center
De, Anindya
2013-01-01
The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if any occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
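The published PRASMMA algorithm is not reproduced here, but its central idea, a moving average applied everywhere except around detected peaks, can be illustrated with a much-simplified sketch (the window size and peak threshold below are arbitrary assumptions):

```python
def peak_skipping_average(signal, window=5, peak_thresh=0.8):
    """Moving average that leaves samples near large peaks untouched,
    a simplified illustration of the 'modified moving average except
    in the region of each QRS complex' idea (not the full PRASMMA)."""
    half = window // 2
    peak = max(abs(s) for s in signal) * peak_thresh
    out = []
    for i, s in enumerate(signal):
        if abs(s) >= peak:
            out.append(s)                 # preserve the peak region
        else:
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

sig = [0, 0, 1, 0, 0, 10, 0, 0, 1, 0]   # a QRS-like spike at index 5
sm = peak_skipping_average(sig)
print(sm[5])  # 10 -- the peak amplitude is preserved, not smeared
```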
A Locomotion Control Algorithm for Robotic Linkage Systems
Dohner, Jeffrey L.
2016-10-01
This dissertation describes the development of a control algorithm that transitions a robotic linkage system between stabilized states producing responsive locomotion. The developed algorithm is demonstrated using a simple robotic construction consisting of a few links with actuation and sensing at each joint. Numerical and experimental validation is presented.
Statistical Methods in Algorithm Design and Analysis.
ERIC Educational Resources Information Center
Weide, Bruce W.
The use of statistical methods in the design and analysis of discrete algorithms is explored. The introductory chapter contains a literature survey and background material on probability theory. In Chapter 2, probabilistic approximation algorithms are discussed with the goal of exposing and correcting some oversights in previous work. Chapter 3…
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum–classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
A region labeling algorithm based on block
NASA Astrophysics Data System (ADS)
Wang, Jing
2009-10-01
The time performance of a region labeling algorithm is important for image processing. However, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique using blocks to record the connected areas is proposed. With this technique, connective closure and information related to the target can be computed during a single image scan. It records the edge pixels' coordinates, including outer and inner side edges, as well as the label, and can then calculate each connected area's shape center, area, and gray level. Compared to others, this block-based region labeling algorithm is more efficient and can well meet the time requirements of real-time processing. Experimental results also validate the correctness and efficiency of the algorithm: it can detect any connected areas in binary images containing various complex patterns. The block labeling algorithm is now used in a real-time image processing program.
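The paper's block-based one-scan labeling is not reproduced here; for contrast, a standard BFS flood-fill labeler that also reports each region's area and centroid (the "shape center") can be sketched as:

```python
from collections import deque

def label_regions(image):
    """Label 4-connected foreground regions in a binary image.
    Returns (labels, stats) where stats[label] = (area, centroid)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    stats, next_label = {}, 1
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                q, pixels = deque([(r, c)]), []
                labels[r][c] = next_label
                while q:                          # BFS flood fill
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                stats[next_label] = (area, (cy, cx))
                next_label += 1
    return labels, stats

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
_, stats = label_regions(img)
print(len(stats))  # 2 connected regions
```

The block-based approach in the paper gathers the same per-region statistics but avoids revisiting pixels by recording edge runs during a single raster scan.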
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
Efficient multiple-way graph partitioning algorithms
Dasdan, A.; Aykanat, C.
1995-12-01
Graph partitioning deals with evenly dividing a graph into two or more parts such that the total weight of edges interconnecting these parts, i.e., the cutsize, is minimized. Graph partitioning has important applications in VLSI layout, mapping, and sparse Gaussian elimination. Since the graph partitioning problem is NP-hard, we must resort to polynomial-time heuristics to obtain a good, and hopefully near-optimal, solution. Kernighan-Lin (KL) proposed a 2-way partitioning algorithm. Fiduccia-Mattheyses (FM) introduced a faster version of the KL algorithm. Sanchis (FMS) generalized the FM algorithm to a multiple-way partitioning algorithm. Simulated Annealing (SA) is one of the most successful approaches that are not KL-based.
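In the spirit of the KL refinement mentioned above, a deliberately simplified 2-way pass that greedily applies the best cut-reducing vertex swap (omitting KL's gain buckets and tentative swap sequences) can be sketched as:

```python
def cutsize(edges, part):
    """Total weight of edges crossing the two parts."""
    return sum(w for (u, v), w in edges.items() if part[u] != part[v])

def kl_one_pass(edges, part):
    """Greedy refinement in the spirit of Kernighan-Lin: repeatedly
    perform the single cross-part vertex swap that most reduces the
    cutsize (simplified -- no gain buckets or tentative sequences)."""
    nodes = sorted(part)
    improved = True
    while improved:
        improved = False
        base = cutsize(edges, part)
        best, best_pair = 0, None
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if part[u] == part[v]:
                    continue
                part[u], part[v] = part[v], part[u]   # trial swap
                gain = base - cutsize(edges, part)
                part[u], part[v] = part[v], part[u]   # undo
                if gain > best:
                    best, best_pair = gain, (u, v)
        if best_pair:
            u, v = best_pair
            part[u], part[v] = part[v], part[u]
            improved = True
    return part

edges = {("a", "b"): 1, ("c", "d"): 1, ("a", "c"): 10}
part = {"a": 0, "b": 0, "c": 1, "d": 1}   # initially cuts the weight-10 edge
part = kl_one_pass(edges, part)
print(cutsize(edges, part))  # 2 -- a and c end up on the same side
```

Real KL/FM implementations avoid the O(n^2) trial swaps per step by maintaining incremental gain values, which is what makes them practical on VLSI-scale graphs.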
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures possess high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed to find even more efficient algorithms and to make public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further we discovered some new problems and relations on which future cryptographic algorithms may be based.
Marshall Rosenbluth and the Metropolis algorithm
Gubernatis, J.E.
2005-05-15
The 1953 publication, 'Equation of State Calculations by Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they remained unknown to the large simulation physics community that grew from this publication, and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
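The algorithm in question is short enough to sketch directly. Below is a textbook Metropolis sampler with a symmetric uniform proposal, here targeting a standard normal density (up to a constant) and checking that the sampled second moment comes out near 1; the step size and sample counts are arbitrary illustrative choices.

```python
import math
import random

random.seed(7)

def metropolis(log_p, x0, n_samples, step=1.0, burn=1000):
    """Textbook Metropolis sampler: propose a symmetric random step,
    accept with probability min(1, p(x') / p(x))."""
    x, samples = x0, []
    for i in range(burn + n_samples):
        x_new = x + random.uniform(-step, step)
        # Accept if log(u) < log p(x') - log p(x), u uniform on (0, 1].
        if math.log(1.0 - random.random()) < log_p(x_new) - log_p(x):
            x = x_new
        if i >= burn:
            samples.append(x)
    return samples

# Target: standard normal (unnormalized); its second moment is 1.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
second_moment = sum(s * s for s in samples) / len(samples)
print(round(second_moment, 2))  # close to 1
```

Because only the ratio p(x')/p(x) appears, the normalizing constant of the target density is never needed, which is the property that made the 1953 method so widely applicable.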
Algorithms for radio networks with dynamic topology
NASA Astrophysics Data System (ADS)
Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose
1991-08-01
The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.
Algorithms for security in robotics and networks
NASA Astrophysics Data System (ADS)
Simov, Borislav Hristov
The dissertation presents algorithms for robotics and security. The first chapter gives an overview of the area of visibility-based pursuit-evasion. The following two chapters introduce two specific algorithms in that area. The algorithms are based on research done together with Dr. Giora Slutzki and Dr. Steven LaValle. Chapter 2 presents a polynomial-time algorithm for clearing a polygon by a single 1-searcher. The result is extended to a polynomial-time algorithm for a pair of 1-searchers in Chapter 3. Chapters 4 and 5 contain joint research with Dr. Srini Tridandapani, Dr. Jason Jue and Dr. Michael Borella in the area of computer networks. Chapter 4 presents a method of providing privacy over an insecure channel which does not require encryption. Chapter 5 gives approximate bounds for the link utilization in multicast traffic.
Automatic ionospheric layers detection: Algorithms analysis
NASA Astrophysics Data System (ADS)
Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James
2016-03-01
Vertical sounding is a widely used technique for obtaining ionosphere measurements, such as estimating virtual height as a function of the scanned frequency. It is performed by a high-frequency radar for geophysical applications called an 'ionospheric sounder' (or 'ionosonde'). Radar detection depends mainly on target characteristics. While echo-detection algorithms have been studied for several kinds of target behavior, a survey to identify a suitable algorithm for the ionospheric sounder still has to be carried out. This paper focuses on automatic echo-detection algorithms implemented specifically for an ionospheric sounder; the target's specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.
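A minimal sketch of one such adaptive threshold scheme, assuming a noise-only leading window and a k·σ margin. The window length and k are illustrative parameters, not the values used on the AIS-INGV:

```python
import numpy as np

def adaptive_threshold_detect(echo, noise_window=50, k=4.0):
    """Flag range bins exceeding mean + k*std of a leading noise-only
    window; a CFAR-style sketch (window length and k are illustrative,
    not the AIS-INGV settings)."""
    noise = echo[:noise_window]
    threshold = noise.mean() + k * noise.std()
    return np.flatnonzero(echo > threshold), threshold

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 200)    # synthetic noise-only trace
trace[120:123] += 12.0               # injected echo
hits, threshold = adaptive_threshold_detect(trace)
```

Because the threshold adapts to the measured noise level, the same detector copes with varying ionospheric and receiver-noise conditions.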
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65, 000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
Formation Algorithms and Simulation Testbed
NASA Technical Reports Server (NTRS)
Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward
2004-01-01
Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST) . This FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.
Firmware algorithms for PINGU experiment
NASA Astrophysics Data System (ADS)
Pankova, Daria; Anderson, Tyler; IceCube Collaboration
2017-01-01
PINGU is a future low-energy extension of the IceCube experiment. It will be implemented as several additional, more closely positioned strings of digital optical modules (DOMs) inside the main detector volume. PINGU would be able to register neutrinos with energies as low as a few GeV. One of the proposed designs for the new PINGU DOMs is an updated version of the IceCube DOM with newer electronic components, particularly a more modern FPGA. With those improvements it is desirable to run some waveform feature extraction directly on the DOM, thus decreasing the amount of data sent over the detector's bandwidth-limited cable. In order to use the existing feature-extraction package for this purpose, the signal waveform needs to be prepared by subtracting a variable baseline from it. The baseline shape depends mostly on the environment temperature, which causes a long-term drift of the signal, and on the induction in the signal readout electronics, which modifies the signal shape. Algorithms have been selected to counter these baseline variations, modeled, and partly implemented in FPGA fabric. The simulation shows good agreement between the initial signal and the "corrected" version.
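The baseline-subtraction idea can be sketched with a moving-average estimate of the slow component. This is a simplified stand-in for the FPGA scheme described above, not the PINGU firmware; the window length is an illustrative choice:

```python
import numpy as np

def subtract_baseline(waveform, window=101):
    """Moving-average baseline estimate subtracted from the waveform;
    the long window tracks the slow drift but not fast pulses
    (window length is an illustrative choice, not the firmware value)."""
    baseline = np.convolve(waveform, np.ones(window) / window, mode="same")
    return waveform - baseline

t = np.linspace(0.0, 1.0, 1000)
drift = 0.5 * t                                  # slow thermal drift
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4)         # fast PMT-like pulse
corrected = subtract_baseline(drift + pulse)
```

After subtraction the slow ramp is removed while the fast pulse survives almost unchanged, which is what a downstream feature extractor needs.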
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Genetic algorithms for route discovery.
Gelenbe, Erol; Liu, Peixiang; Lainé, Jeremy
2006-12-01
Packet routing in networks requires knowledge about available paths, which can be acquired either dynamically while the traffic is being forwarded, or statically (in advance) based on prior information about a network's topology. This paper describes an experimental investigation of path discovery using genetic algorithms (GAs). We start with the quality-of-service (QoS)-driven routing protocol called "cognitive packet network" (CPN), which uses smart packets (SPs) to dynamically select routes in a distributed autonomic manner based on a user's QoS requirements. We extend it by introducing a GA at the source routers, which modifies and filters the paths discovered by the CPN. The GA can combine previously discovered paths to create new, untested but valid source-to-destination paths, which are then selected on the basis of their "fitness." We present an implementation of this approach, where the GA runs in background mode so as not to overload the ingress routers. Measurements conducted on a network test bed indicate that when the background-traffic load of the network is light to medium, the GA can result in improved QoS. When the background-traffic load is high, it appears that the use of the GA may be detrimental to the QoS experienced by users as compared to CPN routing, because the GA uses less timely state information in its decision making.
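The path-recombination step can be sketched as follows. This is an illustrative version of GA crossover on routes, not the CPN implementation: splice two discovered paths at a shared intermediate node, then cut any loop the splice introduces so the child remains a valid path.

```python
import random

def crossover_paths(p1, p2):
    """Splice two discovered paths at a shared intermediate node, then
    cut any loop the splice introduces, yielding a new valid
    source-to-destination path (illustrative, not the CPN code)."""
    shared = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not shared:
        return None
    node = random.choice(shared)
    spliced = p1[:p1.index(node)] + p2[p2.index(node):]
    loop_free = []
    for n in spliced:
        if n in loop_free:
            del loop_free[loop_free.index(n) + 1:]   # cut the loop
        else:
            loop_free.append(n)
    return loop_free

random.seed(0)
child = crossover_paths(["S", "a", "b", "D"], ["S", "c", "b", "e", "D"])
```

Every edge in the child comes from a path the network has already validated, which is why such offspring are "untested but valid."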
Exploration of new multivariate spectral calibration algorithms.
Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.
2004-03-01
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
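The CLS core that ACLS augments can be sketched in a few lines. This is illustrative only: the augmentation and factor-selection machinery of the paper are not shown, and the synthetic data are made up.

```python
import numpy as np

def cls_fit(C, A):
    """Classical least squares: estimate pure-component spectra K from
    known concentrations C and measured spectra A, assuming A ~ C @ K."""
    K, *_ = np.linalg.lstsq(C, A, rcond=None)
    return K

def cls_predict(K, a):
    """Concentrations for a new spectrum a, by projecting onto K."""
    c, *_ = np.linalg.lstsq(K.T, a, rcond=None)
    return c

rng = np.random.default_rng(1)
K_true = rng.random((2, 50))                       # 2 components, 50 channels
C_train = rng.random((10, 2))
A_train = C_train @ K_true + 1e-4 * rng.normal(size=(10, 50))
K_hat = cls_fit(C_train, A_train)
c_hat = cls_predict(K_hat, np.array([0.3, 0.7]) @ K_true)
```

ACLS-style methods extend the estimated K with additional spectral shapes (drift, unmodeled components), which is what lets a calibration survive spectrometer drift or transfer.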
Recent Advancements in Lightning Jump Algorithm Work
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2010-01-01
In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second-best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking must be undertaken to track lightning trends on an individual-thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical-cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
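The verification scores quoted above come from a standard 2×2 forecast contingency table; for reference, a sketch of their computation (the counts below are made-up, not the study's):

```python
def skill_scores(hits, misses, false_alarms, correct_nulls):
    """POD, FAR, CSI, and Heidke Skill Score from a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    n = hits + misses + false_alarms + correct_nulls
    # expected number of correct forecasts by chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
    hss = (hits + correct_nulls - expected) / (n - expected)
    return pod, far, csi, hss

pod, far, csi, hss = skill_scores(80, 20, 40, 860)   # made-up counts
```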
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic commerce and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with an Ant Colony Optimization Algorithm, it performs well and has broad application prospects.
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster
NASA Astrophysics Data System (ADS)
Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah
In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
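The row-wise decomposition can be sketched without MPI. This is a serial stand-in for the master/slave version described above: each "slave" strip carries one ghost row per side so the 3×3 stencil reproduces the sequential result exactly; function names and the test image are illustrative.

```python
import numpy as np

def sobel(block):
    """Sequential 3x3 Sobel gradient magnitude (valid region only)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    h, w = block.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = block[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((win * gx).sum(), (win * gy).sum())
    return out

def parallel_sobel(image, workers=4):
    """Row-wise decomposition: each strip gets one ghost row on each
    side, and the 'master' stacks the partial results."""
    h = image.shape[0] - 2                      # rows of valid output
    bounds = np.linspace(0, h, workers + 1, dtype=int)
    strips = [sobel(image[a:b + 2]) for a, b in zip(bounds[:-1], bounds[1:])]
    return np.vstack(strips)

img = np.add.outer(np.arange(16.0), np.arange(16.0))
```

In the MPI version, each strip (plus ghost rows) would be sent to a slave rank and the master would gather the partial results; the ghost rows are what make the partition results identical to the sequential output.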
A new algorithm for attitude-independent magnetometer calibration
NASA Technical Reports Server (NTRS)
Alonso, Roberto; Shuster, Malcolm D.
1994-01-01
A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
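The core attitude-independent idea can be sketched as a linear least-squares fit. This is a plain sketch of the centering/linearization trick, not the paper's statistically correct algorithm; the synthetic data are made up.

```python
import numpy as np

def magnetometer_bias(B_meas, H_mag):
    """Attitude-independent bias fit: |B_k - b| must equal the reference
    field magnitude |H_k| whatever the attitude, so
    |B_k|^2 - |H_k|^2 = 2 B_k . b - |b|^2, which is linear in (b, |b|^2).
    A plain least-squares sketch of the idea, not the paper's
    statistically correct treatment."""
    A = np.hstack([2 * B_meas, -np.ones((len(B_meas), 1))])
    y = (B_meas ** 2).sum(axis=1) - H_mag ** 2
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]

rng = np.random.default_rng(2)
b_true = np.array([0.3, -0.1, 0.2])
B_body = rng.normal(size=(50, 3))
B_body /= np.linalg.norm(B_body, axis=1, keepdims=True)   # unit field vectors
b_hat = magnetometer_bias(B_body + b_true, np.ones(50))
```

Only field magnitudes enter the fit, which is why no attitude knowledge is needed.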
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
ALGORITHM FOR SORTING GROUPED DATA
NASA Technical Reports Server (NTRS)
Evans, J. D.
1994-01-01
It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
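The reduce-sort-restore idea can be sketched as follows. This is a hedged reconstruction in Python rather than the original QuickBasic, and the mixed-radix key below stands in for the program's polynomial; the readings are made-up.

```python
def grouped_sort(data):
    """Sort multi-element data sets by collapsing each set into one
    comparable number: a mixed-radix 'polynomial' built from each
    element's observed range (a reconstruction of the idea, not the
    original QuickBasic code)."""
    n = len(data[0])
    lo = [min(row[i] for row in data) for i in range(n)]
    span = [max(row[i] for row in data) - lo[i] + 1 for i in range(n)]

    def key(row):
        value = 0
        for i in range(n):
            value = value * span[i] + (row[i] - lo[i])
        return value

    return sorted(data, key=key)

readings = [(70, 3, 9), (68, 5, 1), (70, 1, 2), (68, 5, 0)]
ordered = grouped_sort(readings)
```

Because each digit stays within its radix, the single-number keys order the sets exactly as element-by-element comparison would, but each comparison during the sort touches only one number.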
Distributed sensor data compression algorithm
NASA Astrophysics Data System (ADS)
Ambrose, Barry; Lin, Freddie
2006-04-01
Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundancy ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, have also been developed using the proposed algorithm as their computation kernel.
Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-11-01
To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). First, by designing a three-qubit comparator and unitary operators, the reasonableness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information-hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information-hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is completely blind and (2) when extracting the secret binary qubits, it needs neither quantum measurement operations nor any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
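The classical analogue of the embedding and blind extraction steps can be written in a few lines (classical bits standing in for qubits; the quantum circuits themselves are not reproduced, and the pixel values are made-up):

```python
def lsb_embed(cover, secret_bits):
    """Overwrite the least significant bit of each cover value with one
    secret bit (classical bits standing in for qubits)."""
    return [(c & ~1) | b for c, b in zip(cover, secret_bits)]

def lsb_extract(stego, n_bits):
    """Blind extraction: no reference to the original cover is needed."""
    return [s & 1 for s in stego[:n_bits]]

cover = [200, 13, 57, 88, 255, 0]      # e.g. one RGB channel's pixel values
secret = [1, 0, 1, 1, 0, 1]
stego = lsb_embed(cover, secret)
```

Each cover value changes by at most 1, which is why LSB-style hiding is visually imperceptible; the quantum version achieves the analogous effect on the least significant qubits of each channel.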
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
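A toy version of the constrained allocation problem, solved here by projected gradient descent instead of the report's semidefinite programming; the actuator matrix, limits, and command are made-up:

```python
import numpy as np

def allocate(A, cmd, lower, upper, iters=2000, lr=0.1):
    """Minimize ||A u - cmd|| subject to per-actuator box limits via
    projected gradient descent; a toy stand-in for the report's
    semidefinite-programming formulation."""
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u -= lr * A.T @ (A @ u - cmd)
        u = np.clip(u, lower, upper)   # enforce actuator limits
    return u

# Made-up mapping from 5 actuators to 3 force/torque components.
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0, 0.0, 0.0]])
cmd = A @ np.array([0.2, 0.4, 0.6, 0.3, 0.5])   # achievable within limits
u = allocate(A, cmd, lower=0.0, upper=1.0)
```

With more actuators than commanded components the system is underdetermined, so a feasible command can be matched exactly while every actuator stays within its limits.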
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Improved wavefront reconstruction algorithm from slope measurements
NASA Astrophysics Data System (ADS)
Phuc, Phan Huy; Manh, Nguyen The; Rhee, Hyug-Gyo; Ghim, Young-Sik; Yang, Ho-Soon; Lee, Yun-Woo
2017-03-01
In this paper, we propose a wavefront reconstruction algorithm from slope measurements based on a zonal method. In this algorithm, the slope measurement sampling geometry used is the Southwell geometry, in which the phase values and the slope data are measured at the same nodes. The proposed algorithm estimates the phase value at a node point using the slope measurements of eight points around the node, as doing so is believed to result in better accuracy with regard to the wavefront. For optimization of the processing time, a successive over-relaxation method is applied to iteration loops. We use a trial-and-error method to determine the best relaxation factor for each type of wavefront in order to optimize the iteration time and, thus, the processing time of the algorithm. Specifically, for a circularly symmetric wavefront, the convergence rate of the algorithm can be improved by using the result of a Fourier Transform as an initial value for the iteration. Various simulations are presented to demonstrate the improvements realized when using the proposed algorithm. Several experimental measurements of deflectometry are also processed by using the proposed algorithm.
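A compact sketch of zonal reconstruction on the Southwell geometry with SOR. This uses a 4-neighbour update rather than the paper's 8-point estimate, and omega and the iteration count are illustrative:

```python
import numpy as np

def southwell_sor(sx, sy, omega=1.6, iters=500):
    """Zonal wavefront estimate from x/y slopes on the Southwell grid
    (slopes and phase measured at the same nodes), relaxed with SOR."""
    h, w = sx.shape
    phi = np.zeros((h, w))
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                est, cnt = 0.0, 0
                if j > 0:
                    est += phi[i, j - 1] + (sx[i, j - 1] + sx[i, j]) / 2
                    cnt += 1
                if j < w - 1:
                    est += phi[i, j + 1] - (sx[i, j] + sx[i, j + 1]) / 2
                    cnt += 1
                if i > 0:
                    est += phi[i - 1, j] + (sy[i - 1, j] + sy[i, j]) / 2
                    cnt += 1
                if i < h - 1:
                    est += phi[i + 1, j] - (sy[i, j] + sy[i + 1, j]) / 2
                    cnt += 1
                phi[i, j] += omega * (est / cnt - phi[i, j])
        phi -= phi.mean()              # piston is unobservable from slopes
    return phi

# Pure tilt: constant slopes; reconstruction should match up to piston.
phi_true = np.add.outer(0.2 * np.arange(8), 0.1 * np.arange(8))
phi_rec = southwell_sor(np.full((8, 8), 0.1), np.full((8, 8), 0.2))
```

The relaxation factor omega plays the role the paper tunes per wavefront type; values between 1 and 2 over-relax the plain Gauss-Seidel update to accelerate convergence.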
Passive MMW algorithm performance characterization using MACET
NASA Astrophysics Data System (ADS)
Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.
1997-06-01
As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also the specific quirks and limitations inherent in the sensors and the data-gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool (MACET), an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets for the study of the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and upon autonomous target recognition algorithm performance.
Algorithm for dynamic Speckle pattern processing
NASA Astrophysics Data System (ADS)
Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.
2016-07-01
In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded by the camera with respect to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can easily be compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (such as THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several other algorithms when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
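One simple way to realize such an index directly from speckle frames is shown below. This is a stand-in for the paper's perturbative definition, not the published formula; the normalization by mean intensity is an assumption, and the frames are synthetic.

```python
import numpy as np

def activity_index(frames, background):
    """Mean absolute deviation of each frame from the static background,
    normalized by the mean background intensity (the normalization is an
    assumption, not the paper's exact definition)."""
    return [np.abs(f - background).mean() / background.mean() for f in frames]

rng = np.random.default_rng(3)
background = rng.random((64, 64)) + 0.5
quiet  = [background + 0.001 * rng.normal(size=(64, 64)) for _ in range(3)]
active = [background + 0.1 * rng.normal(size=(64, 64)) for _ in range(3)]
```

Because it needs only a per-pixel difference and a mean, such an index can be evaluated frame by frame in real time, with no THSP or co-occurrence post-processing.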
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. So in this paper, we develop algorithms to select a set of views to materialize in the data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized-view selection problem. But as the genetic process develops, it becomes more and more difficult to produce legal solutions, so many solutions are eliminated and the time to produce solutions lengthens. Therefore, an improved algorithm is presented in this paper, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were adopted to test the functionality and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
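The simulated-annealing half of the hybrid can be sketched as follows. The cost model here is a toy: unmaterialized views pay their query time against the response-time bound, materialized views pay their maintenance cost; all numbers are made-up, not the paper's cost graph.

```python
import math, random

def anneal_views(maint, qtime, limit, steps=5000, t0=5.0):
    """Pick which views to materialize: minimize total maintenance cost
    subject to the total query time of unmaterialized views staying
    within the response-time bound (toy stand-in for the hybrid SA/GA)."""
    random.seed(4)
    n = len(maint)

    def cost(s):
        q = sum(t for t, m in zip(qtime, s) if not m)
        if q > limit:
            return float("inf")        # violates the response-time constraint
        return sum(c for c, m in zip(maint, s) if m)

    state = [True] * n                 # start from "materialize everything"
    best, best_cost = state[:], cost(state)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9
        cand = state[:]
        i = random.randrange(n)
        cand[i] = not cand[i]          # flip one view in or out
        delta = cost(cand) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            state = cand
        if cost(state) < best_cost:
            best, best_cost = state[:], cost(state)
    return best, best_cost

best, best_cost = anneal_views(maint=[5, 1, 4, 2], qtime=[2, 9, 1, 8], limit=3)
```

In the hybrid described above, such annealing moves would act on the genetic algorithm's population, accepting occasional worse (but legal) solutions to avoid the dead ends the plain GA runs into.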
Algorithms for Disconnected Diagrams in Lattice QCD
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos; Yoon, Boram; Gupta, Rajan; Syritsyn, Sergey
2016-11-01
Computing disconnected diagrams in Lattice QCD (operator insertion in a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
Algorithmic Perspectives on Problem Formulations in MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.
Geometric Transforms for Fast Geometric Algorithms.
1979-12-01
The approximation algorithm extends the ideas of the first by defining a transform based on a "pie-slice" diagram and use of the floor function. The second ε-approximate algorithm reduces the time to O(N + 1/ε) by using a transform based on a "pie-slice" diagram. Bentley, Weide, and Yao [18] have used a simple "pie-slice" diagram for their Voronoi diagram algorithm, and Weide has used the floor
An exact accelerated stochastic simulation algorithm.
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-14
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA, with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
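For context, here is a minimal sketch of the baseline SSA (Gillespie's direct method) that ER-leap accelerates. The two-reaction network and rate constants are invented for illustration:

```python
import random

def ssa(rates, stoich, state, t_max, seed=0):
    """Gillespie direct method: sample the waiting time and the next
    reaction exactly from the propensities. ER-leap speeds up this
    baseline by leaping over multiple events with rejection sampling."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(state))]
    while t < t_max:
        props = [r(state) for r in rates]      # mass-action propensities
        total = sum(props)
        if total == 0:
            break                              # no reaction can fire
        t += rng.expovariate(total)            # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for i, p in enumerate(props):          # pick reaction i w.p. p/total
            acc += p
            if u < acc:
                for species, change in stoich[i]:
                    state[species] += change
                break
        traj.append((t, tuple(state)))
    return traj

# Toy network: A -> B at rate 1.0*#A, B -> A at rate 0.5*#B
rates = [lambda s: 1.0 * s[0], lambda s: 0.5 * s[1]]
stoich = [[(0, -1), (1, +1)], [(0, +1), (1, -1)]]
traj = ssa(rates, stoich, [100, 0], t_max=10.0)
print(traj[-1])   # molecule count A + B is conserved at 100
```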
Quantum hyperparallel algorithm for matrix multiplication.
Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan
2016-04-29
Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N²), which is better than the best known classical algorithm. In our scheme, an N-dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and "big data" analysis.
Complexity of the Quantum Adiabatic Algorithm
NASA Technical Reports Server (NTRS)
Hen, Itay
2013-01-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.
Quantum algorithms for quantum field theories.
Jordan, Stephen P; Lee, Keith S M; Preskill, John
2012-06-01
Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
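A minimal sketch of the competitive-learning variant: a winner-take-all prototype update applied to synthetic two-emitter data. The feature values, learning-rate schedule, and two-dimensional feature space are illustrative assumptions; the paper's MDL-based alternative is not shown:

```python
import random

def online_competitive_clustering(samples, n_clusters, lr=0.1, seed=0):
    """Each incoming pulse sample moves its nearest prototype a small
    step toward it (winner-take-all competitive learning)."""
    rng = random.Random(seed)
    protos = [list(rng.choice(samples)) for _ in range(n_clusters)]
    counts = [0] * n_clusters
    for x in samples:
        # Find the winning (closest) prototype
        dists = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in protos]
        w = dists.index(min(dists))
        counts[w] += 1
        step = lr / counts[w] ** 0.5      # shrinking per-prototype rate
        for j in range(len(x)):
            protos[w][j] += step * (x[j] - protos[w][j])
    return protos

# Two well-separated synthetic "emitters" in a (RF, PRI)-like feature space
rng = random.Random(1)
emitter_a = [(rng.gauss(9.4, 0.05), rng.gauss(1.0, 0.02)) for _ in range(100)]
emitter_b = [(rng.gauss(2.8, 0.05), rng.gauss(3.0, 0.02)) for _ in range(100)]
samples = emitter_a + emitter_b
rng.shuffle(samples)
print(online_competitive_clustering(samples, n_clusters=2))
```

The online character matters here: each pulse is processed once as it arrives, with no pass over stored data, matching the streaming nature of intercepted radar pulses.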
Efficient Algorithms for Langevin and DPD Dynamics.
Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C
2012-10-09
In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
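The impulsive friction-and-noise idea can be sketched for the simplest case: an ideal gas thermostatted in one dimension. The update rule below is a generic impulse-Langevin form with an assumed coupling parameter f (playing the role of gamma*dt); it is a sketch of the idea, not the authors' exact scheme:

```python
import math, random

def impulse_langevin(v, f, kT, mass, rng):
    """Apply friction and noise as a single discrete velocity impulse per
    step, instead of continuous forces. The noise amplitude is chosen so
    the stationary velocity variance is exactly kT/mass."""
    sigma = math.sqrt(f * (2 - f) * kT / mass)
    return (1 - f) * v + sigma * rng.gauss(0, 1)

def simulate_ideal_gas(n=5000, steps=200, f=0.1, kT=1.0, mass=1.0, seed=0):
    rng = random.Random(seed)
    vs = [0.0] * n                      # start cold; thermostat heats to kT
    for _ in range(steps):
        vs = [impulse_langevin(v, f, kT, mass, rng) for v in vs]
    return sum(0.5 * mass * v * v for v in vs) / n   # mean kinetic energy

print(simulate_ideal_gas())   # close to kT/2 = 0.5 per degree of freedom
```

The check is the same kind the abstract describes: the measured thermal state should match the theoretical prediction, here the equipartition value kT/2 per degree of freedom.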
Recursive algorithms for vector extrapolation methods
NASA Technical Reports Server (NTRS)
Ford, William F.; Sidi, Avram
1988-01-01
Three classes of recursion relations are devised for implementing some extrapolation methods for vector sequences. One class of recursion relations can be used to implement methods like the modified minimal polynomial extrapolation and the topological epsilon algorithm; another allows implementation of methods like minimal polynomial and reduced rank extrapolation; while the remaining class can be employed in the implementation of the vector E-algorithm. Operation counts and storage requirements for these methods are discussed, and some related techniques for special applications are presented. Included are methods for the rapid evaluation of the vector E-algorithm.
Comparative Study of Two Automatic Registration Algorithms
NASA Astrophysics Data System (ADS)
Grant, D.; Bethel, J.; Crawford, M.
2013-10-01
The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method, along with its vast number of variants, obtains the least squares parameters that are necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans obtain differential treatment in the registration process depending on whether they were assigned to be the "model" or the "data". For each of the "data" points, corresponding points from the "model" are sought. Another concept of "symmetric correspondence" was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans by simultaneously considering the stochastic properties of both scans. This paper studies both the ICP and P2P algorithms in terms of their consistency in registration parameters for pairs of TLS data. The question investigated is: if scan A is registered to scan B, will the parameters be the same as when scan B is registered to scan A? Experiments were conducted with eight pairs of real TLS data which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible when using the P2P algorithm (mean difference of 0.03 mm), whereas the ICP algorithm had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm while that for the ICP algorithm was 5.39 mm. The conclusion from this study is that the P2P algorithm is the more consistent of the two.
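A minimal point-to-point ICP in 2-D illustrates the "model-data" registration loop the paper analyzes (the symmetric P2P variant is not shown). The synthetic scans and rigid transform are invented; real TLS pairs would additionally need outlier rejection and only partial overlap:

```python
import numpy as np

def icp(data, model, iters=20):
    """Minimal point-to-point ICP: match each 'data' point to its nearest
    'model' point, solve the best-fit rigid transform in closed form
    (SVD of the cross-covariance, Kabsch), and repeat."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = data @ R.T + t
        # Brute-force nearest-neighbour correspondences (model gets no say)
        d2 = ((moved[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        # Closed-form rigid alignment of the matched centred point sets
        mu_a, mu_b = moved.mean(0), matched.mean(0)
        H = (moved - mu_a).T @ (matched - mu_b)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        R, t = dR @ R, dR @ t + (mu_b - dR @ mu_a)
    return R, t

rng = np.random.default_rng(0)
model = rng.uniform(-5, 5, size=(200, 2))        # "scan B"
theta, t_true = 0.03, np.array([0.05, -0.1])     # small rigid motion
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
data = model @ R_true.T + t_true                 # "scan A": a moved copy
R, t = icp(data, model)
print(np.abs(data @ R.T + t - model).max())      # near zero: scans aligned
```

The asymmetry the paper probes is visible in the code: only `data` points seek correspondences, so swapping the roles of the two scans changes which residuals are minimized.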
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Algorithms For Integrating Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms developed for use in numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order, differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
Quantum hyperparallel algorithm for matrix multiplication
NASA Astrophysics Data System (ADS)
Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan
2016-04-01
Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N²), which is better than the best known classical algorithm. In our scheme, an N-dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.
Refined Genetic Algorithms for Polypeptide Structure Prediction.
1996-12-01
designing novel proteins, in decoding the information obtained from the Human Genome Project (91), in designing new drugs, and in trying to... function that assigns fitness values to possible solutions and an encode/decode between the algorithm and problem spaces. Although these methods... genetic algorithms: Introduction and overview of current research. Parallel Genetic Algorithms, pages 5-35, 1993. 22. Bruce S. Duncan. Parallel ev
Analysis of dissection algorithms for vector computers
NASA Technical Reports Server (NTRS)
George, A.; Poole, W. G., Jr.; Voigt, R. G.
1978-01-01
Recently two dissection algorithms (one-way and incomplete nested dissection) have been developed for solving the sparse positive definite linear systems arising from n by n grid problems. Concurrently, vector computers (such as the CDC STAR-100 and TI ASC) have been developed for large scientific applications. An analysis of the use of dissection algorithms on vector computers dictates that vectors of maximum length be utilized, thereby implying little or no dissection; on the other hand, minimizing operation counts suggests that considerable dissection be performed. In this paper we discuss the resolution of this conflict by minimizing the total time required by vectorized versions of the two algorithms.
An algorithm for online optimization of accelerators
Huang, Xiaobiao; Corbett, Jeff; Safranek, James; Wu, Juhao
2013-10-01
We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. It is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
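The robust line-optimizer idea, fitting a parabola to uniformly sampled noisy points across the bracket rather than trusting single evaluations, can be sketched as follows. The objective and its noise level are illustrative assumptions, and the bracketing and conjugate-direction machinery of RCDS are omitted:

```python
import random
import numpy as np

def robust_line_min(f, lo, hi, n=21):
    """Least-squares fit a parabola to n uniformly sampled (noisy)
    function values across the bracket and return the fitted vertex."""
    xs = np.linspace(lo, hi, n)
    ys = np.array([f(x) for x in xs])
    a, b, _c = np.polyfit(xs, ys, 2)   # parabola y = a*x^2 + b*x + c
    return -b / (2 * a)

rng = random.Random(0)
noisy = lambda x: (x - 1.7) ** 2 + rng.gauss(0, 0.05)   # noisy objective
x_star = robust_line_min(noisy, 0.0, 4.0)
print(x_star)   # close to the true minimum at 1.7 despite the noise
```

Because every sample contributes to one fit, individual noisy readings (which would derail a golden-section search) are averaged out in the estimated minimum.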
Some multigrid algorithms for SIMD machines
Dendy, J.E. Jr.
1996-12-31
Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.
Dual format algorithm implementation with gotcha data
NASA Astrophysics Data System (ADS)
Gorham, LeRoy A.; Rigling, Brian D.
2012-05-01
The Dual Format Algorithm (DFA) is an alternative to the Polar Format Algorithm (PFA) where the image is formed first to an arbitrary grid instead of a Cartesian grid. The arbitrary grid is specifically chosen to allow for more efficient application of defocus and distortion corrections that occur due to range curvature. We provide a description of the arbitrary image grid and show that the quadratic phase errors are isolated along a single dimension of the image. We describe an application of the DFA to circular SAR data and analyze the image focus. For an example SAR dataset, the DFA doubles the focused image size of the PFA algorithm with post imaging corrections.
Algorithms for optimal dyadic decision trees
Hush, Don; Porter, Reid
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
An Analysis of Algorithms for Solving Discrete Logarithms in Fixed Groups
2010-03-01
Table-of-contents fragments: Pollard's Rho Algorithm; Pollard's Kangaroo Algorithm; Index Calculus Algorithm; Pollard's Rho (Random Sequence) Algorithm.
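For reference, a sketch of Pollard's rho for discrete logarithms. The modulus, generator, and 3-way partition of the group are conventional textbook choices, not taken from this document (2 is a primitive root mod 1019, so the group order is 1018):

```python
import math, random

def pollard_rho_dlog(g, h, p, n, seed=0):
    """Pollard's rho for discrete logs: a pseudorandom walk over elements
    of the form g^a * h^b; a collision gives a linear congruence in the
    exponents that is solved for log_g(h) mod n (the order of g)."""
    rng = random.Random(seed)
    while True:
        def step(x, a, b):
            s = x % 3                        # 3-way partition of the group
            if s == 0:
                return x * x % p, 2 * a % n, 2 * b % n
            if s == 1:
                return x * g % p, (a + 1) % n, b
            return x * h % p, a, (b + 1) % n
        a = rng.randrange(n)
        x, b = pow(g, a, p), 0
        X, A, B = x, a, b
        while True:                          # Floyd cycle detection
            x, a, b = step(x, a, b)
            X, A, B = step(*step(X, A, B))
            if x == X:
                break
        # Collision: g^a h^b = g^A h^B, so s*(B-b) = a-A (mod n)
        d = math.gcd((B - b) % n, n)
        rhs = (a - A) % n
        if rhs % d:                          # degenerate collision: retry
            continue
        base = (rhs // d) * pow((B - b) % n // d, -1, n // d) % (n // d)
        for k in range(d):                   # check each lift of the solution
            cand = (base + k * (n // d)) % n
            if pow(g, cand, p) == h:
                return cand

p, g = 1019, 2                  # 2 is a primitive root mod 1019
h = pow(g, 700, p)
print(pollard_rho_dlog(g, h, p, p - 1))   # recovers the exponent 700
```

Unlike index calculus, the rho method needs only O(sqrt(n)) group operations and constant memory, which is why the two families are analyzed separately in work like this.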
Finite pure integer programming algorithms employing only hyperspherically deduced cuts
NASA Technical Reports Server (NTRS)
Young, R. D.
1971-01-01
Three algorithms are developed that may be based exclusively on hyperspherically deduced cuts. The algorithms only apply, therefore, to problems structured so that these cuts are valid. The algorithms are shown to be finite.
ANALYZING ENVIRONMENTAL IMPACTS WITH THE WAR ALGORITHM: REVIEW AND UPDATE
This presentation will review uses of the WAR algorithm and current developments and possible future directions. The WAR algorithm is a methodology for analyzing potential environmental impacts of 1600+ chemicals used in the chemical processing and other industries. The algorithm...
Protein structure optimization with a "Lamarckian" ant colony algorithm.
Oakley, Mark T; Richardson, E Grace; Carr, Harriet; Johnston, Roy L
2013-01-01
We describe the LamarckiAnt algorithm: a search algorithm that combines the features of a "Lamarckian" genetic algorithm and ant colony optimization. We have implemented this algorithm for the optimization of BLN model proteins, which have frustrated energy landscapes and represent a challenge for global optimization algorithms. We demonstrate that LamarckiAnt performs competitively with other state-of-the-art optimization algorithms.
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
IIR algorithms for adaptive line enhancement
David, R.A.; Stearns, S.D.; Elliott, G.R.; Etter, D.M.
1983-01-01
We introduce a simple IIR structure for the adaptive line enhancer. Two algorithms based on gradient-search techniques are presented for adapting the structure. Results from experiments which utilized real data as well as computer simulations are provided.
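Since the paper's IIR structure is only summarized here, the sketch below uses the classic FIR-LMS adaptive line enhancer to show the underlying idea: a delayed copy of the input feeds an adaptive predictor, which passes the predictable (narrowband) line and rejects broadband noise. All signal and filter parameters are illustrative assumptions:

```python
import math, random

def adaptive_line_enhancer(x, n_taps=32, delay=1, mu=0.002):
    """FIR-LMS adaptive line enhancer: predict x[n] from delayed samples.
    Narrowband components are predictable and appear at the filter
    output; broadband (white) noise is decorrelated by the delay."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(x)):
        ref = [x[n - delay - k] if n - delay - k >= 0 else 0.0
               for k in range(n_taps)]
        y = sum(wi * ri for wi, ri in zip(w, ref))            # prediction
        e = x[n] - y                                          # error
        w = [wi + 2 * mu * e * ri for wi, ri in zip(w, ref)]  # LMS update
        out.append(y)
    return out

rng = random.Random(0)
N = 4000
sig = [math.sin(2 * math.pi * 0.05 * n) for n in range(N)]
noise = [rng.gauss(0, 0.5) for _ in range(N)]
x = [s + v for s, v in zip(sig, noise)]
y = adaptive_line_enhancer(x)
# After convergence the output tracks the sinusoid more closely than
# the noisy input does (compare over the final 1000 samples).
err_in = sum((a - b) ** 2 for a, b in zip(x[-1000:], sig[-1000:])) / 1000
err_out = sum((a - b) ** 2 for a, b in zip(y[-1000:], sig[-1000:])) / 1000
print(err_in, err_out)
```

An IIR structure like the paper's achieves a comparably narrow passband with far fewer coefficients than this FIR filter, at the price of the stability and convergence issues the gradient-search algorithms must handle.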