Wang, Hou-Ling; Li, Lan; Tang, Sha; Yuan, Chao; Tian, Qianqian; Su, Yanyan; Li, Hui-Guang; Zhao, Lin; Yin, Weilun; Zhao, Rui; Xia, Xinli
2015-01-01
Despite the unshakable status of reverse transcription-quantitative PCR in gene expression analysis, it has certain disadvantages, including that the results are highly dependent on the reference genes selected for data normalization. Since inappropriate endogenous control genes will lead to inaccurate target gene expression profiles, the validation of suitable internal reference genes is essential. Given the increasing interest in functional genes and genomics of Populus euphratica, a desert poplar showing extraordinary adaptation to salt stress, we evaluated the expression stability of ten candidate reference genes in P. euphratica roots, stems, and leaves under salt stress conditions. We used five algorithms, namely, ΔCt, NormFinder, geNorm, GrayNorm, and a rank aggregation method (RankAggreg), to identify suitable normalizers. To support the suitability of the identified reference genes and to compare the relative merits of these different algorithms, we analyzed and compared the relative expression levels of nine P. euphratica functional genes in different tissues. Our results indicate that a combination of multiple reference genes recommended by the GrayNorm algorithm (e.g., a combination of Actin, EF1α, GAPDH, RP, UBQ in root) should be used instead of a single reference gene. These results are valuable for gene identification research in different P. euphratica tissues. PMID:26343648
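Several of the stability algorithms named above (geNorm and the comparative ΔCt method in particular) rank candidates by the variability of pairwise expression ratios. The following is a minimal sketch of the geNorm M-value idea in Python; the function and variable names are illustrative and are not taken from any of the cited tools:

```python
import numpy as np

def genorm_m_values(log_expr):
    """geNorm-style stability measure M for each candidate gene.

    log_expr: samples x genes array of log2 expression values.
    M for gene j is the mean, over all other genes k, of the standard
    deviation across samples of the pairwise log-ratio (gene j minus
    gene k); a lower M indicates more stable expression.
    """
    n_genes = log_expr.shape[1]
    m = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m
```

In the full geNorm procedure, candidates are ranked by ascending M and the least stable gene is excluded iteratively until the most stable pair remains.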
Voorhuijzen, Marleen M.; Staats, Martijn; Hutten, Ronald C. B.; Van Dijk, Jeroen P.; Kok, Esther; Frazzon, Jeverson
2015-01-01
Potato (Solanum tuberosum) yield has increased dramatically over the last 50 years and this has been achieved by a combination of improved agronomy and biotechnology efforts. Gene expression studies are underway to improve quality traits and develop new cultivars. Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) is a benchmark analytical tool for gene expression analysis, but its accuracy is highly dependent on a reliable normalization strategy based on invariant reference genes. For this reason, the goal of this work was to select and validate reference genes for transcriptional analysis of edible tubers of potato. To do so, RT-qPCR primers were designed for ten genes with relatively stable expression in potato tubers as observed in RNA-Seq experiments. Primers were designed across exon boundaries to avoid genomic DNA contamination. Differences were observed in the ranking of candidate genes identified by the geNorm, NormFinder and BestKeeper algorithms. The ranks determined by geNorm and NormFinder were very similar and for all samples the most stable candidates were C2, exocyst complex component sec3 (SEC3) and ATCUL3/ATCUL3A/CUL3/CUL3A (CUL3A). According to BestKeeper, the importin alpha and ubiquitin-associated/ts-n genes were the most stable. Three genes were selected as reference genes for potato edible tubers in RT-qPCR studies. The first one, called C2, was selected in common by NormFinder and geNorm, the second one is SEC3, selected by NormFinder, and the third one is CUL3A, selected by geNorm. Appropriate reference genes identified in this work will help to improve the accuracy of gene expression quantification analyses by taking into account differences that may be observed in RNA quality or reverse transcription efficiency across the samples. PMID:25830330
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Wang, Yu; Wang, Zhong-Kang; Huang, Yi; Liao, Yu-Feng; Yin, You-Ping
2014-01-01
The blister beetle Mylabris cichorii L. (Coleoptera: Meloidae) is a traditional medicinal insect recorded in the Chinese Pharmacopoeia. It synthesizes cantharidin, which kills cancer cells efficiently. Only males produce large amounts of cantharidin. Reference genes are required as endogenous controls for the analysis of differential gene expression in M. cichorii. Our study chose 10 genes as candidate reference genes. The stability of expression of these genes was analyzed by quantitative PCR and determined with two algorithms, geNorm and NormFinder. We recommend UBE3A and RPL22e as suitable reference genes in females and UBE3A, TAF5, and RPL22e in males. PMID:25368050
Reference Gene Validation for RT-qPCR, a Note on Different Available Software Packages
De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia
2015-01-01
Background An appropriate normalization strategy is crucial for data analysis from real time reverse transcription polymerase chain reactions (RT-qPCR). It is widely recommended to identify and validate stable reference genes, since no single gene is stably expressed across cell types or conditions. Different algorithms exist to validate optimal reference genes for normalization. Using human cells, we here compare the three main methods with the online RefFinder tool, which integrates these algorithms, as well as with R-based software packages that include the NormFinder and geNorm algorithms. Results 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma-like cell line (TCam-2) or with equine bone marrow derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained between the original software packages and the RefFinder tool, which uses raw Cq values as input. When the raw data were reanalysed assuming 100% efficiency for all genes, the outputs of the original software packages were similar to those of RefFinder, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. Conclusions This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use. PMID:25825906
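The efficiency bias described above is easy to see numerically. In the sketch below (the function names are illustrative, not code from RefFinder or the original packages), a relative quantity is derived from a Cq value using the per-cycle amplification efficiency; assuming 100% efficiency (perfect doubling) when the true efficiency is lower overstates fold changes:

```python
def relative_quantity(cq, efficiency=1.0):
    """Relative quantity from a quantification cycle (Cq).

    `efficiency` is the per-cycle amplification efficiency
    (1.0 = 100%, i.e. perfect doubling), so the amplification
    factor per cycle is (1 + efficiency).
    """
    return (1.0 + efficiency) ** (-cq)

def fold_change(cq_a, cq_b, efficiency=1.0):
    """Fold change of sample A over sample B for one gene."""
    return relative_quantity(cq_a, efficiency) / relative_quantity(cq_b, efficiency)

# With 100% efficiency a 3-cycle Cq gap means a 2**3 = 8-fold change;
# at 90% efficiency the same gap corresponds to only 1.9**3-fold.
```

Because each gene can have a different efficiency, feeding raw Cq values into a tool that implicitly assumes perfect doubling can reorder stability rankings, which is consistent with the discrepancy reported above.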
Pan, Huipeng; Ma, Yabin; Zhang, Deyong; Liu, Yong; Zhang, Zhanhong; Zheng, Changying; Chu, Dong
2015-01-01
Reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) is a reliable technique for measuring and evaluating gene expression during variable biological processes. To facilitate gene expression studies, normalization of genes of interest relative to stable reference genes is crucial. The western flower thrips Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), the main vector of tomato spotted wilt virus (TSWV), is a destructive invasive species. In this study, the expression profiles of 11 candidate reference genes from nonviruliferous and viruliferous F. occidentalis were investigated. Five distinct algorithms, geNorm, NormFinder, BestKeeper, the ΔCt method, and RefFinder, were used to determine the performance of these genes. geNorm, NormFinder, BestKeeper, and RefFinder identified heat shock protein 70 (HSP70), heat shock protein 60 (HSP60), elongation factor 1 α, and ribosomal protein l32 (RPL32) as the most stable reference genes, and the ΔCt method identified HSP60, HSP70, RPL32, and heat shock protein 90 as the most stable reference genes. Additionally, two reference genes were sufficient for reliable normalization in nonviruliferous and viruliferous F. occidentalis. This work provides a foundation for investigating the molecular mechanisms of TSWV and F. occidentalis interactions. PMID:26244556
Deng, Huaxiang; Gao, Ruijie; Liao, Xiangru; Cai, Yujie
2016-04-10
Shiraia bambusicola is an important pharmaceutical fungus owing to its production of hypocrellin, which has antiviral, antidepressant, and antiretroviral properties. Based on suitable reference gene (RG) normalization, gene expression analysis by quantitative real-time polymerase chain reaction enables the identification of genes relevant to hypocrellin biosynthesis. We selected and assessed nine candidate RGs in the presence and absence of hypocrellin biosynthesis using the GeNorm and NormFinder algorithms. After stepwise exclusion of unstable genes, GeNorm analysis identified glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and cytochrome oxidase (CyO) as the most stably expressed genes, while NormFinder determined 18S ribosomal RNA (18S rRNA) to be the most appropriate candidate gene for normalization. Tubulin (Tub) was observed to be the least stable gene and should be avoided in relative expression analysis. We further analyzed the relative expression levels of essential genes correlated with hypocrellin biosynthesis, including polyketide synthase (PKS), O-methyltransferase (Omef), FAD/FMN-dependent oxidoreductase (FAD), and monooxygenase (Mono). Mono maintained an expression pattern similar to that of PKS, while FAD remained constantly expressed. Omef presented lower transcript levels and showed no relation to PKS expression. These relative expression analyses will pave the way for further interpretation of the hypocrellin biosynthesis pathway. PMID:26779826
Long, Xiangyu; He, Bin; Gao, Xinsheng; Qin, Yunxia; Yang, Jianghua; Fang, Yongjun; Qi, Jiyan; Tang, Chaorong
2015-06-01
In rubber tree, latex regeneration is one of the decisive factors influencing rubber yield, although its molecular regulation is not well known. Quantitative real-time PCR (qPCR) is a popular and powerful tool used to understand the molecular mechanisms of latex regeneration. However, suitable reference genes required for qPCR have not been available for investigating the expression of target genes during latex regeneration. In this study, 20 candidate reference genes were selected and evaluated for their expression stability across samples during the process of latex regeneration. All reference genes showed a relatively wide range of threshold cycle values, and their stability was validated by four different algorithms (the comparative delta Ct method, BestKeeper, NormFinder and geNorm). Three of the algorithms (the comparative delta Ct method, NormFinder and geNorm) produced similar results, identifying UBC4, ADF, UBC2a, eIF2 and ADF4 as the top five suitable reference genes, and 18S as the least suitable one. The application of the screened reference genes should improve the accuracy and reliability of gene expression analysis in latex regeneration experiments. PMID:25791491
Hirschburger, Daniela; Müller, Manuel; Voegele, Ralf T.; Link, Tobias
2015-01-01
Phakopsora pachyrhizi is a devastating pathogen of soybean, endangering soybean production worldwide. Use of Host-Induced Gene Silencing (HIGS) and the study of effector proteins could provide novel strategies for pathogen control. For both approaches, quantification of transcript abundance by RT-qPCR is essential, and suitable stable reference genes for normalization are indispensable to obtain accurate RT-qPCR results. Following the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines and using the geNorm and NormFinder algorithms, we tested candidate reference genes from P. pachyrhizi and Glycine max for their suitability in normalizing transcript levels throughout the infection process. For P. pachyrhizi we recommend a combination of CytB and PDK or GAPDH for in planta experiments. Gene expression during in vitro stages and over the whole infection process was found to be highly unstable. Here, RPS14 and UbcE2 are ranked best by geNorm and NormFinder. Alternatively, CytB, which has the smallest Cq range (Cq: quantification cycle), could be used. We recommend specification of gene expression relative to the germ-tube stage rather than to the resting-urediospore stage. For studies omitting the resting spore and appressorium stages, a combination of Elf3 and RPS9, or PDK and GAPDH, should be used. For normalization of soybean genes during rust infection, Ukn2 and cons7 are recommended. PMID:26404265
Li, X; Huang, K; Chen, F; Li, W; Sun, S; Shi, X-E; Yang, G
2016-06-01
Intramuscular fat (IMF) is an important trait influencing meat quality, and intramuscular stromal-vascular cell (MSVC) differentiation is a key factor affecting IMF deposition. Quantitative real-time PCR (qPCR) is often used to screen for differentially expressed genes during differentiation of MSVCs, where proper reference genes are essential. In this study, we assessed 31 previously reported reference genes for their suitability in qPCR analyses of porcine MSVCs derived from longissimus dorsi. The expression stability of these genes was evaluated using the NormFinder, geNorm and BestKeeper algorithms. NormFinder and geNorm identified ACTB, ALDOA and RPS18 as the three most stable genes, whereas BestKeeper identified RPL13A, SSU72 and DAK as the three most stable genes. GAPDH was found to be the least stable gene by all three software packages, indicating that it is not an appropriate reference gene for qPCR assays. These results might be helpful for further studies in pigs exploring the molecular mechanism underlying IMF deposition. PMID:26781521
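A rough illustration of why BestKeeper can rank candidates differently from geNorm and NormFinder, as reported above: BestKeeper works on raw Cq values, rating each candidate by the standard deviation of its Cq and by its Pearson correlation with the BestKeeper index, the per-sample geometric mean of all candidates' Cq values. A simplified sketch with assumed function and variable names:

```python
import numpy as np

def bestkeeper_stats(cq):
    """Simplified BestKeeper-style descriptive statistics.

    cq: samples x genes array of raw Cq values.  Returns, per gene,
    the standard deviation of its Cq values and its Pearson
    correlation with the BestKeeper index (the geometric mean of all
    candidates' Cq values in each sample).  Low SD and high
    correlation indicate a good reference candidate.
    """
    index = np.exp(np.mean(np.log(cq), axis=1))  # geometric mean per sample
    sd = cq.std(axis=0, ddof=1)
    r = np.array([np.corrcoef(cq[:, j], index)[0, 1]
                  for j in range(cq.shape[1])])
    return sd, r
```

Because this operates on Cq dispersion rather than on pairwise expression ratios or an inter/intra-group variance model, its ranking need not agree with geNorm or NormFinder, exactly as observed in the study above.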
Mathur, Deepali; Urena-Peralta, Juan R.; Lopez-Rodas, Gerardo; Casanova, Bonaventura; Coret-Ferrer, Francisco; Burgal-Marti, Maria
2015-01-01
Gene expression studies employing real-time PCR have become an intrinsic part of biomedical research. Appropriate normalization of target gene transcript(s) based on stably expressed housekeeping genes is crucial in individual experimental conditions to obtain accurate results. In multiple sclerosis (MS), several gene expression studies have been undertaken; however, the suitability of housekeeping genes to express stably in this disease has not yet been explored. Recent research suggests that their expression level may vary under different experimental conditions. Hence it is indispensable to evaluate their expression stability to accurately normalize target gene transcripts. The present study aims to evaluate the expression stability of seven housekeeping genes in rat granule neurons treated with cerebrospinal fluid of MS patients. The selected reference genes were quantified by real-time PCR and their expression stability was assessed using the GeNorm and NormFinder algorithms. GeNorm identified transferrin receptor (Tfrc) and beta-2 microglobulin (B2m) as the most stable genes, followed by ribosomal protein L19 (Rpl19), whereas β-actin (ActB) and glyceraldehyde-3-phosphate dehydrogenase (Gapdh) were the most variable ones in these neurons. NormFinder identified Tfrc as the most invariable gene, followed by B2m and Rpl19. ActB and Gapdh were the least stable genes as analyzed by the NormFinder algorithm. Both methods reported Tfrc and B2m as the most stably expressed genes and Gapdh as the least stable one. Altogether our data demonstrate the significance of pre-validating housekeeping genes for accurate normalization and indicate Tfrc and B2m as the best endogenous controls in MS. ActB and Gapdh are not recommended for gene expression studies similar to the current one. PMID:26441545
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
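The basic concepts summarized above (selection, crossover, mutation) can be condensed into a short sketch. The operators, parameters, and the OneMax fitness below are generic textbook choices, not details of the cited project:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=60,
                   p_mut=0.05, seed=1):
    """Minimal genetic algorithm over fixed-length bit strings.

    Illustrative sketch only: tournament selection (size 2),
    single-point crossover, and per-bit mutation, maximizing
    `fitness` over `generations` iterations.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: better of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax toy problem: the fitness of a bit string is its number of 1s.
best = genetic_search(sum)
```

Despite its brevity, this loop exhibits the defining traits named above: the population is evaluated in parallel, and selection pressure plus recombination adaptively concentrates the search on fitter regions.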
Cieslak, Jakub; Mackowski, Mariusz; Czyzak-Runowska, Grazyna; Wojtowski, Jacek; Puppel, Kamila; Kuczynska, Beata; Pawlak, Piotr
2015-01-01
Apart from the well-known role of somatic cell count as a parameter reflecting the inflammatory status of the mammary gland, the composition of cells isolated from milk is considered a valuable material for gene expression studies in mammals. Due to its unique composition, in recent years an increasing interest in mare's milk consumption has been observed. Thus, investigating the genetic background of horse's milk variability presents an interesting study model. Relying on 39 milk samples collected from mares representing three breeds (Polish Primitive Horse, Polish Cold-blooded Horse, Polish Warmblood Horse), we aimed to investigate the utility of equine milk somatic cells as a source of mRNA and to screen for the best reference genes for RT-qPCR using the geNorm and NormFinder algorithms. The results showed that despite relatively low somatic cell counts in mare's milk, the amount and the quality of the extracted RNA are sufficient for gene expression studies. The analysis of the utility of 7 potential reference genes for the normalization of RT-qPCR experiments on equine milk somatic cells revealed some differences between the outcomes of the applied algorithms, although in both cases the KRT8 and TOP2B genes were identified as the most stable. Analysis by geNorm showed that a combination of 4 reference genes (ACTB, GAPDH, TOP2B and KRT8) is required for appropriate normalization of RT-qPCR experiments, whereas the NormFinder algorithm identified the combination of KRT8 and RPS9 as the most suitable. The trial study of the relative transcript abundance of the beta-casein gene with the use of various types and numbers of internal control genes confirmed once again that the selection of proper reference gene combinations is crucial for the final results of each real-time PCR experiment. PMID:26437076
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication.
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
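The last of the four approaches mentioned, combinatorial optimization by simulated annealing, follows a standard accept/reject loop that applies to haplotype vectors as to any discrete state space. Below is a generic sketch; the cost and neighbor functions in the toy usage are illustrative placeholders, not the authors' haplotyping formulation:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop (illustrative sketch).

    A worse neighbor is accepted with probability exp(-delta / T),
    which lets the search escape local optima; the temperature T
    decays geometrically by `cooling` each step.
    """
    rng = random.Random(seed)
    best, best_cost = state, cost(state)
    cur, cur_cost, t = state, best_cost, t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # Always accept improvements; accept worsening moves with
        # Boltzmann probability (note c > cur_cost here, so exp <= 1).
        if c <= cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling
    return best, best_cost

# Toy usage: minimize (x - 3)^2 over the integers, stepping by +/-1.
best, best_cost = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    state=50)
```

For haplotyping, the state would be a candidate haplotype vector, the neighbor move a local change of phase at one locus, and the cost a measure of incompatibility with the observed phenotypes; the loop itself is unchanged.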
Kapila, Neha; Kishore, Amit; Sodhi, Monika; Sharma, Ankita; Kumar, Pawan; Mohanty, A. K.; Jerath, Tanushri; Mukesh, M.
2013-01-01
Gene expression studies require appropriate normalization methods based on properly evaluated reference genes. To date, not many studies have been reported on the identification of suitable reference genes in buffaloes. The present study was undertaken to determine a panel of suitable reference genes in heat-stressed buffalo mammary epithelial cells (MECs). Briefly, MEC cultures from buffalo mammary gland were exposed to 42 °C for one hour and subsequently allowed to recover at 37 °C for different time intervals (from 30 min to 48 h). Three different algorithms, implemented in the geNorm, NormFinder, and BestKeeper software, were used to evaluate the stability of 16 potential reference genes from different functional classes. Our data identified RPL4, EEF1A1, and RPS23 as the most appropriate reference genes for normalization of qPCR data in heat-stressed buffalo MECs. PMID:25937980
2013-01-01
Background Phytoplasmas are phloem-limited, phytopathogenic, wall-less bacteria and represent a major threat to agriculture worldwide. They are transmitted in a persistent, propagative manner by phloem-sucking hemipteran insects. For gene expression studies based on mRNA quantification by RT-qPCR, the stability of housekeeping genes is crucial. The aim of this study was the identification of reference genes to study the effect of phytoplasma infection on gene expression of two leafhopper vector species. The identified reference genes will be useful tools to investigate differential gene expression of leafhopper vectors upon phytoplasma infection. Results The expression profiles of ribosomal 18S, actin, ATP synthase β, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and tropomyosin were determined in two leafhopper vector species (Hemiptera: Cicadellidae), both healthy and infected by “Candidatus Phytoplasma asteris” (chrysanthemum yellows phytoplasma strain, CYP). Insects were analyzed at three different times post acquisition, and expression stabilities of the selected genes were evaluated with the BestKeeper, geNorm and NormFinder algorithms. In Euscelidius variegatus, all genes under all treatments were stable and could serve as reference genes. In Macrosteles quadripunctulatus, BestKeeper and NormFinder analysis indicated ATP synthase β, tropomyosin and GAPDH as the most stable, whereas geNorm identified reliable genes only for early stages of infection. Conclusions In this study a validation of five candidate reference genes was performed with three algorithms, and housekeeping genes were identified for over-time transcript profiling of two leafhopper vector species infected by CYP. This work sets up an experimental system to study the molecular basis of phytoplasma multiplication in the insect body, in order to elucidate mechanisms of vector specificity. Most of the sequences provided in this study are new for leafhoppers, which are vectors of economically
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Validation of reference genes for RT-qPCR analysis in Herbaspirillum seropedicae.
Pessoa, Daniella Duarte Villarinho; Vidal, Marcia Soares; Baldani, José Ivo; Simoes-Araujo, Jean Luiz
2016-08-01
The RT-qPCR technique requires a validated set of reference genes to ensure consistent gene expression results. Expression stabilities of 9 genes from Herbaspirillum seropedicae, strain HRC54, grown with different carbon sources were calculated using geNorm and NormFinder, and the gene rpoA showed the best stability values. PMID:27302038
Ferraz, F B; Fernandez, J H
2016-01-01
Macrophages are essential components of the innate and adaptive immune responses, playing a decisive role in atherosclerosis, asthma, obesity, and cancer. The differential gene expression resulting from adhesion of macrophages to the extracellular matrix (ECM) has been studied in the J774A1 murine macrophage cell line using quantitative polymerase chain reaction (qPCR). The goal of this study was to identify housekeeping genes (HKGs) that remain stable and unaltered under normal culture conditions and in the presence of laminin after time lapses of 6 and 24 h. The expression stabilities of eight commonly used reference genes were analyzed by determining the comparative threshold cycle (ΔΔCt) values and by using the BestKeeper, NormFinder, and geNorm algorithms. BestKeeper analysis revealed that the glyceraldehyde-3-phosphate dehydrogenase (GAPDH), peptidylprolyl isomerase A (PPIA), and ribosomal protein L13a (RPL13A) genes were highly stable, confirming the results of the ΔΔCt analysis. On the other hand, NormFinder proposed RPL13A and beta-glucuronidase (GUSB) as the most suitable combination, and geNorm adjudged RPL13A, PPIA, and GUSB to be the most stable across all culture conditions. All programs discarded the use of actin beta and beta-2-microglobulin for normalization. The collected data indicated that RPL13A, PPIA, GAPDH, and GUSB are highly suitable as reference genes for qPCR analysis of murine macrophages under normal and ECM-simulated culture conditions. This study also emphasizes the importance of evaluating the HKGs used for normalization to ensure the accuracy of qPCR data. PMID:26985962
Julian, Guilherme Silva; Oliveira, Renato Watanabe de; Tufik, Sergio; Chagas, Jair Ribeiro
2016-01-01
Obstructive sleep apnea (OSA) has been associated with oxidative stress and various cardiovascular consequences, such as increased cardiovascular disease risk. Quantitative real-time PCR is frequently employed to assess changes in gene expression in experimental models. In this study, we analyzed the effects of chronic intermittent hypoxia (an experimental model of OSA) on housekeeping gene expression in the left cardiac ventricle of rats. Analyses via four different approaches, namely the geNorm, BestKeeper, and NormFinder algorithms and 2^(-ΔCt) (threshold cycle) data analysis, produced similar results: all genes were found to be suitable for use, with glyceraldehyde-3-phosphate dehydrogenase and 18S classified as the most and the least stable, respectively. The use of more than one housekeeping gene is strongly advised. PMID:27383935
Reference genes for qPCR assays in toxic metal and salinity stress in two flatworm model organisms.
Plusquin, Michelle; DeGheselle, Olivier; Cuypers, Ann; Geerdens, Ellen; Van Roten, Andromeda; Artois, Tom; Smeets, Karen
2012-03-01
The flatworm species Schmidtea mediterranea and Macrostomum lignano have become new and innovative model organisms in stem cell, regeneration and tissue homeostasis research. Because of their unique stem cell system, (lab) technical advantages and their phylogenetic position within the Metazoa, they are also ideal candidate model organisms for toxicity assays. As stress and biomarker screenings are often performed at the transcriptional level, the aim of this study was to establish a set of reference genes for qPCR experiments for these two model organisms in different stress situations. We examined the transcriptional stability of nine potential reference genes (actb, tubb, ck2, cox4, cys, rpl13, gapdh, gm2ap, plscr1) to assess those that are most stable during altered stress conditions (exposure to carcinogenic metals and salinity stress). The gene expression stability was evaluated by means of the geNorm and NormFinder algorithms. Sets of best reference genes in these analyses varied between different stress situations, although gm2ap and actb were stably transcribed during all tested combinations. In order to demonstrate the impact of bad normalisation, the stress-specific gene hsp90 was normalised to different sets of reference genes. In contrast to normalisation according to geNorm and NormFinder, normalisation of hsp90 in Macrostomum lignano during cadmium stress did not show a significant difference when normalised to gapdh alone. On the other hand, an increase in variability was noticed when normalising to all nine tested reference genes together. Testing appropriate reference genes is therefore strongly advisable in every new experimental condition. PMID:22080432
Hu, Meizhen; Hu, Wenbin; Xia, Zhiqiang; Zhou, Xincheng; Wang, Wenquan
2016-01-01
Reverse transcription quantitative real-time polymerase chain reaction (real-time PCR, also referred to as quantitative RT-PCR or RT-qPCR) is a highly sensitive and high-throughput method used to study gene expression. Despite the numerous advantages of RT-qPCR, its accuracy is strongly influenced by the stability of the internal reference genes used for normalization. To date, few studies on the identification of reference genes have been performed on cassava (Manihot esculenta Crantz). Therefore, we selected 26 candidate reference genes mainly via the three following channels: reference genes used in previous studies on cassava, the orthologs of the most stable Arabidopsis genes, and the sequences obtained from 32 cassava transcriptome sequence data. Then, we employed an ABI 7900 HT and SYBR Green PCR mix to assess the expression of these genes in 21 materials obtained from various cassava samples under different developmental and environmental conditions. The stability of gene expression was analyzed using two statistical algorithms, namely geNorm and NormFinder. geNorm software suggests the combination of cassava4.1_017977 and cassava4.1_006391 as sufficient reference genes for major cassava samples, the combination of cassava4.1_014335 and cassava4.1_006884 as the best choice for drought-stressed samples, and the combination of cassava4.1_012496 and cassava4.1_006391 as the optimal choice for normally grown samples. NormFinder software recommends cassava4.1_006884 or cassava4.1_006776 as the superior reference for qPCR analysis of different materials and organs of drought-stressed or normally grown cassava, respectively. These results provide an important resource for cassava reference genes under specific conditions. The limitations of these findings are also discussed. Furthermore, we suggest some strategies that may be used to select candidate reference genes. PMID:27242878
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Jacob, Francis; Guertler, Rea; Naim, Stephanie; Nixdorf, Sheri; Fedier, André; Hacker, Neville F.; Heinzelmann-Schwarz, Viola
2013-01-01
Reverse Transcription - quantitative Polymerase Chain Reaction (RT-qPCR) is a standard technique in most laboratories. The selection of reference genes is essential for data normalization and the selection of suitable reference genes remains critical. Our aim was to 1) review the literature since implementation of the MIQE guidelines in order to identify the degree of acceptance; 2) compare various algorithms in their expression stability; 3) identify a set of suitable and most reliable reference genes for a variety of human cancer cell lines. A PubMed database review was performed and publications since 2009 were selected. Twelve putative reference genes were profiled in normal and various cancer cell lines (n = 25) using 2-step RT-qPCR. Investigated reference genes were ranked according to their expression stability by five algorithms (geNorm, Normfinder, BestKeeper, comparative ΔCt, and RefFinder). Our review revealed 37 publications, with two thirds patient samples and one third cell lines. qPCR efficiency was given in 68.4% of all publications, but only 28.9% of all studies provided RNA/cDNA amount and standard curves. GeNorm and Normfinder algorithms were used in 60.5% in combination. In our selection of 25 cancer cell lines, we identified HSPCB, RRN18S, and RPS13 as the most stable expressed reference genes. In the subset of ovarian cancer cell lines, the reference genes were PPIA, RPS13 and SDHA, clearly demonstrating the necessity to select genes depending on the research focus. Moreover, a cohort of at least three suitable reference genes needs to be established in advance to the experiments, according to the guidelines. For establishing a set of reference genes for gene normalization we recommend the use of ideally three reference genes selected by at least three stability algorithms. The unfortunate lack of compliance to the MIQE guidelines reflects that these need to be further established in the research community. PMID:23554992
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
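The single-channel veto step analyzed in this record is, in essence, the thinning method for sampling the first arrival of an inhomogeneous Poisson process: trial scales are drawn from a constant overestimate g >= f(t) and accepted with probability f(t)/g. A minimal sketch under those assumptions (no cutoff, no second variable, no competing channels):

```python
import random

def veto_first_emission(f, g, rng):
    """First-arrival time of an inhomogeneous Poisson process with rate f(t),
    sampled by the veto (thinning) method with constant overestimate g >= f(t)."""
    t = 0.0
    while True:
        t += rng.expovariate(g)        # trial emission from the overestimate
        if rng.random() < f(t) / g:    # accept with probability f(t)/g
            return t                   # vetoed trials just advance t

# With f(t) = 1 and g = 2, half the trials are vetoed, yet the accepted
# times are still exponentially distributed with unit mean.
rng = random.Random(1)
samples = [veto_first_emission(lambda t: 1.0, 2.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

The key property, which the record's formalism makes precise, is that the distribution of the returned time is independent of the choice of overestimate g; only the efficiency (fraction of vetoed trials) changes.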
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
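The shift-and-mask search sketched in this record can be brute-forced in a few lines: try (shift, mask) combinations until every key maps to a distinct value, then answer membership queries with one shift, one mask, and one table probe. Function and parameter names below are illustrative, not from the original work:

```python
def synthesize_hash(keys, max_shift=32, max_mask_bits=12):
    """Search for (shift, mask) so that (key >> shift) & mask is unique
    per key; on success return the parameters plus a constant-time
    membership test built on the resulting lookup table."""
    for shift in range(max_shift):
        for bits in range(1, max_mask_bits + 1):
            mask = (1 << bits) - 1
            mapped = [(k >> shift) & mask for k in keys]
            if len(set(mapped)) == len(keys):       # collision-free mapping
                table = dict(zip(mapped, keys))
                def member(x, shift=shift, mask=mask, table=table):
                    return table.get((x >> shift) & mask) == x
                return shift, mask, member
    return None  # no solution within the searched parameter space

shift, mask, member = synthesize_hash([3, 17, 102, 255])
```

Because the synthesized mapping is collision-free by construction, `member` never needs a secondary hash or a search - the guarantee the record emphasizes - at the cost of an offline search that may fail for unlucky key sets.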
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
Evaluation of reference genes for gene expression studies in human brown adipose tissue
Taube, Magdalena; Andersson-Assarsson, Johanna C; Lindberg, Kristin; Pereira, Maria J; Gäbel, Markus; Svensson, Maria K; Eriksson, Jan W; Svensson, Per-Arne
2015-01-01
Human brown adipose tissue (BAT) has during the last 5 years been subject to increasing research interest, due to its putative function as a target for future obesity treatments. The most commonly used method for molecular studies of human BAT is the quantitative polymerase chain reaction (qPCR). This method requires normalization to a reference gene (genes with uniform expression under different experimental conditions, e.g. similar expression levels between human BAT and WAT), but so far no evaluation of reference genes for human BAT has been performed. Two different microarray datasets with samples containing human BAT were used to search for genes with low variability in expression levels. Seven genes (FAM96B, GNB1, GNB2, HUWE1, PSMB2, RING1 and TPT1) identified by microarray analysis, and 8 commonly used reference genes (18S, B2M, GAPDH, LRP10, PPIA, RPLP0, UBC, and YWHAZ) were selected and further analyzed by quantitative PCR in both BAT-containing perirenal adipose tissue and subcutaneous adipose tissue. Results were analyzed using 2 different algorithms (Normfinder and geNorm). Most of the commonly used reference genes displayed acceptably low variability (geNorm M-values <0.5) in the samples analyzed, but the novel reference genes identified by microarray displayed an even lower variability (M-values <0.25). Our data suggest that PSMB2, GNB2 and GNB1 are suitable novel reference genes for qPCR analysis of human BAT and we recommend that they are included in future gene expression studies of human BAT. PMID:26451284
Liu, Yong; Zhou, Xuguo
2015-01-01
Quantitative real-time PCR (qRT-PCR) is a powerful technique to quantify gene expression. To standardize gene expression studies and obtain more accurate qRT-PCR analysis, normalization relative to consistently expressed housekeeping genes (HKGs) is required. In this study, ten candidate HKGs including elongation factor 1 α (EF1A), ribosomal protein L11 (RPL11), ribosomal protein L14 (RPL14), ribosomal protein S8 (RPS8), ribosomal protein S23 (RPS23), NADH-ubiquinone oxidoreductase (NADH), vacuolar-type H+-ATPase (ATPase), heat shock protein 70 (HSP70), 18S ribosomal RNA (18S), and 12S ribosomal RNA (12S) from the cowpea aphid, Aphis craccivora Koch were selected. Four algorithms, geNorm, Normfinder, BestKeeper, and the ΔCt method were employed to evaluate the expression profiles of these HKGs as endogenous controls across different developmental stages and temperature regimes. Based on RefFinder, which integrates all four analytical algorithms to compare and rank the candidate HKGs, RPS8, RPL14, and RPL11 were the three most stable HKGs across different developmental stages and temperature conditions. This study is the first step to establish a standardized qRT-PCR analysis in A. craccivora following the MIQE guideline. Results from this study lay a foundation for the genomics and functional genomics research in this sap-sucking insect pest with substantial economic impact. PMID:26090683
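RefFinder, used in this and several neighboring records, combines the per-algorithm rankings into one comprehensive rank; the usual aggregation is the geometric mean of each gene's ranks. A minimal sketch with hypothetical gene labels (this mimics the aggregation step only, not the underlying stability algorithms):

```python
import math

def aggregate_ranks(rankings):
    """rankings: list of dicts mapping gene -> rank (1 = most stable),
    one dict per stability algorithm. Returns genes ordered by the
    geometric mean of their ranks (most stable first)."""
    genes = list(rankings[0])
    geo_mean = {
        g: math.prod(r[g] for r in rankings) ** (1.0 / len(rankings))
        for g in genes
    }
    return sorted(genes, key=lambda g: geo_mean[g])

# Three hypothetical algorithms mostly agree that gene A is most stable.
order = aggregate_ranks([
    {"A": 1, "B": 2, "C": 3},
    {"A": 2, "B": 1, "C": 3},
    {"A": 1, "B": 3, "C": 2},
])
```

The geometric mean damps the influence of a single dissenting algorithm, which is why a gene ranked first by most methods usually tops the comprehensive ranking even when one method disagrees.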
Yang, Chunxiao; Pan, Huipeng; Noland, Jeffrey Edward; Zhang, Deyong; Zhang, Zhanhong; Liu, Yong; Zhou, Xuguo
2015-01-01
Reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) is a reliable technique for quantifying gene expression across various biological processes, which requires a set of suitable reference genes to normalize the expression data. Coleomegilla maculata (Coleoptera: Coccinellidae) is one of the most extensively used biological control agents in the field to manage arthropod pest species. In this study, expression profiles of 16 housekeeping genes selected from C. maculata were cloned and investigated. The performance of these candidates as endogenous controls under specific experimental conditions was evaluated by dedicated algorithms, including geNorm, Normfinder, BestKeeper, and the ΔCt method. In addition, RefFinder, a comprehensive platform integrating all the above-mentioned algorithms, ranked the overall stability of these candidate genes. As a result, various sets of suitable reference genes were recommended specifically for experiments involving different tissues, developmental stages, sex, and C. maculata larvae treated with dietary double-stranded RNA. This study represents the critical first step to establish a standardized RT-qPCR protocol for functional genomics research in the ladybeetle C. maculata. Furthermore, it lays the foundation for conducting ecological risk assessment of RNAi-based gene silencing biotechnologies on non-target organisms; in this case, a key predatory biological control agent. PMID:26656102
Chen, I-Hua; Chou, Lien-Siang; Chou, Shih-Jen; Wang, Jiann-Hsiung; Stott, Jeffrey; Blanchard, Myra; Jen, I-Fan; Yang, Wei-Cheng
2015-01-01
Quantitative RT-PCR is often used as a research tool directed at gene transcription. Selection of optimal housekeeping genes (HKGs) as reference genes is critical to establishing sensitive and reproducible qRT-PCR-based assays. The current study was designed to identify the appropriate reference genes in blood leukocytes of bottlenose dolphins (Tursiops truncatus) for gene transcription research. Seventy-five blood samples collected from 7 bottlenose dolphins were used to analyze 15 candidate HKGs (ACTB, B2M, GAPDH, HPRT1, LDHB, PGK1, RPL4, RPL8, RPL18, RPS9, RPS18, TFRC, YWHAZ, LDHA, SDHA). HKG stability in qRT-PCR was determined using geNorm, NormFinder, BestKeeper and comparative delta Ct algorithms. Utilization of RefFinder, which combined all 4 algorithms, suggested that PGK1, HPRT1 and RPL4 were the most stable HKGs in bottlenose dolphin blood. Gene transcription perturbations in blood can serve as an indication of health status in cetaceans as it occurs prior to alterations in hematology and chemistry. This study identified HKGs that could be used in gene transcript studies, which may contribute to further mRNA relative quantification research in the peripheral blood leukocytes in captive cetaceans. PMID:26486099
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
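The abstract does not reproduce the algorithm itself, but one classic square-root-only iteration for cube roots (a sketch of the general idea, not necessarily the paper's exact method) repeatedly applies y ← √(√(a·y)); at the fixed point y⁴ = a·y, i.e. y³ = a:

```python
import math

def cube_root(a, iterations=20):
    """Approximate a**(1/3) for a > 0 using only square roots.

    Iterates y <- sqrt(sqrt(a * y)). At the fixed point y**4 = a * y,
    so y**3 = a. Convergence is linear (error shrinks ~4x per step).
    Illustrative sketch; the paper's exact scheme is not given here.
    """
    y = 1.0
    for _ in range(iterations):
        y = math.sqrt(math.sqrt(a * y))
    return y
```

On a calculator, the same loop is just: multiply the display by a, press the square-root key twice, and repeat.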
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Project resource reallocation algorithm
NASA Technical Reports Server (NTRS)
Myers, J. E.
1981-01-01
A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
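The described linear expansion or contraction of the baseline resource distribution can be sketched by stretching the cumulative cost curve and re-slicing it into the new number of periods (a minimal illustration of the proportional redistribution only; the PACE card format is not modeled, and function and variable names are assumptions):

```python
def rescale_distribution(baseline, new_len):
    """Linearly expand/contract a per-period resource distribution.

    baseline: list of per-period costs for the original schedule.
    new_len: number of periods in the revised schedule.
    The cumulative cost curve is stretched linearly, so total cost is
    preserved while the time-phasing follows the schedule change.
    """
    total = len(baseline)
    cum = [0.0]  # cumulative cost at each original period boundary
    for c in baseline:
        cum.append(cum[-1] + c)

    def cum_at(t):
        # Linear interpolation of the cumulative curve at time t
        # (measured in original periods).
        if t >= total:
            return cum[-1]
        i = int(t)
        return cum[i] + (t - i) * (cum[i + 1] - cum[i])

    scale = total / new_len  # original periods per new period
    return [cum_at(p * scale) - cum_at((p - 1) * scale)
            for p in range(1, new_len + 1)]
```

For example, stretching a two-period distribution [10, 20] to four periods yields [5, 5, 10, 10], preserving the 30-unit total.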
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but with brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Programming parallel vision algorithms
Shapiro, L.G.
1988-01-01
Computer vision involves processing large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
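For context, the best-known baseline in this family is the classic greedy 1/2-approximation, which scans edges in decreasing weight order and keeps an edge whenever both endpoints are still free. The sketch below illustrates that reference algorithm, not the paper's new multithreaded one:

```python
def greedy_matching(edges):
    """Classic greedy 1/2-approximation for maximum weighted matching.

    edges: iterable of (u, v, weight) tuples.
    Returns the list of edges kept in the matching. The total weight
    is guaranteed to be at least half the optimum.
    """
    matched = set()   # vertices already covered by the matching
    result = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            result.append((u, v, w))
    return result
```

Sorting makes this O(m log m); the paper's contribution lies in achieving comparable quality with algorithms that avoid this inherently sequential global ordering.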
Sinha, Pallavi; Saxena, Rachit K.; Singh, Vikas K.; Krishnamurthy, L.; Varshney, Rajeev K.
2015-01-01
To identify stable housekeeping genes as a reference for expression analysis under heat and salt stress conditions in pigeonpea, the relative expression variation for 10 commonly used housekeeping genes (EF1α, UBQ10, GAPDH, 18Sr RNA, 25Sr RNA, TUB6, ACT1, IF4α, UBC, and HSP90) was studied in root, stem, and leaf tissues of Asha (ICPL 87119), a leading pigeonpea variety. Three statistical algorithms, geNorm, NormFinder, and BestKeeper, were used to define the stability of candidate genes. Under heat stress, UBC, HSP90, and GAPDH were found to be the most stable reference genes. In the case of salinity stress, GAPDH followed by UBC and HSP90 were identified as the most stable reference genes. Subsequently, the identified genes were validated using qRT-PCR-based gene expression analysis of two universal stress-responsive genes, uspA and uspB. The relative quantification of these two genes varied according to the internal controls used (most stable, least stable, and a combination of most stable and least stable housekeeping genes), confirming the importance of choosing and validating internal controls in such experiments. The identified and validated housekeeping genes will facilitate gene expression studies under heat and salt stress conditions in pigeonpea. PMID:27242803
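Many of the records in this collection rank candidate reference genes with geNorm. Its stability measure M, the average standard deviation of pairwise log-ratios across samples, can be sketched as follows (gene names and values below are illustrative only):

```python
import statistics

def genorm_m(log_expr):
    """Compute the geNorm stability measure M for each candidate gene.

    log_expr: dict mapping gene -> list of log2 expression values,
    one per sample (e.g. derived from Ct values). For gene j, M_j is
    the mean, over all other genes k, of the standard deviation across
    samples of log2(expr_j / expr_k). Lower M means more stable.
    """
    genes = list(log_expr)
    m = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            # log-ratio across samples; a stable pair has constant ratio
            ratios = [a - b for a, b in zip(log_expr[j], log_expr[k])]
            sds.append(statistics.stdev(ratios))
        m[j] = sum(sds) / len(sds)
    return m
```

geNorm then iteratively removes the gene with the highest M and recomputes, which is why it recommends a combination of the remaining most stable genes rather than a single normalizer.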
Fiallos-Jurado, Jennifer; Pollier, Jacob; Moses, Tessa; Arendt, Philipp; Barriga-Medina, Noelia; Morillo, Eduardo; Arahana, Venancio; de Lourdes Torres, Maria; Goossens, Alain; Leon-Reyes, Antonio
2016-09-01
Quinoa (Chenopodium quinoa Willd.) is a highly nutritious pseudocereal with an outstanding protein, vitamin, mineral and nutraceutical content. The leaves, flowers and seed coat of quinoa contain triterpenoid saponins, which impart bitterness to the grain and make them unpalatable without postharvest removal of the saponins. In this study, we quantified saponin content in quinoa leaves from Ecuadorian sweet and bitter genotypes and assessed the expression of saponin biosynthetic genes in leaf samples elicited with methyl jasmonate (MeJA). We found saponin accumulation in leaves after MeJA treatment in both ecotypes tested. As no reference genes were available to perform qPCR in quinoa, we mined publicly available RNA-Seq data for orthologs of 22 genes known to be stably expressed in Arabidopsis thaliana and evaluated them with the geNorm, NormFinder and BestKeeper algorithms. The quinoa ortholog of At2g28390 (Monensin Sensitivity 1, MON1) was stably expressed and chosen as a suitable reference gene for qPCR analysis. Candidate saponin biosynthesis genes were screened in the quinoa RNA-Seq data and subsequent functional characterization in yeast led to the identification of CqbAS1, CqCYP716A78 and CqCYP716A79. These genes were found to be induced by MeJA, suggesting this phytohormone might also modulate saponin biosynthesis in quinoa leaves. Knowledge of the saponin biosynthesis and its regulation in quinoa may aid the further development of sweet cultivars that do not require postharvest processing. PMID:27457995
Li, Meng-Yao; Wang, Feng; Jiang, Qian; Wang, Guan-Long; Tian, Chang; Xiong, Ai-Sheng
2016-01-01
A suitable reference gene is an important prerequisite for guaranteeing accurate and reliable results in qPCR analysis. Celery is one of the representative vegetables in Apiaceae and is widely cultivated and consumed worldwide. However, no reports have been previously published concerning reference genes in celery. In this study, the expression stabilities of nine candidate reference genes in leaf blade and petiole at different developmental stages were evaluated using three statistical algorithms, geNorm, NormFinder, and BestKeeper. Our results showed that TUB-B, TUB-A, and UBC were the most stable reference genes among all tested samples. GAPDH showed the maximum stability for most individual samples, while UBQ displayed the minimum stability. To further validate the stability of the reference genes, the expression pattern of AgAP2-2 was calculated using the selected genes for normalization. In addition, the expression patterns of several development-related genes were studied using the selected reference gene. Our results will be beneficial for further studies on gene transcription in celery. PMID:27014330
Quantitative RT-PCR Gene Evaluation and RNA Interference in the Brown Marmorated Stink Bug
Bansal, Raman; Mittapelly, Priyanka; Chen, Yuting; Mamidala, Praveen; Zhao, Chaoyang; Michel, Andy
2016-01-01
The brown marmorated stink bug (Halyomorpha halys) has emerged as one of the most important invasive insect pests in the United States. Functional genomics in H. halys remains unexplored as molecular resources in this insect have only recently been developed. To facilitate functional genomics research, we evaluated ten common insect housekeeping genes (RPS26, EF1A, FAU, UBE4A, ARL2, ARP8, GUS, TBP, TIF6 and RPL9) for their stability across various treatments in H. halys. Our treatments included two biotic factors (tissues and developmental stages) and two stress treatments (RNAi injection and starvation). Reference gene stability was determined using three software algorithms (geNorm, NormFinder, BestKeeper) and a web-based tool (RefFinder). The qRT-PCR results indicated that ARP8 and UBE4A exhibit the most stable expression across tissues and developmental stages, ARL2 and FAU for the dsRNA treatment, and TBP and UBE4A for the starvation treatment. Following the dsRNA treatment, all genes except GUS showed relatively stable expression. To demonstrate the utility of validated reference genes in accurate gene expression analysis and to explore gene silencing in H. halys, we performed RNAi by administering dsRNA of a target gene (catalase) through microinjection. A successful RNAi response, with over 90% reduction in expression of the target gene, was observed. PMID:27144586
Liu, Yong; Zhou, Xuguo
2014-01-01
To facilitate gene expression studies and obtain accurate qRT-PCR results, normalization relative to stably expressed housekeeping genes is required. In this study, expression profiles of 11 candidate reference genes, including actin (Actin), elongation factor 1α (EF1A), TATA-box-binding protein (TATA), ribosomal protein L12 (RPL12), β-tubulin (Tubulin), NADH dehydrogenase (NADH), vacuolar-type H+-ATPase (v-ATPase), succinate dehydrogenase B (SDHB), 28S ribosomal RNA (28S), 16S ribosomal RNA (16S), and 18S ribosomal RNA (18S) from the pea aphid Acyrthosiphon pisum, under different developmental stages and temperature conditions, were investigated. A total of four analytical tools, geNorm, NormFinder, BestKeeper, and the ΔCt method, were used to evaluate the suitability of these genes as endogenous controls. According to RefFinder, a web-based software tool which integrates all four above-mentioned algorithms to compare and rank the reference genes, SDHB, 16S, and NADH were the three most stable housekeeping genes across developmental stages and temperatures. This work is intended to establish a standardized qRT-PCR protocol in the pea aphid and serves as a starting point for genomics and functional genomics research in this emerging insect model. PMID:25423476
NASA Astrophysics Data System (ADS)
Zhao, Ye; Chen, Muyan; Wang, Tianming; Sun, Lina; Xu, Dongxue; Yang, Hongsheng
2014-11-01
Quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR) is a technique that is widely used for gene expression analysis, and its accuracy depends on the expression stability of the internal reference genes used as normalization factors. However, many applications of qRT-PCR used housekeeping genes as internal controls without validation. In this study, the expression stability of eight candidate reference genes in three tissues (intestine, respiratory tree, and muscle) of the sea cucumber Apostichopus japonicus was assessed during normal growth and aestivation using the geNorm, NormFinder, delta CT, and RefFinder algorithms. The results indicate that the reference genes exhibited significantly different expression patterns among the three tissues during aestivation. In general, the β-tubulin (TUBB) gene was relatively stable in the intestine and respiratory tree tissues. The optimal reference gene combination for intestine was 40S ribosomal protein S18 (RPS18), TUBB, and NADH dehydrogenase (NADH); for respiratory tree, it was β-actin (ACTB), TUBB, and succinate dehydrogenase cytochrome B small subunit (SDHC); and for muscle it was α-tubulin (TUBA) and NADH dehydrogenase [ubiquinone] 1 α subcomplex subunit 13 (NDUFA13). These combinations of internal control genes should be considered for use in further studies of gene expression in A. japonicus during aestivation.
Chao, Jinquan; Yang, Shuguang; Chen, Yueyi; Tian, Wei-Min
2016-01-01
Latex flow caused by exploitation (tapping) is effective in enhancing latex regeneration in the laticifer cells of rubber tree, making it suitable for screening appropriate reference genes for analysis of the expression of latex regeneration-related genes by quantitative real-time PCR (qRT-PCR). In the present study, the expression stability of 23 candidate reference genes was evaluated on the basis of latex flow using the geNorm and NormFinder algorithms. Ubiquitin-protein ligase 2a (UBC2a) and ubiquitin-protein ligase 2b (UBC2b) were the two most stable genes among the selected candidates in rubber tree clones with differing durations of latex flow. The two genes were also ranked highly in previous reference gene screens across different tissues and experimental conditions. By contrast, the transcripts of latex regeneration-related genes fluctuated significantly during latex flow. The results suggest that screening during latex flow is an efficient and effective approach for selecting reference genes for qRT-PCR. PMID:27524995
Lian, Tiantian; Yang, Tao; Liu, Guijun; Sun, Junde; Dong, Caihong
2014-07-01
Cordyceps militaris is considered a model organism for the study of Cordyceps species, which are highly prized in traditional Chinese medicine. Gene expression analysis has become more popular and important in studies of this fungus. Reference gene validation under different experimental conditions is crucial for RT-qPCR analysis. In this study, eight candidate reference genes, actin, cox5, gpd, rpb1, tef1, try, tub, and ubi, were selected and their expression stability was evaluated in C. militaris samples using four algorithms: geNorm, NormFinder, BestKeeper, and the comparative ΔCt method. Three sample sets were included: five different developmental stages cultured on wheat medium, samples cultured on pupae, and a pool of all samples. The results showed that rpb1 was the best reference gene during all developmental stages examined, while the most common reference genes, actin and tub, were not suitable internal controls. Cox5 also performed poorly and was less stable in our analysis. The ranks of ubi and gpd were inconsistent across sample sets and methods. Our results provide guidelines for reference gene selection at different developmental stages and also represent a foundation for more accurate and widespread use of RT-qPCR in C. militaris gene expression analysis. PMID:24953133
Qi, Shuai; Yang, Liwen; Wen, Xiaohui; Hong, Yan; Song, Xuebin; Zhang, Mengmeng; Dai, Silan
2016-01-01
Quantitative real-time PCR (qPCR) is a popular and powerful tool used to understand the molecular mechanisms of flower development. However, the accuracy of this approach depends on the stability of reference genes. The capitulum of chrysanthemums is distinctive, consisting of ray florets and disc florets. There are obvious differences between the two types of florets in symmetry, gender, histological structure, and function, and the ray florets have various shapes. The objective of the present study was to identify stable reference genes in Chrysanthemum morifolium and Chrysanthemum lavandulifolium during flower development. Nine candidate reference genes were selected and evaluated for their expression stability across samples during flower development, and their stability was validated by four different algorithms (BestKeeper, NormFinder, geNorm, and RefFinder). SAND (SAND family protein) was found to be the most stably expressed gene in all samples and different tissues during C. lavandulifolium development. Both SAND and PGK (phosphoglycerate kinase) were the most stable in Chinese large-flowered chrysanthemum cultivars, and PGK was the best in potted chrysanthemums. The best reference genes differed among varieties, as their genetic backgrounds are complex. These studies provide guidance for selecting reference genes for analyzing the expression patterns of floral development genes in chrysanthemums. PMID:27014310
Identification of suitable reference genes in bone marrow stromal cells from osteoarthritic donors.
Schildberg, Theresa; Rauh, Juliane; Bretschneider, Henriette; Stiehler, Maik
2013-11-01
Bone marrow stromal cells (BMSCs) are key cellular components for musculoskeletal tissue engineering strategies. Furthermore, recent data suggest that BMSCs are involved in the development of osteoarthritis (OA), a frequently occurring degenerative joint disease. Reliable reference genes for the molecular evaluation of BMSCs derived from donors exhibiting OA as a primary co-morbidity have not yet been reported. Hence, the aim of the study was to identify reference genes suitable for comparative gene expression analyses using OA-BMSCs. Passage 1 bone marrow-derived BMSCs were isolated from n=13 patients with advanced-stage idiopathic hip osteoarthritis and n=15 age-matched healthy donors. The expression of 31 putative reference genes was analyzed by quantitative reverse transcription polymerase chain reaction (qRT-PCR) using a commercially available TaqMan(®) assay. mRNA expression stability was determined by calculating the coefficient of variation (CV) and afterwards validated using the geNorm and NormFinder algorithms. Importin 8 (IPO8), TATA box binding protein (TBP), and cancer susceptibility candidate 3 (CASC3) were identified as the most stable reference genes. Notably, commonly used reference genes, e.g. beta-actin (ACTB) and beta-2-microglobulin (B2M), were among the most unstable genes. For normalization of gene expression data of OA-BMSCs, the combined use of the IPO8, TBP, and CASC3 genes is recommended. PMID:24080205
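The study above first screens candidates by coefficient of variation before validating with geNorm and NormFinder. That first-pass ranking can be sketched as follows (gene names and values are illustrative only):

```python
import statistics

def cv_rank(expr):
    """Rank candidate reference genes by coefficient of variation (CV).

    expr: dict mapping gene -> list of expression values across samples.
    CV = stdev / mean; a lower CV suggests more stable expression.
    Returns gene names ordered from most to least stable.
    """
    cvs = {g: statistics.stdev(v) / statistics.mean(v)
           for g, v in expr.items()}
    return sorted(cvs, key=cvs.get)
```

The CV is scale-free, so genes with very different absolute expression levels can be compared directly; it is only a screen, though, since a gene can have a low CV while still co-varying with the experimental condition.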
Petriccione, Milena; Mastrobuoni, Francesco; Zampella, Luigi; Scortichini, Marco
2015-01-01
Normalization of data, by choosing the appropriate reference genes (RGs), is fundamental for obtaining reliable results in reverse transcription-quantitative PCR (RT-qPCR). In this study, we assessed Actinidia deliciosa leaves inoculated with two doses of Pseudomonas syringae pv. actinidiae over a period of 13 days for the expression profile of nine candidate RGs. Their expression stability was calculated using four algorithms: geNorm, NormFinder, BestKeeper and the deltaCt method. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and protein phosphatase 2A (PP2A) were the most stable genes, while β-tubulin and 7s-globulin were the least stable. Expression analysis of three target genes, chosen for RG validation, encoding the reactive oxygen species scavenging enzymes ascorbate peroxidase (APX), superoxide dismutase (SOD) and catalase (CAT), indicated that a combination of stable RGs, such as GAPDH and PP2A, can lead to an accurate quantification of the expression levels of such target genes. The APX level varied during the experimental time course and according to the inoculum dose, whereas both SOD and CAT were down-regulated during the first four days and up-regulated afterwards, irrespective of inoculum dose. These results can be useful for better elucidating the molecular interaction in the A. deliciosa/P. s. pv. actinidiae pathosystem and for RG selection in bacteria-plant pathosystems. PMID:26581656
Pereira-Fantini, Prue M.; Rajapaksa, Anushi E.; Oakley, Regina; Tingay, David G.
2016-01-01
Preterm newborns often require invasive support; however, even brief periods of supported ventilation applied inappropriately to the lung can cause injury. Real-time quantitative reverse transcriptase-PCR (qPCR) has been extensively employed in studies of ventilation-induced lung injury, with the reference gene 18S ribosomal RNA (18S RNA) most commonly employed as the internal control. Whilst the results of these studies depend on the stability of the reference gene employed, the use of 18S RNA has not been validated. In this study the expression profile of five candidate reference genes (18S RNA, ACTB, GAPDH, TOP1 and RPS29) in two geographical locations was evaluated by dedicated algorithms, including geNorm, NormFinder, BestKeeper and the ΔCt method, and the overall stability of these candidate genes was determined (RefFinder). Secondary studies examined the influence of reference gene choice on the relative expression of two well-validated lung injury markers, EGR1 and IL1B. In the preterm lamb model of lung injury, RPS29 reference gene expression was influenced by tissue location; we also determined that individual ventilation strategies influence reference gene stability. Whilst 18S RNA is the most commonly employed reference gene in preterm lamb lung studies, our results suggest that GAPDH is a more suitable candidate. PMID:27210246
Saint-Marcoux, Denis; Proust, Hélène; Dolan, Liam; Langdale, Jane A
2015-01-01
Real-time quantitative polymerase chain reaction (qPCR) has become widely used as a method to compare gene transcript levels across different conditions. However, selection of suitable reference genes to normalize qPCR data is required for accurate transcript level analysis. Recently, Marchantia polymorpha has been adopted as a model for the study of liverwort development and land plant evolution. Identification of appropriate reference genes has therefore become a necessity for gene expression studies. In this study, transcript levels of eleven candidate reference genes have been analyzed across a range of biological contexts that encompass abiotic stress, hormone treatment and different developmental stages. The consistency of transcript levels was assessed using both geNorm and NormFinder algorithms, and a consensus ranking of the different candidate genes was then obtained. MpAPT and MpACT showed relatively constant transcript levels across all conditions tested whereas the transcript levels of other candidate genes were clearly influenced by experimental conditions. By analyzing transcript levels of phosphate and nitrate starvation reporter genes, we confirmed that MpAPT and MpACT are suitable reference genes in M. polymorpha and also demonstrated that normalization with an inappropriate gene can lead to erroneous analysis of qPCR data. PMID:25798897
Schaeck, M; De Spiegelaere, W; De Craene, J; Van den Broeck, W; De Spiegeleer, B; Burvenich, C; Haesebrouck, F; Decostere, A
2016-01-01
The increasing demand for a sustainable larviculture has promoted research regarding environmental parameters, diseases and nutrition, intersecting at the mucosal surface of the gastrointestinal tract of fish larvae. The combination of laser capture microdissection (LCM) and gene expression experiments allows cell-specific expression profiling. This study aimed at optimizing an LCM protocol for intestinal tissue of sea bass larvae. Furthermore, a 3'/5' integrity assay was developed for LCM samples of fish tissue, which comprise low RNA concentrations. In addition, reliable reference genes for performing qPCR in larval sea bass gene expression studies were identified, as data normalization is critical in gene expression experiments using RT-qPCR. We demonstrate that a careful optimization of the LCM procedure allows recovery of high-quality mRNA from defined cell populations in complex intestinal tissues. According to the geNorm and NormFinder algorithms, ef1a, rpl13a, rps18 and faua were the most stable genes to be implemented as reference genes for an appropriate normalization of intestinal tissue from sea bass across a range of experimental settings. The methodology developed here offers a rapid and valuable approach to characterize cells/tissues in the intestinal tissue of fish larvae and their changes following pathogen exposure, nutritional/environmental changes, probiotic supplementation or a combination thereof. PMID:26883391
Engdahl, Elin; Dunn, Nicky; Fogdell-Hahn, Anna
2016-01-01
When using relative gene expression for quantification of RNA, it is crucial that the reference genes used for normalization do not change with the experimental condition. We aimed at investigating the expression stability of commonly used reference genes during Human herpesvirus 6B (HHV-6B) infection. Expression of eight commonly used reference genes was investigated with quantitative PCR in a T-cell line infected with HHV-6B. The stability of the genes was investigated using the 2^(-ΔΔCT) method and the algorithms BestKeeper, geNorm and NormFinder. Our results indicate that peptidylprolyl isomerase A (PPIA) is the most stably expressed gene while TATA box binding protein (TBP) is the least stably expressed gene during HHV-6B infection. In a confirmatory experiment, TBP was demonstrated to be dose- and time-dependently upregulated by HHV-6B. The stability of PPIA is in line with other studies investigating different herpesvirus infections, whereas the finding that HHV-6B significantly upregulates TBP is novel and most likely specific to HHV-6B. PMID:26542463
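The 2^(-ΔΔCT) method referenced in the study above has a simple closed form. This is a minimal sketch (the function name is hypothetical; the Livak method assumes approximately 100% amplification efficiency for both the target and reference assays):

```python
def fold_change_ddct(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Livak 2^(-ΔΔCT) relative quantification.

    ΔCt = Cq(target) - Cq(reference) normalizes the target to the
    reference gene within each condition; ΔΔCt = ΔCt(treated) -
    ΔCt(control) compares conditions. A result > 1 means the target is
    upregulated in the treated sample relative to the control.
    """
    delta_ct_treated = cq_target_treated - cq_ref_treated
    delta_ct_control = cq_target_control - cq_ref_control
    return 2.0 ** -(delta_ct_treated - delta_ct_control)
```

For example, with illustrative Cq values, fold_change_ddct(24.0, 18.0, 26.0, 18.0) gives a four-fold upregulation: the target's ΔCt drops from 8 to 6 cycles, and each cycle corresponds to a doubling. This also makes concrete why an unstable reference gene is harmful: if the reference Cq itself shifts with treatment (as the study found for TBP), the shift propagates directly into the reported fold change.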
Bao, Wenlong; Qu, Yanli; Shan, Xiaoyi; Wan, Yinglang
2016-01-01
Cunninghamia lanceolata (Chinese fir) is a fast-growing and commercially important conifer of the Cupressaceae family. Due to the unavailability of complete genome sequences and the relatively poor genetic background information of the Chinese fir, it is necessary to identify and analyze the expression levels of suitable housekeeping genes (HKGs) as internal references for precise analysis. Based on the results of database analysis and transcriptome sequencing, we chose five candidate HKGs (Actin, GAPDH, EF1a, 18S rRNA, and UBQ) with conserved sequences in the Chinese fir and related species for quantitative analysis. The expression levels of these HKGs in roots and cotyledons under five different abiotic stresses at different time intervals were measured by qRT-PCR. The data were statistically analyzed using the following algorithms: NormFinder, BestKeeper, and geNorm. Finally, RankAggreg was applied to merge the rankings generated by the three programs into a consensus ranking. The expression levels of these HKGs showed variable stability under different abiotic stresses. Among these, Actin was the most stable internal control in root, and GAPDH was the most stable housekeeping gene in cotyledon. We have also described an experimental procedure for selecting HKGs based on the de novo sequencing databases of other non-model plants. PMID:27483238
Petriccione, Milena; Mastrobuoni, Francesco; Zampella, Luigi; Scortichini, Marco
2015-01-01
Normalization of data, by choosing the appropriate reference genes (RGs), is fundamental for obtaining reliable results in reverse transcription-quantitative PCR (RT-qPCR). In this study, we assessed the expression profile of nine candidate RGs in Actinidia deliciosa leaves inoculated with two doses of Pseudomonas syringae pv. actinidiae over a period of 13 days. Their expression stability was calculated using four algorithms: geNorm, NormFinder, BestKeeper and the deltaCt method. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and protein phosphatase 2A (PP2A) were the most stable genes, while β-tubulin and 7s-globulin were the least stable. Expression analysis of three target genes chosen for RG validation, encoding the reactive oxygen species scavenging enzymes ascorbate peroxidase (APX), superoxide dismutase (SOD) and catalase (CAT), indicated that a combination of stable RGs, such as GAPDH and PP2A, can lead to an accurate quantification of the expression levels of such target genes. The APX level varied during the experiment time course and according to the inoculum doses, whereas both SOD and CAT were down-regulated during the first four days and up-regulated afterwards, irrespective of inoculum dose. These results can be useful for better elucidating the molecular interaction in the A. deliciosa/P. s. pv. actinidiae pathosystem and for RG selection in bacteria-plant pathosystems. PMID:26581656
Sheng, X G; Zhao, Z Q; Yu, H F; Wang, J S; Zheng, C F; Gu, H H
2016-01-01
Quantitative reverse-transcription PCR (qRT-PCR) is a versatile technique for the analysis of gene expression. The selection of stable reference genes is essential for the application of this technique. Cauliflower (Brassica oleracea L. var. botrytis) is a commonly consumed vegetable that is rich in vitamins, calcium, and iron. Thus far, to our knowledge, there have been no reports on the validation of suitable reference genes for the data normalization of qRT-PCR in cauliflower. In the present study, we analyzed 12 candidate housekeeping genes in cauliflower subjected to different abiotic stresses, hormone treatment conditions, and accessions. The geNorm and NormFinder algorithms were used to assess the expression stability of these genes. ACT2 and TIP41 were selected as suitable reference genes across all experimental samples in this study. When different accessions were compared, ACT2 and UNK3 were found to be the most suitable reference genes. In the hormone and abiotic stress treatments, ACT2, TIP41, and UNK2 were the most stably expressed. Our study also provided guidelines for selecting the best reference genes under various experimental conditions. PMID:27525844
Selection of Reference Genes for Expression Studies of Xenobiotic Adaptation in Tetranychus urticae
Morales, Mariany Ashanty; Mendoza, Bianca Marie; Lavine, Laura Corley; Lavine, Mark Daniel; Walsh, Douglas Bruce; Zhu, Fang
2016-01-01
Quantitative real-time PCR (qRT-PCR) is an extensively used, high-throughput method to analyze transcriptional expression of genes of interest. An appropriate normalization strategy with reliable reference genes is required for calculating gene expression across diverse experimental conditions. In this study, we aimed to identify the most stable reference genes for expression studies of xenobiotic adaptation in Tetranychus urticae, an extremely polyphagous herbivore causing significant yield losses in agriculture. We chose eight commonly used housekeeping genes as candidates. The qRT-PCR expression data for these genes were evaluated from seven populations: a susceptible and three acaricide-resistant populations feeding on lima beans, and three other susceptible populations which had been shifted from lima beans to three other host plant species. The stability of the candidate reference genes was then assessed using four different algorithms (comparative ΔCt method, geNorm, NormFinder, and BestKeeper). Additionally, we used an online web-based tool (RefFinder) to assign an overall final rank to each candidate gene. Our study found that CycA and Rp49 are best for investigating gene expression in acaricide-susceptible and -resistant populations; GAPDH, Rp49, and Rpl18 are best for host plant shift studies; and GAPDH and Rp49 were the most stable reference genes when investigating gene expression under changes in both experimental conditions. These results will facilitate research revealing the molecular mechanisms underlying the xenobiotic adaptation of this notorious agricultural pest. PMID:27570487
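The RefFinder overall rank used in the study above is commonly computed as the geometric mean of each gene's per-algorithm ranks. A minimal sketch follows; the gene names and rank values in the usage example are illustrative, not the study's actual data:

```python
from math import prod

def overall_ranking(ranks_by_gene):
    """RefFinder-style rank aggregation.

    ranks_by_gene: dict mapping gene name -> list of ranks, one per
    algorithm (e.g. comparative ΔCt, geNorm, NormFinder, BestKeeper).
    The overall score for a gene is the geometric mean of its ranks;
    genes are returned sorted from most stable (lowest score) to least.
    """
    score = {gene: prod(ranks) ** (1.0 / len(ranks))
             for gene, ranks in ranks_by_gene.items()}
    return sorted(score, key=score.get)
```

For example, overall_ranking({"Rp49": [1, 2, 1, 1], "CycA": [2, 1, 2, 3], "GAPDH": [3, 3, 3, 2]}) returns ["Rp49", "CycA", "GAPDH"]; a gene ranked first by most algorithms keeps a low geometric mean even if one algorithm disagrees, which is the point of this aggregation.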
Quantification of Gene Expression after Painful Nerve Injury: Validation of Optimal Reference Genes
Bangaru, Madhavi Latha Yadav; Park, Frank; Hudmon, Andy; McCallum, J. Bruce; Hogan, Quinn H.
2011-01-01
Stably expressed housekeeping genes (HKGs) are necessary for standardization of transcript measurement by quantitative real time PCR (qRT-PCR). Peripheral nerve injury disrupts expression of numerous genes in sensory neurons, but the stability of conventional HKGs has not been tested in this context. We examined the stability of candidate HKGs during nerve injury, including the commonly used 18s ribosomal RNA (18s rRNA), β tubulin I (Tubb5) and β tubulin III (Tubb3), actin, glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and hypoxanthine phosphoribosyl transferase 1 (HPRT1), and mitogen activated protein kinase 6 (MAPK6). Total RNA for cDNA synthesis was isolated from dorsal root ganglia of rats at 3, 7 and 21 days following either skin incision alone or spinal nerve ligation, after which the axotomized and adjacent ganglia were analyzed separately. Relative stability of HKGs was determined using statistical algorithms geNorm and NormFinder. Both analyses identified MAPK6 and GAPDH as the two most stable HKGs for normalizing gene expression for qRT-PCR analysis in the context of peripheral nerve injury. Our findings indicate that a priori analysis of HKG expression levels is important for accurate normalization of gene expression in models of nerve injury. PMID:21863315
Taïhi, Ihsène; Nassif, Ali; Berbar, Tsouria; Isaac, Juliane; Berdal, Ariane; Gogly, Bruno; Fournier, Benjamin Philippe
2016-01-01
Gingival stem cells (GSCs) are recently isolated multipotent cells. Their osteogenic capacity has been validated in vitro and may be transferred to human cell therapy for large maxillary bone defects, as they share a neural crest cell origin with jaw bone cells. RT-qPCR is a widely used technique to study gene expression and may help us to follow osteoblast differentiation of GSCs. For accurate results, the choice of reliable housekeeping genes (HKGs) is crucial. The aim of this study was to select the most reliable HKGs for the study of GSCs and of their osteogenic differentiation (dGSCs). The analysis was performed with ten selected HKGs using four algorithms: the ΔCt comparative method, GeNorm, BestKeeper, and NormFinder. This study demonstrated that three HKGs, SDHA, ACTB, and B2M, were the most stable for studying GSCs, whereas TBP, SDHA, and ALAS1 were the most reliable for studying dGSCs. The comparison to stem cells of mesenchymal origin (ASCs) showed that SDHA/HPRT1 were the most appropriate for the study of ASCs. The choice of suitable HKGs for GSCs is important as it gives access to an accurate analysis of osteogenic differentiation. It will allow further study of this interesting stem cell source for future human therapy. PMID:26880978
A Versatile Panel of Reference Gene Assays for the Measurement of Chicken mRNA by Quantitative PCR
Maier, Helena J.; Van Borm, Steven; Young, John R.; Fife, Mark
2016-01-01
Quantitative real-time PCR assays are widely used for the quantification of mRNA within avian experimental samples. Multiple stably expressed reference genes, selected for the lowest variation in representative samples, can be used to control random technical variation. Reference gene assays must be reliable, have high amplification specificity and efficiency, and not produce signals from contaminating DNA. Whilst recent research papers identify specific genes that are stable in particular tissues and experimental treatments, here we describe a panel of ten avian gene primer and probe sets that can be used to identify suitable reference genes in many experimental contexts. The panel was tested with TaqMan and SYBR Green systems in two experimental scenarios: a tissue collection and virus infection of cultured fibroblasts. The geNorm and NormFinder algorithms were able to select appropriate reference gene sets in each case. We show the effects of using the selected genes on the detection of statistically significant differences in expression. The results are compared with those obtained using 28s ribosomal RNA, currently the most widely accepted reference gene in chicken work, identifying circumstances where its use might provide misleading results. Methods for eliminating DNA contamination of RNA reduced, but did not completely remove, detectable DNA. We therefore attached special importance to testing each qPCR assay for absence of signal with DNA template. The assays and analyses developed here provide a useful resource for selecting reference genes for investigations of avian biology. PMID:27537060
Chen, Chun; Xie, Tingna; Ye, Sudan; Jensen, Annette Bruun; Eilenberg, Jørgen
2016-01-01
The selection of suitable reference genes is crucial for accurate quantification of gene expression and can add to our understanding of host-pathogen interactions. To identify suitable reference genes in Pandora neoaphidis, an obligate aphid-pathogenic fungus, the expression of three traditional candidate genes, 18S rRNA (18S), 28S rRNA (28S) and elongation factor 1 alpha-like protein (EF1), was measured by quantitative polymerase chain reaction at different developmental stages (conidia, conidia with germ tubes, short hyphae and elongated hyphae) and under different nutritional conditions. We calculated the expression stability of the candidate reference genes using four algorithms: geNorm, NormFinder, BestKeeper and the ΔCt method. The analysis revealed that the comprehensive ranking of the candidate reference genes from most stable to least stable was 18S (1.189), 28S (1.414) and EF1 (3). 18S was, therefore, the most suitable reference gene for real-time RT-PCR analysis of gene expression under all conditions. These results will support further studies on gene expression in P. neoaphidis. PMID:26887253
Shivhare, Radha; Lata, Charu
2016-01-01
Pearl millet [Pennisetum glaucum (L.) R. Br.], a widely used grain and forage crop, is grown in areas frequented by one or more abiotic stresses; it has superior drought and heat tolerance and is considered a model crop for stress tolerance studies. Selection of suitable reference genes for quantification of target stress-responsive gene expression through quantitative real-time (qRT)-PCR is important for elucidating the molecular mechanisms of improved stress tolerance. For precise normalization of gene expression data in pearl millet, ten candidate reference genes were examined in various developmental tissues as well as under different individual abiotic stresses and their combinations at 1 h (early) and 24 h (late) of stress, using the geNorm, NormFinder and RefFinder algorithms. Our results revealed EF-1α and UBC-E2 as the best reference genes across all samples, the specificity of which was confirmed by assessing the relative expression of a PgAP2-like ERF gene; this suggested that use of these two reference genes is sufficient for accurate transcript normalization under different stress conditions. To our knowledge this is the first report on validation of reference genes under different individual and multiple abiotic stresses in pearl millet. The study can further facilitate the discovery of stress-tolerance genes in this important stress-tolerant crop. PMID:26972345
Wang, Yaolong; Liu, Juan; Wang, Xumin; Liu, Shuang; Wang, Guoliang; Zhou, Junhui; Yuan, Yuan; Chen, Tiying; Jiang, Chao; Zha, Liangping; Huang, Luqi
2016-01-01
MicroRNAs (miRNAs), which play crucial regulatory roles in plant secondary metabolism and responses to the environment, could be developed as promising biomarkers for different varieties and production areas of herbal medicines. However, limited information is available for miRNAs from Lonicera japonica, which is widely used in East Asian countries owing to its various pharmaceutically active secondary metabolites. Selection of suitable reference genes for quantification of target miRNA expression through quantitative real-time (qRT)-PCR is important for elucidating the molecular mechanisms of secondary metabolic regulation in different tissues and varieties of L. japonica. For precise normalization of gene expression data in L. japonica, 16 candidate miRNAs were examined in three tissues, as well as in 21 cultivated varieties collected from 16 production areas, using the GeNorm, NormFinder, and RefFinder algorithms. Our results revealed the combination of u534122 and u3868172 as the best reference genes across all samples. Their specificity was confirmed by detecting the cycle threshold (Ct) value ranges in different varieties of L. japonica collected from diverse production areas, suggesting that the use of these two reference miRNAs is sufficient for accurate transcript normalization across different tissues, varieties, and production areas. To our knowledge, this is the first report on validation of reference miRNAs in honeysuckle (Lonicera spp.). Results from this study can further facilitate the discovery of functional regulatory miRNAs in different varieties of L. japonica. PMID:27507983
Evaluation of Reference Genes for Quantitative Real-Time PCR in Songbirds
Zinzow-Kramer, Wendy M.; Horton, Brent M.; Maney, Donna L.
2014-01-01
Quantitative real-time PCR (qPCR) is becoming a popular tool for the quantification of gene expression in the brain and endocrine tissues of songbirds. Accurate analysis of qPCR data relies on the selection of appropriate reference genes for normalization, yet few papers on songbirds contain evidence of reference gene validation. Here, we evaluated the expression of ten potential reference genes (18S, ACTB, GAPDH, HMBS, HPRT, PPIA, RPL4, RPL32, TFRC, and UBC) in brain, pituitary, ovary, and testis in two species of songbird: zebra finch and white-throated sparrow. We used two algorithms, geNorm and NormFinder, to assess the stability of these reference genes in our samples. We found that the suitability of some of the most popular reference genes for target gene normalization in mammals, such as 18S, depended highly on tissue type. Thus, they are not the best choices for brain and gonad in these songbirds. In contrast, we identified alternative genes, such as HPRT, RPL4 and PPIA, that were highly stable in brain, pituitary, and gonad in these species. Our results suggest that the validation of reference genes in mammals does not necessarily extrapolate to other taxonomic groups. For researchers wishing to identify and evaluate suitable reference genes for qPCR in songbirds, our results should serve as a starting point and should help increase the power and utility of songbird models in behavioral neuroendocrinology. PMID:24780145
Guo, Jinlong; Ling, Hui; Wu, Qibin; Xu, Liping; Que, Youxiong
2014-01-01
Sugarcane (Saccharum spp. hybrids) is a worldwide cash crop for sugar and biofuel in tropical and subtropical regions and suffers serious losses in cane yield and sugar content under salinity and drought stresses. Although real-time quantitative PCR has numerous advantages for quantifying the expression of stress-related genes and elaborating the corresponding molecular mechanisms in sugarcane, the variation arising across the process of gene expression quantification should be normalized and monitored by introducing one or several reference genes. To validate suitable reference genes or gene sets for sugarcane gene expression normalization, 13 candidate reference genes were tested across 12 NaCl- and PEG-treated sugarcane samples from four sugarcane genotypes using four commonly used systematic statistical algorithms: geNorm, BestKeeper, NormFinder and the deltaCt method. The results demonstrated that glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and eukaryotic elongation factor 1-alpha (eEF-1a) were suitable reference genes for gene expression normalization under salinity/drought treatment in sugarcane. Moreover, the expression analyses of SuSK and 6PGDH further validated that a combination of clathrin adaptor complex (CAC) and cullin (CUL) as references would be better for gene expression normalization. These results can facilitate future research on gene expression in sugarcane under salinity and drought stresses. PMID:25391499
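BestKeeper, one of the four algorithms applied in the sugarcane study above, screens candidates by the standard deviation of their raw Cq values and by their correlation with a combined index. The following is a simplified sketch (the real tool uses the geometric mean of expression levels and pairwise correlation analysis; the function name and the per-sample mean-Cq approximation of the index are my assumptions):

```python
import numpy as np

def bestkeeper_stats(cq):
    """Simplified BestKeeper-style screen on raw Cq values.

    cq: 2-D array, rows = samples, columns = candidate genes. Returns
    (sd, r): each gene's Cq standard deviation across samples
    (BestKeeper treats SD > 1 as unacceptable) and its Pearson
    correlation with the index, approximated here as the per-sample
    mean Cq of all candidates.
    """
    sd = cq.std(axis=0, ddof=1)
    index = cq.mean(axis=1)
    r = np.array([np.corrcoef(cq[:, j], index)[0, 1]
                  for j in range(cq.shape[1])])
    return sd, r
```

A gene with a small Cq standard deviation and a high correlation with the index would be ranked as a good BestKeeper candidate; note that, unlike geNorm and NormFinder, this criterion operates directly on Cq values rather than on relative expression.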
Selection of suitable reference genes for expression analysis in human glioma using RT-qPCR.
Grube, Susanne; Göttig, Tatjana; Freitag, Diana; Ewald, Christian; Kalff, Rolf; Walter, Jan
2015-05-01
In human glioma research, quantitative real-time reverse-transcription PCR is a frequently used tool. Considering the broad variation in the expression of candidate reference genes among tumor stages and normal brain, studies using quantitative RT-PCR require strict definition of adequate endogenous controls. This study aimed at testing a panel of nine reference genes [beta-2-microglobulin, cytochrome c-1 (CYC1), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), hydroxymethylbilane synthase, hypoxanthine guanine phosphoribosyl transferase 1, ribosomal protein L13a (RPL13A), succinate dehydrogenase, TATA-box binding protein and 14-3-3 protein zeta] to identify and validate the most suitable reference genes for expression studies in human glioma of different grades (World Health Organization grades II-IV). After analysis of the stability values calculated using the geNorm, NormFinder, and BestKeeper algorithms, GAPDH, RPL13A, and CYC1 can be indicated as reference genes applicable for accurate normalization of gene expression in glioma compared with normal brain and in anaplastic astrocytoma or glioblastoma alone within this experimental setting. Generally, there are no differences in expression levels and variability of candidate genes in glioma tissue compared to normal brain. However, stability analyses revealed only a small number of genes suitable for normalization in each of the tumor subgroups and across these groups. Nevertheless, our data show the importance of validating adequate reference genes prior to every study. PMID:25862007
Chao, Jinquan; Yang, Shuguang; Chen, Yueyi; Tian, Wei-Min
2016-01-01
Latex flow induced by latex exploitation (tapping) is effective in enhancing latex regeneration in the laticifer cells of rubber tree, and thus provides a suitable condition for screening appropriate reference genes for analysis of the expression of latex regeneration-related genes by quantitative real-time PCR (qRT-PCR). In the present study, the expression stability of 23 candidate reference genes was evaluated on the basis of latex flow by using the geNorm and NormFinder algorithms. Ubiquitin-protein ligase 2a (UBC2a) and ubiquitin-protein ligase 2b (UBC2b) were the two most stable genes among the selected candidate references in rubber tree clones with differential duration of latex flow. The two genes were also ranked highly in previous reference gene screens across different tissues and experimental conditions. By contrast, the transcripts of latex regeneration-related genes fluctuated significantly during latex flow. The results suggest that screening during latex flow is an efficient and effective approach for selecting reference genes for qRT-PCR. PMID:27524995
Kianianmomeni, Arash; Hallmann, Armin
2013-12-01
Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) is a sensitive technique for analysis of gene expression under a wide diversity of biological conditions. However, the identification of suitable reference genes is a critical factor for analysis of gene expression data. To determine potential reference genes for normalization of qRT-PCR data in the green alga Volvox carteri, the transcript levels of ten candidate reference genes were measured by qRT-PCR in three experimental sample pools containing different developmental stages, cell types and stress treatments. The expression stability of the candidate reference genes was then calculated using the algorithms geNorm, NormFinder and BestKeeper. The genes for 18S ribosomal RNA (18S) and eukaryotic translation elongation factor 1α2 (eef1) turned out to have the most stable expression levels among the samples both from different developmental stages and different stress treatments. The genes for the ribosomal protein L23 (rpl23) and the TATA-box binding protein (tbpA) showed equivalent transcript levels in the comparison of different cell types, and therefore, can be used as reference genes for cell-type specific gene expression analysis. Our results indicate that more than one reference gene is required for accurate normalization of qRT-PCRs in V. carteri. The reference genes in our study show a much better performance than the housekeeping genes used as a reference in previous studies. PMID:24057254
Chen, Chun; Xie, Tingna; Ye, Sudan; Jensen, Annette Bruun; Eilenberg, Jørgen
2016-01-01
The selection of suitable reference genes is crucial for accurate quantification of gene expression and can add to our understanding of host–pathogen interactions. To identify suitable reference genes in Pandora neoaphidis, an obligate aphid pathogenic fungus, the expression of three traditional candidate genes including 18S rRNA(18S), 28S rRNA(28S) and elongation factor 1 alpha-like protein (EF1), were measured by quantitative polymerase chain reaction at different developmental stages (conidia, conidia with germ tubes, short hyphae and elongated hyphae), and under different nutritional conditions. We calculated the expression stability of candidate reference genes using four algorithms including geNorm, NormFinder, BestKeeper and Delta Ct. The analysis results revealed that the comprehensive ranking of candidate reference genes from the most stable to the least stable was 18S (1.189), 28S (1.414) and EF1 (3). The 18S was, therefore, the most suitable reference gene for real-time RT-PCR analysis of gene expression under all conditions. These results will support further studies on gene expression in P. neoaphidis. PMID:26887253
Ma, Yue-jiao; Sun, Xiao-hong; Xu, Xiao-yan; Zhao, Yong; Pan, Ying-jie; Hwang, Cheng-An; Wu, Vivian C. H.
2015-01-01
Vibrio parahaemolyticus is a significant human pathogen capable of causing foodborne gastroenteritis associated with the consumption of contaminated raw or undercooked seafood. Quantitative RT-PCR (qRT-PCR) is a useful tool for studying gene expression in V. parahaemolyticus to characterize its virulence factors and understand the effect of environmental conditions on its pathogenicity. However, no stable gene had previously been identified in V. parahaemolyticus for use as a reference gene in qRT-PCR. This study evaluated the stability of 6 reference genes (16S rRNA, recA, rpoS, pvsA, pvuA, and gapdh) in 5 V. parahaemolyticus strains (O3:K6-clinical strain-tdh+, ATCC33846-tdh+, ATCC33847-tdh+, ATCC17802-trh+, and F13-environmental strain-tdh+) cultured at 4 different temperatures (15, 25, 37 and 42°C). Stability values were calculated using the GeNorm, NormFinder, BestKeeper, and Delta CT algorithms. The results indicated that recA was the most stably expressed gene in the V. parahaemolyticus strains cultured at different temperatures. Because this study examined multiple V. parahaemolyticus strains and growth temperatures, the findings provide strong evidence that recA can be used as a reference gene for gene expression studies in V. parahaemolyticus. PMID:26659406
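Several of the abstracts above rank candidate reference genes with geNorm. As a minimal didactic sketch of the core idea (not the published implementation), the geNorm stability measure M for a gene is the mean standard deviation of its pairwise log2-ratios with every other candidate across samples; a lower M indicates a more stable gene. The input format (one list of relative expression values per gene, in a shared sample order) is an assumption for illustration:

```python
from math import log2
from statistics import stdev

def genorm_m(expr):
    """geNorm-style stability measure M.

    expr maps each candidate gene to a list of relative expression
    values, one per sample (same sample order for every gene).
    M for a gene is the mean standard deviation of its pairwise
    log2-ratios with every other candidate; lower M = more stable.
    """
    genes = list(expr)
    m = {}
    for j in genes:
        variations = []
        for k in genes:
            if k == j:
                continue
            ratios = [log2(a / b) for a, b in zip(expr[j], expr[k])]
            variations.append(stdev(ratios))
        m[j] = sum(variations) / len(variations)
    return m
```

The published geNorm tool additionally excludes the least stable gene and recomputes M iteratively until the best pair remains; the sketch above shows only the underlying statistic.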
Li, Rumei; Xie, Wen; Wang, Shaoli; Wu, Qingjun; Yang, Nina; Yang, Xin; Pan, Huipeng; Zhou, Xiaomao; Bai, Lianyang; Xu, Baoyun; Zhou, Xuguo; Zhang, Youjun
2013-01-01
Background Accurate evaluation of gene expression requires normalization relative to the expression of reliable reference genes. Expression levels of "classical" reference genes can differ, however, across experimental conditions. Although quantitative real-time PCR (qRT-PCR) has been used extensively to decipher gene function in the sweetpotato whitefly Bemisia tabaci, a world-wide pest in many agricultural systems, the stability of its reference genes has rarely been validated. Results In this study, 15 candidate reference genes from B. tabaci were evaluated using two Excel-based algorithms, geNorm and NormFinder, under a diverse set of biotic and abiotic conditions. At least two reference genes were selected to normalize gene expression in B. tabaci under experimental conditions. Specifically, for biotic conditions including host plant, acquisition of a plant virus, developmental stage, tissue (body region of the adult), and whitefly biotype, ribosomal protein L29 was the most stable reference gene. In contrast, the expression of elongation factor 1 alpha, peptidylprolyl isomerase A, NADH dehydrogenase, succinate dehydrogenase complex subunit A and heat shock protein 40 was consistently stable across various abiotic conditions including photoperiod, temperature, and insecticide susceptibility. Conclusion Our finding is the first step toward establishing a standardized quantitative real-time PCR procedure following the MIQE (Minimum Information for publication of Quantitative real time PCR Experiments) guideline in an agriculturally important insect pest, and provides a solid foundation for future RNA interference-based functional studies in B. tabaci. PMID:23308130
Wang, Peihong; Xiong, Aisheng; Gao, Zhihong; Yu, Xinyi; Li, Man; Hou, Yingjun; Sun, Chao; Qu, Shenchun
2016-01-01
The success of quantitative real-time reverse transcription polymerase chain reaction (RT-qPCR) to quantify gene expression depends on the stability of the reference genes used for data normalization. To date, systematic screening for reference genes in persimmon (Diospyros kaki Thunb) has never been reported. In this study, 13 candidate reference genes were cloned from 'Nantongxiaofangshi' using information available in the transcriptome database. Their expression stability was assessed by geNorm and NormFinder algorithms under abiotic stress and hormone stimulation. Our results showed that the most suitable reference genes across all samples were UBC and GAPDH, and not the commonly used persimmon reference gene ACT. In addition, UBC combined with RPII or TUA were found to be appropriate for the "abiotic stress" group and α-TUB combined with PP2A were found to be appropriate for the "hormone stimuli" group. For further validation, the transcript level of the DkDREB2C homologue under heat stress was studied with the selected genes (CYP, GAPDH, TUA, UBC, α-TUB, and EF1-α). The results suggested that it is necessary to choose appropriate reference genes according to the test materials or experimental conditions. Our study will be useful for future studies on gene expression in persimmon. PMID:27513755
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and supports thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. A comparison of results to the binary case is provided. PMID:10021767
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
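As a toy illustration of the hybrid idea described above (genetic search combined with local search), the sketch below runs a standard GA on a bitstring problem and applies one greedy bit-flip hill-climbing pass to each offspring. All parameters and the OneMax objective (maximize the number of one-bits) are illustrative assumptions, not the geometric model-matching application from the presentation:

```python
import random

def hybrid_ga(fitness, n_bits, pop_size=20, gens=10, seed=0):
    """Toy hybrid (memetic) GA on bitstrings: truncation selection,
    one-point crossover and point mutation, plus one greedy bit-flip
    hill-climbing pass applied to every offspring."""
    rng = random.Random(seed)

    def local_search(ind):
        best = ind[:]
        for i in range(n_bits):          # single greedy improvement pass
            cand = best[:]
            cand[i] ^= 1
            if fitness(cand) > fitness(best):
                best = cand
        return best

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]    # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]    # one-point crossover
            if rng.random() < 0.1:       # occasional point mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(local_search(child))
        pop = children
    return max(pop, key=fitness)
```

On OneMax (fitness = sum of bits) the local-search step alone already repairs every offspring to the optimum, which is exactly the point of the hybrid: the GA explores, the local search exploits.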
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
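A minimal sketch of the linear (centralized counter) barrier class discussed above, using sense reversal so the barrier can be reused across synchronization phases. This is the generic textbook construction expressed with Python threading, not the Flex/32 implementation from the paper:

```python
import threading

class CentralBarrier:
    """Linear counter barrier with sense reversal.

    Each arriving thread decrements a shared counter; the last arrival
    resets the counter, flips the shared sense flag, and wakes the rest.
    Sense reversal lets the same barrier object be reused in a loop."""
    def __init__(self, n):
        self.n = n
        self.count = n
        self.sense = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            local_sense = not self.sense   # the sense this phase will end with
            self.count -= 1
            if self.count == 0:            # last arrival: reset and release
                self.count = self.n
                self.sense = local_sense
                self.cond.notify_all()
            else:
                while self.sense != local_sense:
                    self.cond.wait()
```

The logarithmic-depth tree barriers compared in the paper replace the single shared counter with a tree of small counters to reduce contention; the linear version above is the simplest correct baseline.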
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
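The multiplicative weight updates algorithm (MWUA) central to the abstract above can be sketched in a few lines. The function below is a generic MWUA over n actions with payoffs in [0, 1]; the learning rate `eta` and the payoff sequence are illustrative assumptions, not values from the paper:

```python
def mwua(payoffs, eta=0.5):
    """Multiplicative weight updates over n actions.

    payoffs[t][i] is the payoff (in [0, 1]) of action i at round t.
    Each round, every action's weight is multiplied by
    (1 + eta * payoff), so consistently rewarding actions accumulate
    probability mass. Returns the final mixed strategy
    (weights normalized to sum to 1)."""
    n = len(payoffs[0])
    w = [1.0] * n
    for round_payoffs in payoffs:
        w = [wi * (1.0 + eta * round_payoffs[i]) for i, wi in enumerate(w)]
    total = sum(w)
    return [wi / total for wi in w]
```

The tradeoff the paper highlights is visible here: the update favors cumulative performance, while the multiplicative (rather than winner-take-all) form preserves entropy in the strategy, which is the suggested analogue of maintained diversity.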
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1989-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.
NASA Astrophysics Data System (ADS)
Deprit, André; Palacián, Jesús; Deprit, Etienne
2001-03-01
The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ₀ + εℋ₁ + ... of a small parameter ε, normalization constructs a map which converts the principal part ℋ₀ into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is a best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
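The full SPA is far more elaborate than can be shown here, but the core geometry it implements can be sketched with the standard spherical-astronomy zenith formula. This is a didactic simplification that assumes the solar declination and hour angle are already known; it is not the NREL algorithm itself and has none of its stated accuracy:

```python
from math import sin, cos, acos, radians, degrees

def solar_zenith(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle (degrees) from observer latitude, solar
    declination, and hour angle (all in degrees), via
    cos(z) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(h)."""
    lat, d, h = map(radians, (lat_deg, decl_deg, hour_angle_deg))
    cz = sin(lat) * sin(d) + cos(lat) * cos(d) * cos(h)
    # clamp against floating-point overshoot before acos
    return degrees(acos(max(-1.0, min(1.0, cz))))
```

For example, when the observer's latitude equals the declination and the hour angle is zero (local solar noon), the Sun is at the zenith and the angle is 0°.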
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
NASA Astrophysics Data System (ADS)
Nardi, Jerry
The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766
Developing dataflow algorithms
Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)
1991-01-01
Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, which combines computational parallelism with data dependences between the butterfly shuffles.
Evaluating super resolution algorithms
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun
2011-01-01
This study intends to establish a sound testing and evaluation methodology based upon the human visual characteristics for appreciating the image restoration accuracy; in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.
Design of robust systolic algorithms
Varman, P.J.; Fussell, D.S.
1983-01-01
A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.
High-performance combinatorial algorithms
Pinar, Ali
2003-10-31
Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
General computer algorithm developed for construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme: the points in the plane are connected by straight-line segments to form a set of triangles. Program written in FORTRAN IV.
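The triangulation-based interpolation step can be illustrated with a small helper that traces one contour level through a single triangle; the full algorithm repeats this over every triangle and chains the resulting segments. The function name and layout here are illustrative, not taken from the NASA program:

```python
def contour_segment(tri, values, level):
    """Find where a contour level crosses a triangle of the triangulation.

    tri: three (x, y) vertices; values: the data value at each vertex.
    Returns the crossing points found by linear interpolation along the
    edges whose endpoint values straddle the contour level.
    """
    crossings = []
    for i in range(3):
        j = (i + 1) % 3
        v0, v1 = values[i], values[j]
        if (v0 - level) * (v1 - level) < 0:       # level crosses this edge
            t = (level - v0) / (v1 - v0)          # interpolation parameter in (0, 1)
            x = tri[i][0] + t * (tri[j][0] - tri[i][0])
            y = tri[i][1] + t * (tri[j][1] - tri[i][1])
            crossings.append((x, y))
    return crossings                               # 0 or 2 points in the generic case
```

Joining the two crossing points of each triangle that the level passes through yields the piecewise-linear contour line.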
Polynomial Algorithms for Item Matching.
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Jones, Douglas H.
1992-01-01
Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
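The half-interval search named above is ordinary bisection: repeatedly halve an interval known to bracket a root. A minimal Python sketch (the article's own listing is not reproduced here):

```python
def half_interval_search(f, lo, hi, tol=1e-10):
    """Half-interval (bisection) search for a root of f on [lo, hi].

    Assumes f(lo) and f(hi) have opposite signs, so a root lies between them.
    Each iteration keeps the half-interval that still brackets the root.
    """
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must bracket a root")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:    # sign change in the left half
            hi = mid
        else:                      # sign change in the right half
            lo = mid
    return (lo + hi) / 2.0
```

The bracketing invariant is what makes the method easy to verify mathematically, which is the article's point.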
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Efficient multicomponent fuel algorithm
NASA Astrophysics Data System (ADS)
Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.
2003-03-01
We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
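The pocket algorithm mentioned as one stable perceptron variant can be sketched as follows: run ordinary perceptron updates, but keep "in the pocket" the weight vector that has classified the most training samples correctly so far, so a usable solution is retained even on non-separable data. A minimal sketch under that description, not code from the paper:

```python
def pocket_train(samples, labels, epochs=100):
    """Pocket variant of perceptron learning for a 2-category problem.

    samples: list of feature tuples; labels: +1 or -1 for each sample.
    Returns the pocketed weight vector (last entry is the bias weight).
    """
    dim = len(samples[0])
    w = [0.0] * (dim + 1)                       # weights plus bias term
    pocket, best = w[:], -1
    data = [(list(x) + [1.0], y) for x, y in zip(samples, labels)]

    def n_correct(wv):
        return sum(1 for xa, y in data
                   if y * sum(wi * xi for wi, xi in zip(wv, xa)) > 0)

    for _ in range(epochs):
        for xa, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, xa)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, xa)]   # perceptron update
                if n_correct(w) > best:                       # better? pocket it
                    best, pocket = n_correct(w), w[:]
    return pocket
```

On linearly separable data this converges to a separating hyperplane; on non-separable data the pocket still holds the best weights seen.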
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
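The strict-priority policy described above, where a goal can never be pre-empted by a lower-priority goal, can be sketched as a greedy pass over goals in priority order under a shared resource budget. This is a simplified illustration of the selection rule, not the actual AVA v2 / VML implementation, and the goal names and single scalar resource are assumptions:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority goal selection under a shared resource budget.

    goals: list of (priority, resource_demand, name) tuples; higher priority
    wins. A goal is committed whenever enough resource remains, so a
    lower-priority goal can fill leftover capacity but never displace a
    higher-priority one.
    """
    selected, remaining = [], capacity
    for priority, demand, name in sorted(goals, reverse=True):
        if demand <= remaining:          # feasible: commit this goal
            selected.append(name)
            remaining -= demand
    return selected
```

Because the pass is incremental, adding or removing a goal only requires re-running the cheap greedy sweep, matching the "just-in-time" re-planning idea.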
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
NASA Astrophysics Data System (ADS)
Owen, Mark W.; Stubberud, Allen R.
2003-12-01
Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Fuentes, Alejandra; Ortiz, Javier; Saavedra, Nicolás; Salazar, Luis A; Meneses, Claudio; Arriagada, Cesar
2016-04-01
The gene expression stability of candidate reference genes in the roots and leaves of Solanum lycopersicum inoculated with arbuscular mycorrhizal fungi was investigated. Eight candidate reference genes including elongation factor 1 α (EF1), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphoglycerate kinase (PGK), protein phosphatase 2A (PP2Acs), ribosomal protein L2 (RPL2), β-tubulin (TUB), ubiquitin (UBI) and actin (ACT) were selected, and their expression stability was assessed to determine the most stable internal reference for quantitative PCR normalization in S. lycopersicum inoculated with the arbuscular mycorrhizal fungus Rhizophagus irregularis. The stability of each gene was analysed in leaves and roots together and separately using the geNorm and NormFinder algorithms. Differences were detected between leaves and roots, varying among the best-ranked genes depending on the algorithm used and the tissue analysed. PGK, TUB and EF1 genes showed higher stability in roots, while EF1 and UBI had higher stability in leaves. Statistical algorithms indicated that the GAPDH gene was the least stable under the experimental conditions assayed. Then, we analysed the expression levels of the LePT4 gene, a phosphate transporter whose expression is induced by fungal colonization in host plant roots. No differences were observed when the most stable genes were used as reference genes. However, when GAPDH was used as the reference gene, we observed an overestimation of LePT4 expression. In summary, our results revealed that candidate reference genes present variable stability in S. lycopersicum arbuscular mycorrhizal symbiosis depending on the algorithm and tissue analysed. Thus, reference gene selection is an important issue for obtaining reliable results in gene expression quantification. PMID:26874621
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
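The baseline these faster algorithms accelerate is the dense kriging system itself. A minimal ordinary-kriging sketch with a direct solve is shown below; the exponential variogram is an illustrative assumption, not one taken from the text:

```python
import numpy as np

def ordinary_kriging(points, values, query,
                     variogram=lambda h: 1.0 - np.exp(-h)):
    """Ordinary kriging estimate at a single query location.

    Solves the dense kriging system directly, with a Lagrange-multiplier
    row enforcing that the weights sum to one (the unbiasedness condition).
    This O(n^3) solve is the cost that sparse/iterative variants reduce.
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    query = np.asarray(query, dtype=float)
    n = len(points)
    # pairwise distances between data points, fed through the variogram
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(points - query, axis=1))
    w = np.linalg.solve(A, b)[:n]        # kriging weights
    return float(w @ values)
```

Note that a query placed exactly on a data point reproduces that point's value, the exact-interpolation property of kriging.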
Fighting Censorship with Algorithms
NASA Astrophysics Data System (ADS)
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Trial encoding algorithms ensemble.
Cheng, Lipin Bill; Yeh, Ren Jye
2013-01-01
This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization. One simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without the use of a tree structure and is improvable with byte concatenation. PMID:27057475
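Shannon-Fano coding without an explicit tree works on sorted frequency lists: recursively split each group where the cumulative count first reaches half the group total, appending '0' to one half and '1' to the other. A generic sketch of that classic scheme (not the paper's improved variant):

```python
def shannon_fano(freqs):
    """Shannon-Fano code built by recursive frequency splits, no explicit tree.

    freqs: dict mapping symbol -> count. Symbols are sorted by decreasing
    frequency; each group is split where the cumulative count first reaches
    half the group total, and the two halves get '0' and '1' appended.
    """
    codes = {s: "" for s in freqs}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(freqs[s] for s in group)
        acc, cut = 0, 1
        for i, s in enumerate(group[:-1]):
            acc += freqs[s]
            if acc >= total / 2:        # first point at or past the halfway mark
                cut = i + 1
                break
        for s in group[:cut]:
            codes[s] += "0"
        for s in group[cut:]:
            codes[s] += "1"
        split(group[:cut])
        split(group[cut:])

    split(sorted(freqs, key=freqs.get, reverse=True))
    return codes
```

The recursion guarantees a prefix-free code, though Huffman coding can assign strictly shorter average lengths on some distributions.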
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
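Wavelet-based fusion of the kind described above decomposes both inputs, merges coefficients, and inverts the transform. The sketch below uses a one-level Haar transform on 1-D signals for brevity; the average-the-lows / keep-the-stronger-details merge rule is a common choice and an assumption here, since the report does not state its coefficient rule:

```python
import numpy as np

def haar_fuse(a, b):
    """One-level Haar wavelet fusion of two equal-length 1-D signals.

    Approximation coefficients are averaged (preserving overall intensity);
    detail coefficients are chosen by maximum magnitude (preserving edges).
    A 1-D sketch of the 2-D image-fusion scheme.
    """
    def dwt(x):
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return approx, detail

    def idwt(approx, detail):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + detail) / np.sqrt(2.0)
        out[1::2] = (approx - detail) / np.sqrt(2.0)
        return out

    (ca, da), (cb, db) = dwt(a), dwt(b)
    fused_approx = (ca + cb) / 2.0                            # average the lows
    fused_detail = np.where(np.abs(da) >= np.abs(db), da, db)  # keep stronger detail
    return idwt(fused_approx, fused_detail)
```

For images the same transform is applied along rows and columns, producing the spectral/spatial preservation the report measures.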
Ozone Uncertainties Study Algorithm (OUSA)
NASA Technical Reports Server (NTRS)
Bahethi, O. P.
1982-01-01
An algorithm for carrying out sensitivity, uncertainty, and overall imprecision studies on a set of input parameters for a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, as well as variations of the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work addressed development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Variable Selection using MM Algorithms
Hunter, David R.; Li, Runze
2009-01-01
Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
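The perturb-then-MM idea described above can be illustrated on a toy problem: minimizing the nondifferentiable objective Σ|x_i − θ| (whose minimizer is a sample median) by replacing |t| with the differentiable √(t² + ε) and majorizing that at each iterate with a quadratic. This is the minimization analogue (majorize-minimize) of the article's minorize-maximize scheme, and a toy stand-in for its penalized-likelihood setting:

```python
import math

def mm_median(xs, eps=1e-8, iters=200):
    """MM illustration: minimize sum_i |x_i - theta| via a perturbed penalty.

    |t| is replaced by sqrt(t^2 + eps); the quadratic majorizer at the
    current iterate yields a weighted-mean update, and the iterates
    approach the sample median.
    """
    theta = sum(xs) / len(xs)                     # start from the mean
    for _ in range(iters):
        # majorizer weights: large for points near the current iterate
        w = [1.0 / math.sqrt((x - theta) ** 2 + eps) for x in xs]
        # closed-form minimizer of the quadratic surrogate
        theta = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return theta
```

Each update provably does not increase the perturbed objective, which is the descent property MM algorithms inherit from their EM relatives.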
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a stochastic search and optimization method based on natural selection and genetic mechanisms in living organisms. In recent years, because of its potential for solving complicated problems and its successful application in industrial engineering, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. Experimental simulation results show that this algorithm obtains better solutions in less time with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
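The speed-proportional-to-height behavior described above can be sketched with a descent law that holds time-to-collision (height divided by descent rate) constant: commanding v = h/τ makes the vehicle slow smoothly as it nears the ground. All parameter values here are illustrative assumptions, not numbers from the work:

```python
def simulate_landing(h0, tau=2.0, dt=0.01, h_touchdown=0.05):
    """Simulate a descent that keeps time-to-collision (h / v) constant.

    h0: initial height; tau: commanded time-to-collision; dt: time step;
    h_touchdown: height at which the landing is considered complete.
    Returns the final height and the elapsed time.
    """
    h, t = h0, 0.0
    while h > h_touchdown:
        v = h / tau          # descent speed proportional to height
        h -= v * dt
        t += dt
    return h, t
```

Because v shrinks with h, the height decays exponentially and the touchdown speed is small (h_touchdown / tau here), which is the "smooth landing" property.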
Berry, K.; Dayton, S.
1996-10-28
Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers that was becoming dated in its time to market requirements and as such was in need of performance improvements. To compound problems with their existing system, the assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for Data Systems Research and Development (DSRD) to analyze the current Citibank credit card offering system and suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing, tightly coupled, high performance parallel processing, higher order computer languages such as `C`, fuzzy matching algorithms applied to very large data files, relational database management system, and advanced programming techniques.
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
Cascade Error Projection Learning Algorithm
NASA Technical Reports Server (NTRS)
Duong, T. A.; Stubberud, A. R.; Daud, T.
1995-01-01
A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.
The Chopthin Algorithm for Resampling
NASA Astrophysics Data System (ADS)
Gandy, Axel; Lau, F. Din-Houn
2016-08-01
Resampling is a standard step in particle filters and more generally sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
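The chop-and-thin idea can be sketched as follows. This is an illustration of the principle only, not the published chopthin algorithm: the band thresholds and the split/keep rules below are assumptions chosen so that the ratio of the largest to the smallest surviving weight stays within a stated bound.

```python
import random

def chop_and_thin(particles, weights, bound=4.0, rng=None):
    """Illustrative chop-and-thin resampling sketch (not the published
    chopthin algorithm): heavy particles are split ("chopped") and light
    particles are randomly kept or discarded ("thinned") so the max/min
    surviving weight ratio is at most `bound` (for bound >= 2)."""
    rng = rng or random.Random(0)
    mean_w = sum(weights) / len(weights)
    hi = mean_w * bound ** 0.5   # chop anything heavier than this
    lo = mean_w / bound ** 0.5   # thin anything lighter than this
    out_p, out_w = [], []
    for p, w in zip(particles, weights):
        if w > hi:                      # chop: split into equal fragments
            k = int(-(-w // hi))        # ceil(w / hi)
            out_p.extend([p] * k)
            out_w.extend([w / k] * k)
        elif w < lo:                    # thin: keep with prob w/lo, weight lo
            if rng.random() < w / lo:   # unbiased: E[new weight] == w
                out_p.append(p)
                out_w.append(lo)
        else:                           # already within the band
            out_p.append(p)
            out_w.append(w)
    return out_p, out_w
```

Unlike multinomial or systematic resampling, the output weights are deliberately unequal; only their spread is controlled.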
CORDIC algorithms in four dimensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc; Hsiao, Shen-Fu
1990-11-01
CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm, (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus, in spaces of dimension 5 or more, there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
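The classic two-dimensional circular CORDIC that such extensions build on can be sketched in a few lines. This is a floating-point illustration of rotation mode; hardware implementations use fixed-point shifts and adds, and the iteration count below is an arbitrary choice.

```python
import math

def cordic_rotate(angle, n=40):
    """Classic circular CORDIC in rotation mode: returns (cos, sin) of
    `angle` (|angle| < pi/2) using only add/shift-style updates plus a
    final gain correction K."""
    x, y, z = 1.0, 0.0, angle
    K = 1.0
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)        # precomputed angle table
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # accumulate gain
    return x * K, y * K
```

Each micro-rotation multiplies the vector length by sqrt(1 + 2^-2i); multiplying by K at the end removes that accumulated gain.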
Cubit Adaptive Meshing Algorithm Library
2004-09-01
CAMAL (Cubit Adaptive Meshing Algorithm Library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
An Artificial Immune Univariate Marginal Distribution Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping
Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principles of the general artificial immune algorithm. Experimental results on a deceptive function of order 3 show that the proposed hybrid algorithm can get more building blocks (BBs) than the UMDA.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
Zhang, Ming-Fang
2016-01-01
Normalization to reference genes is the most common method to avoid bias in real-time quantitative PCR (qPCR), which has been widely used for quantification of gene expression. Despite several studies on gene expression, Lilium, and particularly L. regale, has not been fully investigated regarding the evaluation of reference genes suitable for normalization. In this study, nine putative reference genes, namely 18S rRNA, ACT, BHLH, CLA, CYP, EF1, GAPDH, SAND and TIP41, were analyzed for accurate quantitative PCR normalization at different developmental stages and under different stress conditions, including biotic (Botrytis elliptica), drought, salinity, cold and heat stress. All these genes showed a wide variation in their Cq (quantification Cycle) values, and their stabilities were calculated by geNorm, NormFinder and BestKeeper. In a combination of the results from the three algorithms, BHLH was superior to the other candidates when all the experimental treatments were analyzed together; CLA and EF1 were also recommended by two of the three algorithms. As for specific conditions, EF1 under various developmental stages, SAND under biotic stress, CYP/GAPDH under drought stress, and TIP41 under salinity stress were generally considered suitable. All the algorithms agreed on the stability of SAND and GAPDH under cold stress, while only CYP was selected under heat stress by all of them. Additionally, the selection of optimal reference genes under biotic stress was further verified by analyzing the expression level of LrLOX in leaves inoculated with B. elliptica. Our study would be beneficial for future studies on gene expression and molecular breeding of Lilium. PMID:27019788
The Dropout Learning Algorithm
Baldi, Pierre; Sadowski, Peter
2014-01-01
Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality of normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
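The second fundamental relation above, that the normalized weighted geometric mean (NWGM) of logistic outputs equals the logistic of the weighted mean, can be checked numerically. The inputs and weights below are arbitrary illustrative values (weights summing to 1):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def nwgm(xs, ps):
    """Normalized weighted geometric mean of the logistic outputs
    sigmoid(x_i), with dropout-style weights p_i summing to 1."""
    g = math.prod(sigmoid(x) ** p for x, p in zip(xs, ps))
    g_bar = math.prod((1.0 - sigmoid(x)) ** p for x, p in zip(xs, ps))
    return g / (g + g_bar)
```

The identity follows because 1 - sigmoid(x) = sigmoid(-x), so the ratio g / g_bar collapses to exp(sum(p_i * x_i)).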
Wavelet periodicity detection algorithms
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Pfander, Goetz E.
1998-10-01
This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Cluster algorithms and computational complexity
NASA Astrophysics Data System (ADS)
Li, Xuenan
Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied by using Computational Complexity Theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero and therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
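The relative-neighborhood graph has a simple defining rule: an edge (p, q) survives only if no third node is closer to both endpoints than the endpoints are to each other. A brute-force O(n^3) construction sketch follows (the published routing algorithm's actual RNG construction may differ):

```python
from itertools import combinations
import math

def relative_neighborhood_graph(points):
    """Return RNG edges as index pairs: edge (i, j) is kept iff no third
    point k satisfies max(d(i, k), d(j, k)) < d(i, j)."""
    def d(a, b):
        return math.dist(a, b)
    n = len(points)
    edges = []
    for i, j in combinations(range(n), 2):
        pq = d(points[i], points[j])
        if not any(max(d(points[i], points[k]), d(points[j], points[k])) < pq
                   for k in range(n) if k not in (i, j)):
            edges.append((i, j))
    return edges
```

For three collinear, evenly spaced nodes, the long end-to-end edge is pruned because the middle node dominates it, which is exactly the kind of spatial redundancy the broadcast algorithm exploits.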
Linearization algorithms for line transfer
Scott, H.A.
1990-11-06
Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions.
Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei
2015-01-01
Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were finally selected and validated under different experimental treatments, at different seed development stages and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was executed by the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software in all experimental samples. To verify the validation of RGs selected by the two programs, the expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was executed using different RGs in RT-qPCR experiments for normalization. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used. However, the differences were detected using the most unstable reference genes. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898
Fibonacci Numbers and Computer Algorithms.
ERIC Educational Resources Information Center
Atkins, John; Geist, Robert
1987-01-01
The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
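One classical link between Fibonacci numbers and algorithm analysis, in the spirit of the applications above, is Lamé's theorem: consecutive Fibonacci numbers are the worst-case inputs for Euclid's gcd algorithm. A minimal sketch:

```python
def fib(n):
    """Iterative Fibonacci with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def gcd_steps(a, b):
    """Euclid's algorithm, counting division steps."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps
```

Feeding gcd_steps the pair (F(n+1), F(n)) forces the quotient to be 1 at every step, so the recursion walks down the whole sequence, the slowest possible descent.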
An onboard star identification algorithm
NASA Astrophysics Data System (ADS)
Ha, Kong; Femiano, Michael
The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.
Scheduling Jobs with Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ferrolho, António; Crisóstomo, Manuel
Most scheduling problems are NP-hard: the time required to solve a problem optimally increases exponentially with its size. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully used to solve scheduling problems, as shown by a growing number of papers. GA are known as one of the most efficient algorithms for solving scheduling problems. However, when a GA is applied to scheduling problems, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
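A toy version of the approach, a permutation GA with order crossover (OX), swap mutation, and elitism applied to single-machine total weighted completion time, can be sketched as follows. The operators and parameters here are illustrative assumptions, not the HybFlexGA implementation:

```python
import random

def ga_schedule(proc_times, weights, pop_size=30, gens=150, rng=None):
    """Toy permutation GA for single-machine total weighted completion time.
    The identity permutation is seeded into the population and elitism keeps
    the best individuals, so the result is never worse than identity order."""
    rng = rng or random.Random(0)
    n = len(proc_times)

    def cost(perm):
        t = total = 0
        for j in perm:
            t += proc_times[j]
            total += weights[j] * t
        return total

    def ox(p1, p2):
        """Order crossover: copy a slice of p1, fill the rest from p2."""
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        fill = [g for g in p2 if g not in child[a:b]]
        for i in range(n):
            if child[i] is None:
                child[i] = fill.pop(0)
        return child

    pop = [list(range(n))] + [rng.sample(range(n), n)
                              for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=cost)
        next_pop = pop[:2]                      # elitism
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            child = ox(p1, p2)
            if rng.random() < 0.2:              # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=cost)
    return best, cost(best)
```

The OX operator is one of the permutation-preserving crossovers the abstract alludes to; swapping in different crossover/mutation operators is exactly the kind of comparison the paper's tool automates.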
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting a set of data satisfactorily.
Algorithmic complexity of a protein
NASA Astrophysics Data System (ADS)
Dewey, T. Gregory
1996-07-01
The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.
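Algorithmic (Kolmogorov) complexity is uncomputable in general; a common computable proxy, not used in the paper but useful for intuition about "information content of a sequence," is compressed size:

```python
import random
import zlib

def complexity_proxy(seq: str) -> int:
    """Compressed size in bytes: a computable upper-bound proxy for the
    (uncomputable) Kolmogorov complexity of a sequence."""
    return len(zlib.compress(seq.encode(), 9))

# A highly regular sequence needs far fewer bits to describe than an
# irregular one of the same length (fixed seed for reproducibility).
regular = "AG" * 500
rng = random.Random(42)
irregular = "".join(rng.choice("ACDEFGHIKLMNPQRSTVWY") for _ in range(1000))
```

The same intuition underlies the paper's argument: a structure whose description can be generated from little information has low algorithmic complexity.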
An onboard star identification algorithm
NASA Technical Reports Server (NTRS)
Ha, Kong; Femiano, Michael
1993-01-01
The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.
Cascade Error Projection: A New Learning Algorithm
NASA Technical Reports Server (NTRS)
Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.
1995-01-01
A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
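The predictive direction that the retrodictive algorithm complements is the standard Gillespie stochastic simulation algorithm. A minimal predictive sketch for a single pure-decay reaction (the reaction and rate here are illustrative, not from the paper):

```python
import random

def gillespie_decay(n0, k, t_max, rng):
    """Minimal predictive Gillespie SSA for the pure-decay reaction
    A -> 0 with propensity k * n: draw an exponential waiting time,
    fire the reaction, repeat until extinction or t_max."""
    t, n = 0.0, n0
    path = [(0.0, n0)]
    while n > 0:
        t += rng.expovariate(k * n)   # time to the next reaction event
        if t > t_max:
            break
        n -= 1
        path.append((t, n))
    return path
```

Retrodiction runs this logic in reverse: given the final state, it samples plausible earlier states under the same master equation.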
Fully relativistic lattice Boltzmann algorithm
Romatschke, P.; Mendoza, M.; Succi, S.
2011-09-15
Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.
NASA Astrophysics Data System (ADS)
El-Guibaly, Fayez; Sabaa, A.
1996-10-01
In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
Localization algorithm for acoustic emission
NASA Astrophysics Data System (ADS)
Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.
2010-01-01
In this paper, an iterative algorithm for localization of an acoustic emission (AE) source is presented. The main advantage of the system is that it is independent of the researcher's 'ability' in determining the signal level used to trigger acquisition. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
Multithreaded Algorithms for Graph Coloring
Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex
2012-10-21
Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access pattern, and high data access to computation ratio are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
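The serial greedy algorithm that both parallel variants build on assigns each vertex the smallest color not already used by a colored neighbor. A first-fit sketch:

```python
def greedy_coloring(adj):
    """First-fit greedy coloring over an adjacency dict: each vertex gets
    the smallest non-negative color unused by its already-colored neighbors."""
    color = {}
    for v in adj:                      # visit vertices in dict order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

The speculative parallel variant colors vertices concurrently with this rule, then iterates to repair the (rare) conflicts where two neighbors were colored simultaneously.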
Myers, Timothy
2006-09-01
The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principle that a starting configuration is randomly selected from within the parameter space and the algorithm tests other configurations with the goal of finding the globally optimal configuration.
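The conventional SA accept/reject loop described above can be sketched in a few lines (names, defaults, and the cooling schedule are illustrative; this is the baseline algorithm, not the RBSA variant):

```python
import math, random

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimise `objective` by the standard SA rule: always accept an
    improving move, accept a worsening move with probability
    exp(-delta / T), and lower the temperature T each step."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                   # move accepted
            if fx < fbest:
                best, fbest = x, fx         # track best configuration seen
        t *= cooling                        # geometric cooling schedule
    return best, fbest
```

For example, minimising (x - 3)^2 with Gaussian neighbor moves converges to the neighborhood of x = 3; RBSA layers a recursive-branching parallel search on top of this same accept/reject core.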
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
Chang, C.Y.
1986-01-01
New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using a systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues of composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
NASA Astrophysics Data System (ADS)
Qiu, Reng; Sun, Boguang; Fang, Shasha; Sun, Li; Liu, Xiao
2013-03-01
Quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR) is widely used in studies of gene expression. In most of these studies, housekeeping genes are used as internal references without validation. To identify appropriate reference genes for qRT-PCR in Pacific abalone Haliotis discus hannai, we examined the transcription stability of six housekeeping genes in abalone tissues in the presence and absence of bacterial infection. For this purpose, abalone were infected with the bacterial pathogen Vibrio anguillarum for 12 h and 48 h. The mRNA levels of the housekeeping genes in five tissues (digestive glands, foot muscle, gill, hemocyte, and mantle) were determined by qRT-PCR. The PCR data was subsequently analyzed with the geNorm and NormFinder algorithms. The results show that in the absence of bacterial infection, elongation factor-1-alpha and beta-actin were the most stably expressed genes in all tissues, and thus are suitable as cross-tissue type normalization factors. However, we did not identify any universal reference genes post infection because the most stable genes varied between tissue types. Furthermore, for most tissues, the optimal reference genes identified by both algorithms at 12 h and 48 h post-infection differed. These results indicate that bacterial infection induced significant changes in the expression of abalone housekeeping genes in a manner that is dependent on tissue type and duration of infection. As a result, different normalization factors must be used for different tissues at different infection points.
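For readers unfamiliar with how geNorm ranks candidate reference genes, its stability value M is the mean pairwise variation of one gene against all the others, where pairwise variation is the standard deviation of the log2 expression ratios across samples. A generic sketch of that computation (illustrative only, not code from this study):

```python
import math

def pairwise_variation(a, b):
    """Standard deviation of log2 expression ratios of two genes measured
    across the same samples (geNorm's pairwise variation V)."""
    ratios = [math.log2(x / y) for x, y in zip(a, b)]
    mean = sum(ratios) / len(ratios)
    return math.sqrt(sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1))

def genorm_m(expression):
    """geNorm stability value M per gene: the average pairwise variation
    with every other candidate. Lower M means more stable expression."""
    genes = list(expression)
    return {g: sum(pairwise_variation(expression[g], expression[h])
                   for h in genes if h != g) / (len(genes) - 1)
            for g in genes}
```

Two genes whose expression stays proportional across samples have pairwise variation 0 and therefore low M; a gene whose ratio to the others drifts gets a high M and is ranked unstable.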
Chapman, Joanne R.; Helin, Anu S.; Wille, Michelle; Atterby, Clara; Järhult, Josef D.; Fridlund, Jimmy S.; Waldenström, Jonas
2016-01-01
Determining which reference genes have the highest stability, and are therefore appropriate for normalising data, is a crucial step in the design of real-time quantitative PCR (qPCR) gene expression studies. This is particularly warranted in non-model and ecologically important species for which appropriate reference genes are lacking, such as the mallard—a key reservoir of many diseases with relevance for human and livestock health. Previous studies assessing gene expression changes as a consequence of infection in mallards have nearly universally used β-actin and/or GAPDH as reference genes without confirming their suitability as normalisers. The use of reference genes at random, without regard for stability of expression across treatment groups, can result in erroneous interpretation of data. Here, eleven putative reference genes for use in gene expression studies of the mallard were evaluated, across six different tissues, using a low pathogenic avian influenza A virus infection model. Tissue type influenced the selection of reference genes, whereby different genes were stable in blood, spleen, lung, gastrointestinal tract and colon. β-actin and GAPDH generally displayed low stability and are therefore inappropriate reference genes in many cases. The use of different algorithms (GeNorm and NormFinder) affected stability rankings, but for both algorithms it was possible to find a combination of two stable reference genes with which to normalise qPCR data in mallards. These results highlight the importance of validating the choice of normalising reference genes before conducting gene expression studies in ducks. The fact that nearly all previous studies of the influence of pathogen infection on mallard gene expression have used a single, non-validated reference gene is problematic. The toolkit of putative reference genes provided here offers a solid foundation for future studies of gene expression in mallards and other waterfowl. PMID:26886224
Zhu, Wuzheng; Lin, Yaqiu; Liao, Honghai; Wang, Yong
2015-01-01
The identification of suitable reference genes is critical for obtaining reliable results from gene expression studies using quantitative real-time PCR (qPCR) because the expression of reference genes may vary considerably under different experimental conditions. In most cases, however, commonly used reference genes are employed in data normalization without proper validation, which may lead to incorrect data interpretation. Here, we aim to select a set of optimal reference genes for the accurate normalization of gene expression associated with intramuscular fat (IMF) deposition during development. In the present study, eight reference genes (PPIB, HMBS, RPLP0, B2M, YWHAZ, 18S, GAPDH and ACTB) were evaluated by three different algorithms (geNorm, NormFinder and BestKeeper) in two types of muscle tissues (longissimus dorsi muscle and biceps femoris muscle) across different developmental stages. All three algorithms gave similar results. PPIB and HMBS were identified as the most stable reference genes, while the commonly used reference genes 18S and GAPDH were the most variably expressed, with expression varying dramatically across different developmental stages. Furthermore, to reveal the crucial role of appropriate reference genes in obtaining a reliable result, analysis of PPARG expression was performed by normalization to the most and the least stable reference genes. The relative expression levels of PPARG normalized to the most stable reference genes greatly differed from those normalized to the least stable one. Therefore, evaluation of reference genes must be performed for a given experimental condition before the reference genes are used. PPIB and HMBS are the optimal reference genes for analysis of gene expression associated with IMF deposition in skeletal muscle during development. PMID:25794179
Rueda-Martínez, Carmen; Lamas, Oscar; Mataró, María José; Robledo-Carmona, Juan; Sánchez-Espín, Gemma; Jiménez-Navarro, Manuel; Such-Martínez, Miguel; Fernández, Borja
2014-01-01
Dilatation of the ascending aorta (AAD) is a prevalent aortopathy that occurs frequently associated with bicuspid aortic valve (BAV), the most common human congenital cardiac malformation. The molecular mechanisms leading to AAD associated with BAV are still poorly understood. The search for differentially expressed genes in diseased tissue by quantitative real-time PCR (qPCR) is an invaluable tool to fill this gap. However, studies dedicated to identifying reference genes necessary for normalization of mRNA expression in aortic tissue are scarce. In this report, we evaluate the qPCR expression of six candidate reference genes in tissue from the ascending aorta of 52 patients with a variety of clinical and demographic characteristics, normal and dilated aortas, and different morphologies of the aortic valve (normal aorta and normal valve n = 30; dilated aorta and normal valve n = 10; normal aorta and BAV n = 4; dilated aorta and BAV n = 8). The expression stability of the candidate reference genes was determined with three statistical algorithms, GeNorm, NormFinder and Bestkeeper. The expression analyses showed that the most stable genes for the three algorithms employed were CDKN1β, POLR2A and CASC3, independently of the structure of the aorta and the valve morphology. In conclusion, we propose the use of these three genes as reference genes for mRNA expression analysis in human ascending aorta. However, we suggest searching for specific reference genes when conducting qPCR experiments with a new cohort of samples. PMID:24841551
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion.
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).
GPU Accelerated Event Detection Algorithm
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event-detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce the computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
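The window-to-univariate conversion in step (a) can be sketched as follows (an illustrative reading of the idea with hypothetical names; the actual GAEDA code uses perturbation theory and incremental/tensor decompositions for scalability). Each window of the multi-dimensional sequence is summarised by its dominant right singular vector, and the change score between successive windows is how far those vectors rotate:

```python
import numpy as np

def window_change_series(data, w):
    """Slide a length-w, non-overlapping window over a (time x features)
    array; for each pair of successive windows, score the change as
    1 - |cos angle| between their dominant right singular vectors."""
    scores, prev = [], None
    for start in range(0, len(data) - w + 1, w):
        window = data[start:start + w]
        # dominant right singular vector summarises the window's structure
        _, _, vt = np.linalg.svd(window - window.mean(axis=0),
                                 full_matrices=False)
        v = vt[0]
        if prev is not None:
            scores.append(1.0 - abs(float(np.dot(prev, v))))
        prev = v
    return scores
```

Identical successive windows score near 0; a window whose correlation structure rotates 90 degrees scores near 1, and any univariate anomaly detector can then be applied to the score series.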
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
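The straight-line estimator amounts to a least-squares calibration of input power against the digital AGC reading and temperature. A hedged sketch (the function names and the exact linear model form are our assumptions, not taken from the SCAN Testbed software):

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Least-squares fit of the straight-line model
        power ~ a * agc + b * temp + c
    from characterization data (the pre-launch calibration step)."""
    A = np.column_stack([agc, temp, np.ones(len(agc))])
    coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    """Apply the fitted model to a new AGC reading and temperature."""
    a, b, c = coeffs
    return a * agc + b * temp + c
```

Because the true AGC response is nonlinear, a single fit like this is only valid over a narrower section of the input power range, which is exactly the limitation the adaptive-filter and neural-network estimators address.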
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
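Two of the intuitively designed combiners are easy to sketch. The functions below (names our own) illustrate majority voting over the algorithms' greedy action choices and Boltzmann multiplication over their action-probability vectors:

```python
from collections import Counter

def majority_vote(action_choices):
    """Majority voting (MV): each RL algorithm proposes its greedy action;
    the ensemble picks the most-proposed action (ties broken by the first
    algorithm to propose it)."""
    counts = Counter(action_choices)
    best = max(counts.values())
    for a in action_choices:            # preserve proposal order on ties
        if counts[a] == best:
            return a

def boltzmann_multiplication(policies):
    """Boltzmann multiplication (BM): multiply the algorithms' action
    probabilities elementwise and renormalise into an ensemble policy."""
    prod = [1.0] * len(policies[0])
    for p in policies:
        prod = [x * y for x, y in zip(prod, p)]
    z = sum(prod)
    return [x / z for x in prod]
```

BM sharpens agreement: actions that every algorithm assigns high probability dominate the product, which matches the paper's finding that BM and MV were the strongest combiners.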
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower-priority items that are in conflict.
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. The third uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the established performance benchmark.
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
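The MWU algorithm mentioned above has a simple textbook form; a generic sketch (not the paper's genetics-specific formulation, and with illustrative parameter names):

```python
def mwu(payoffs, eta=0.1, rounds=100):
    """Multiplicative weights update over n options: each round, multiply
    every option's weight by (1 + eta * payoff) and renormalise. Weights
    concentrate on options with the highest cumulative payoff, while the
    normalisation keeps a full probability distribution (nonzero entropy)."""
    n = len(payoffs[0])
    w = [1.0 / n] * n
    for t in range(rounds):
        gains = payoffs[t % len(payoffs)]          # this round's payoffs
        w = [wi * (1.0 + eta * g) for wi, g in zip(w, gains)]
        z = sum(w)
        w = [wi / z for wi in w]
    return w
```

Under the paper's interpretation, the options play the role of alleles and the payoffs their fitnesses; the update trades off cumulative fitness against the entropy of the distribution, so variation is not eliminated immediately.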
Projection Classification Based Iterative Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Ruiqiu; Li, Chen; Gao, Wenhua
2015-05-01
Iterative algorithms perform well in the 3D image reconstruction area because they do not require complete projection data. This makes them applicable to the inspection of BGA solder joints, which is usually performed with X-ray laminography and yields poorer reconstructions than conventional CT, but their convergence speed is low. This paper explores a projection-classification-based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstructed value decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the advantage.
Firefly Algorithm for Structural Search.
Avendaño-Franco, Guillermo; Romero, Aldo H
2016-07-12
The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Seamless Merging of Hypertext and Algorithm Animation
ERIC Educational Resources Information Center
Karavirta, Ville
2009-01-01
Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.
Decryption of pure-position permutation algorithms.
Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang
2004-07-01
Pure position-permutation image encryption algorithms, a commonly used class of image ciphers investigated in this work, are unfortunately fragile under known-plaintext attack. In view of this weakness, we put forward an effective decryption algorithm for all pure position-permutation algorithms. First, a summary of pure position-permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure position-permutation algorithms is verified theoretically; next, by defining the operation system of fuzzy ergodic matrices, we develop a specific decryption algorithm. Finally, some simulation results are shown. PMID:15495308
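The structural weakness exploited above is easy to see in a toy sketch: a pure position-permutation cipher only rearranges pixel positions under a key-derived permutation and leaves the values untouched, so value statistics survive encryption. This is a hypothetical illustration, not the paper's scheme or its attack.

```python
# Toy pure position-permutation cipher: encryption permutes positions only.
import random

def keyed_permutation(n, key):
    rng = random.Random(key)          # key-derived, reproducible permutation
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encrypt(pixels, key):
    perm = keyed_permutation(len(pixels), key)
    return [pixels[perm[i]] for i in range(len(pixels))]

def decrypt(cipher, key):
    perm = keyed_permutation(len(cipher), key)
    plain = [0] * len(cipher)
    for i, p in enumerate(perm):      # invert the permutation
        plain[p] = cipher[i]
    return plain

img = [10, 20, 30, 40, 50, 60]
enc = encrypt(img, key=42)
dec = decrypt(enc, key=42)
```

Note that `sorted(enc)` equals `sorted(img)`: the multiset of pixel values is public, which is exactly the information a known-plaintext attacker uses to recover the permutation.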
Old And New Algorithms For Toeplitz Systems
NASA Astrophysics Data System (ADS)
Brent, Richard P.
1988-02-01
Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)²) algorithms and compare them with O(n²) algorithms.
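The classical "inner-product" solvers the survey refers to run in O(n²) by exploiting the Toeplitz structure. As one concrete instance, here is a sketch of Durbin's recursion for the Yule-Walker system T y = -r, where T is the symmetric Toeplitz matrix with unit diagonal built from r; variable names follow the standard textbook presentation, and this is an illustration rather than code from the paper.

```python
# Durbin's O(n^2) recursion for the Yule-Walker Toeplitz system T y = -r,
# where T[i][j] = rr[|i-j|] with rr = (1, r[0], r[1], ...).
def durbin(r):
    n = len(r)
    y = [-r[0]]
    beta, alpha = 1.0, -r[0]
    for k in range(1, n):
        beta *= (1.0 - alpha * alpha)
        # Reflection coefficient from one inner product per step.
        alpha = -(r[k] + sum(r[k - 1 - i] * y[i] for i in range(k))) / beta
        y = [y[i] + alpha * y[k - 1 - i] for i in range(k)] + [alpha]
    return y

# Example: r = (0.5, 0.2) encodes the 2x2 system [[1, 0.5], [0.5, 1]] y = -(0.5, 0.2).
y = durbin([0.5, 0.2])
```

Each step needs the inner product in the `alpha` update, which is precisely the serial bottleneck that motivates the survey's distinction between inner-product and inner-product-free algorithms.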
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of a 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
Squint mode SAR processing algorithms
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Jin, M.; Curlander, J. C.
1989-01-01
The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.
Fast algorithms for transport models
Manteuffel, T.A.
1992-12-01
The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).
ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.
Claire, Robert W.
1984-01-01
An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
Born approximation, scattering, and algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms were designed assuming the absence of multiple scattering. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
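The activity-selection problem mentioned above admits the classic greedy solution: sort by finish time and repeatedly take the first activity compatible with the last one chosen. A standard textbook sketch, not the paper's dominance-relation synthesis:

```python
# Greedy activity selection: earliest-finish-time-first.
def select_activities(intervals):
    """intervals: list of (start, finish); returns a maximum compatible subset."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:               # compatible with previous choice
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
picked = select_activities(acts)
```

In the dominance-relation view, the earliest-finishing compatible activity dominates every alternative choice at that step, which is exactly the kind of pruning argument the paper derives methodically.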
Two Algorithms for Processing Electronic Nose Data
NASA Technical Reports Server (NTRS)
Young, Rebecca; Linnell, Bruce
2007-01-01
Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.
Blind Alley Aware ACO Routing Algorithm
NASA Astrophysics Data System (ADS)
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem arises in various engineering fields and has been studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, enabling it to find the shortest route even if the map data contains blind alleys. Experiments using map data demonstrate its effectiveness in comparison with Dijkstra's algorithm, the most popular conventional routing algorithm.
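The baseline the authors benchmark against, Dijkstra's algorithm, can be sketched with a priority queue. The adjacency-dict graph format below is an assumption for illustration, unrelated to the paper's map data.

```python
# Dijkstra's shortest-path algorithm with a binary heap.
import heapq

def dijkstra(graph, source):
    """Return a dict of shortest distances from source (non-negative weights)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # skip stale queue entries
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# graph: node -> list of (neighbor, edge weight)
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)]}
dist = dijkstra(g, "a")
```

Unlike the ACO variant in the abstract, Dijkstra needs the full graph up front; the paper's point is that pheromone-plus-tabu search copes with dead ends while still recovering the shortest route.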
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
Algorithm Visualization in Teaching Practice
ERIC Educational Resources Information Center
Törley, Gábor
2014-01-01
This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.
Threshold extended ID3 algorithm
NASA Astrophysics Data System (ADS)
Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.
2012-04-01
Information exchange over insecure networks needs to provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
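The ID3 criterion underlying the proposed multiparty variant splits on the attribute with the highest information gain (entropy reduction). Here is a plain single-party sketch of that criterion; the secret-sharing and authentication layers of the paper are omitted, and the function names are illustrative.

```python
# Entropy and information gain, the splitting criterion of ID3.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting (rows, labels) on attribute index attr."""
    total = entropy(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return total - remainder

# A perfectly separating attribute yields the maximum gain of 1 bit here.
rows = [("sunny",), ("sunny",), ("rain",), ("rain",)]
labels = ["no", "no", "yes", "yes"]
gain = information_gain(rows, labels, 0)
```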
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return mission was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA through a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, load-limiting capability, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.
Adaptive color image watermarking algorithm
NASA Astrophysics Data System (ADS)
Feng, Gui; Lin, Qiwei
2008-03-01
As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. However, due to problems of data volume and color shift, watermarking techniques for color images have been less widely studied, although color images are the principal part of multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.
Simultaneous stabilization using genetic algorithms
Benson, R.W.; Schmitendorf, W.E. (Dept. of Mechanical Engineering)
1991-01-01
This paper considers the problem of simultaneously stabilizing a set of plants using full state feedback. The problem is converted to a simple optimization problem which is solved by a genetic algorithm. Several examples demonstrate the utility of this method. 14 refs., 8 figs.
Detection Algorithms: FFT vs. KLT
NASA Astrophysics Data System (ADS)
Maccone, Claudio
Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.
Adaptive protection algorithm and system
Hedrick, Paul [Pittsburgh, PA]; Toms, Helen L. [Irwin, PA]; Miller, Roger M. [Mars, PA]
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
Understanding Algorithms in Different Presentations
ERIC Educational Resources Information Center
Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János
2015-01-01
Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…
Coagulation algorithms with size binning
NASA Technical Reports Server (NTRS)
Statton, David M.; Gans, Jason; Williams, Eric
1994-01-01
The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters most appropriate for each algorithm is examined.
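The monomer-multiple form described above can be sketched with an explicit Euler step for the discrete Smoluchowski equation under a constant kernel K. In this minimal illustration (an assumption-laden sketch, not either binned algorithm from the paper), collisions producing sizes beyond the largest tracked size are dropped from both gain and loss terms, so total mass is conserved exactly within the truncated range.

```python
# One explicit-Euler step of the discrete (monomer-multiple) Smoluchowski
# equation with a constant coagulation kernel K.
def smoluchowski_step(n, K=1.0, dt=0.001):
    """n[k] is the concentration of (k+1)-mers; returns the updated list."""
    N = len(n)
    dn = [0.0] * N
    for i in range(N):
        for j in range(N):
            if i + j + 1 < N:                  # product (i+1)+(j+1) stays in range
                rate = 0.5 * K * n[i] * n[j]
                dn[i + j + 1] += rate          # gain: i-mer + j-mer -> (i+j)-mer
                dn[i] -= rate                  # matching losses of both reactants
                dn[j] -= rate
    return [nk + dt * d for nk, d in zip(n, dn)]

conc = [1.0] + [0.0] * 19                      # start from monomers only
for _ in range(100):
    conc = smoluchowski_step(conc)
```

This brute-force O(N²) step per time increment is exactly the cost that the paper's geometric size binning is designed to avoid when the size range spans many orders of magnitude.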
Nuclear models and exact algorithms
NASA Astrophysics Data System (ADS)
Bes, D. R.; Dobaczewski, J.; Draayer, J. P.; Szymański, Z.
1992-07-01
Discussion Group E on Nuclear Models and Exact Algorithms received contributions from the following individuals: L. Egido, S. Frauendorf, F. Iachello, P. Ring, H. Sagawa, W. Satula, N. C. Schmeing, M. Vincent, A. J. Zucker. The report that follows is an attempt by the leaders of the discussion to summarize the presentations and to give an impression of the subject matter.
SMAP's Radar OBP Algorithm Development
NASA Technical Reports Server (NTRS)
Le, Charles; Spencer, Michael W.; Veilleux, Louise; Chan, Samuel; He, Yutao; Zheng, Jason; Nguyen, Kayla
2009-01-01
An approach for algorithm specifications and development is described for SMAP's radar onboard processor with multi-stage demodulation and decimation bandpass digital filter. Point target simulation is used to verify and validate the filter design with the usual radar performance parameters. Preliminary FPGA implementation is also discussed.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Quartic Rotation Criteria and Algorithms.
ERIC Educational Resources Information Center
Clarkson, Douglas B.; Jennrich, Robert I.
1988-01-01
Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)
Key Concepts in Informatics: Algorithm
ERIC Educational Resources Information Center
Szlávi, Péter; Zsakó, László
2014-01-01
"The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…
Knowledge-based tracking algorithm
NASA Astrophysics Data System (ADS)
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.
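The M-out-of-N scan rule used above for track confirmation can be sketched simply: a track is declared at the first scan where at least M of the last N scans contain an associated threshold crossing. The parameters and per-scan boolean representation below are illustrative assumptions, not the KBT implementation.

```python
# M-out-of-N sliding-window detection rule for scan-to-scan integration.
def m_of_n_detect(hits, m=3, n=5):
    """hits: per-scan booleans; return the scan index where a track is declared,
    or None if the rule is never satisfied."""
    for k in range(len(hits)):
        window = hits[max(0, k - n + 1):k + 1]   # last up-to-n scans
        if sum(window) >= m:
            return k
    return None

# Noise tends to produce isolated hits; a real target produces a run of hits.
scans = [False, True, False, False, True, True, True, False]
declared = m_of_n_detect(scans)
```

Raising M suppresses false alarms from isolated noise crossings at the cost of delaying confirmation, which is the trade-off behind the lowered single-scan threshold in the abstract.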
Linear Bregman algorithm implemented in parallel GPU
NASA Astrophysics Data System (ADS)
Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping
2015-08-01
At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a widely used compressed sensing algorithm, the linearized Bregman algorithm. The linearized iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linearized Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, our results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands on information technology.
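As the abstract notes, the linearized Bregman iteration uses only matrix-vector products and soft thresholding, which is what makes it GPU-friendly. A pure-Python sketch of the iteration follows (the paper's version is C/CUDA); the parameter names mu and delta and the fixed iteration count are assumptions.

```python
# Linearized Bregman iteration: v <- v + A^T (b - A u), u <- delta * shrink(v, mu).
def shrink(x, mu):
    """Componentwise soft thresholding."""
    return [max(abs(v) - mu, 0.0) * (1 if v > 0 else -1) for v in x]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def linearized_bregman(A, b, mu=0.2, delta=1.0, iters=50):
    At = [list(col) for col in zip(*A)]        # A transpose
    n = len(At)
    v = [0.0] * n
    u = [0.0] * n
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, u))]   # residual b - A u
        v = [vi + gi for vi, gi in zip(v, matvec(At, r))]  # gradient-like update
        u = [delta * s for s in shrink(v, mu)]             # thresholding step
    return u

# With A = I the iteration converges to u = b within a couple of steps.
u = linearized_bregman([[1.0, 0.0], [0.0, 1.0]], [1.0, -0.5])
```

Both inner operations (matvec and the elementwise shrink) are embarrassingly parallel, which is the structural reason a GPU port pays off.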
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
Why is Boris Algorithm So Good?
Qin, Hong; et al.
2013-03-03
Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
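The Boris push discussed above splits the Lorentz-force update into a half electric kick, an exact magnetic rotation, and a second half electric kick. Below is a standard textbook form of the velocity update as a sketch; the field values and step size are illustrative, not from the paper.

```python
# Boris particle push: half E-kick, B-rotation, half E-kick.
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def boris_push(v, E, B, qm, dt):
    """Advance velocity one step; qm is the charge-to-mass ratio q/m."""
    h = [0.5 * qm * dt * e for e in E]
    v_minus = [vi + hi for vi, hi in zip(v, h)]            # first half E-kick
    t = [0.5 * qm * dt * bi for bi in B]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
    v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]  # B-rotation
    return [vp + hi for vp, hi in zip(v_plus, h)]          # second half E-kick

# With E = 0 the magnetic step is an exact rotation, so |v| is preserved
# step after step -- a concrete instance of the bounded-error behavior above.
v = [1.0, 0.0, 0.5]
for _ in range(1000):
    v = boris_push(v, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0], qm=1.0, dt=0.1)
speed2 = sum(vi * vi for vi in v)
```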
Why is Boris algorithm so good?
Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 ; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.
2013-08-15
Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
Tumor suppressor microRNAs are downregulated in myelodysplastic syndrome with spliceosome mutations
Aslan, Derya; Garde, Christian; Nygaard, Mette Katrine; Helbo, Alexandra Søgaard; Dimopoulos, Konstantinos; Hansen, Jakob Werner; Severinsen, Marianne Tang; Treppendahl, Marianne Bach; Sjø, Lene Dissing; Grønbæk, Kirsten; Kristensen, Lasse Sommer
2016-01-01
Spliceosome mutations are frequently observed in patients with myelodysplastic syndromes (MDS). However, it is largely unknown how these mutations contribute to the disease. MicroRNAs (miRNAs) are small noncoding RNAs, which have been implicated in most human cancers due to their role in post transcriptional gene regulation. The aim of this study was to analyze the impact of spliceosome mutations on the expression of miRNAs in a cohort of 34 MDS patients. In total, the expression of 76 miRNAs, including mirtrons and splice site overlapping miRNAs, was accurately quantified using reverse transcriptase quantitative PCR. The majority of the studied miRNAs have previously been implicated in MDS. Stably expressed miRNA genes for normalization of the data were identified using GeNorm and NormFinder algorithms. High-resolution melting assays covering all mutational hotspots within SF3B1, SRSF2, and U2AF1 (U2AF35) were developed, and all detected mutations were confirmed by Sanger sequencing. Overall, canonical miRNAs were downregulated in spliceosome mutated samples compared to wild-type (P = 0.002), and samples from spliceosome mutated patients clustered together in hierarchical cluster analyses. Among the most downregulated miRNAs were several tumor-suppressor miRNAs, including several let-7 family members, miR-423, and miR-103a. Finally, we observed that the predicted targets of the most downregulated miRNAs were involved in apoptosis, hematopoiesis, and acute myeloid leukemia among other cancer- and metabolic pathways. Our data indicate that spliceosome mutations may play an important role in MDS pathophysiology by affecting the expression of tumor suppressor miRNA genes involved in the development and progression of MDS. PMID:26848861
Wu, Jianyang; Zhang, Hongna; Liu, Liqin; Li, Weicai; Wei, Yongzan; Shi, Shengyou
2016-01-01
Reverse transcription quantitative PCR (RT-qPCR) is an accurate and sensitive method for gene expression analysis, but the veracity and reliability of its results depend on the selection of an appropriate reference gene. To date, several reliable reference gene validations have been reported in fruit trees, but none have been performed on preharvest and postharvest longan fruits. In this study, 12 candidate reference genes, namely, CYP, RPL, GAPDH, TUA, TUB, Fe-SOD, Mn-SOD, Cu/Zn-SOD, 18SrRNA, Actin, Histone H3, and EF-1a, were selected. The expression stability of these genes in 150 longan samples was evaluated and analyzed using the geNorm and NormFinder algorithms. Preharvest samples consisted of seven experimental sets, including different developmental stages, organs, hormone stimuli (NAA, 2,4-D, and ethephon) and abiotic stresses (bagging and girdling with defoliation). Postharvest samples consisted of different temperature treatments (4 and 22°C) and varieties. Our findings indicate that appropriate reference gene(s) should be selected for each experimental condition. Our data further showed that the commonly used reference gene Actin does not exhibit stable expression across experimental conditions in longan. Expression levels of the DlACO gene, a key gene involved in regulating fruit abscission under girdling with defoliation treatment, were evaluated to validate our findings. In conclusion, our data provide a useful framework for the choice of suitable reference genes across different experimental conditions for RT-qPCR analysis of preharvest and postharvest longan fruits. PMID:27375640
Hu, Yu; Xie, Shuying; Yao, Jihua
2016-01-01
Reference genes used in normalizing qRT-PCR data are critical for the accuracy of gene expression analysis. However, many traditional reference genes used in zebrafish early development are not appropriate because of their variable expression levels during embryogenesis. In the present study, we used our previous RNA-Seq dataset to identify novel reference genes suitable for gene expression analysis during zebrafish early developmental stages. We first selected the 197 most stably expressed genes from an RNA-Seq dataset (29,291 genes in total), according to the ratio of their maximum to minimum RPKM values. Among the 197 genes, 4 genes with moderate expression levels and the least variation throughout 9 developmental stages were identified as candidate reference genes. Using four independent statistical algorithms (delta-CT, geNorm, BestKeeper and NormFinder), the stability of qRT-PCR expression of these candidates was then evaluated and compared to that of actb1 and actb2, two commonly used zebrafish reference genes. Stability rankings showed that two genes, namely mobk13 (mob4) and lsm12b, were more stable than actb1 and actb2 in most cases. To further test the suitability of mobk13 and lsm12b as novel reference genes, they were used to normalize three well-studied target genes. The results showed that mobk13 and lsm12b were more suitable than actb1 and actb2 with respect to zebrafish early development. We recommend mobk13 and lsm12b as new optimal reference genes for zebrafish qRT-PCR analysis during embryogenesis and early larval stages. PMID:26891128
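The pre-screening step described in the zebrafish study above (ranking genes by the ratio of maximum to minimum RPKM across stages, then keeping the most uniform ones) can be sketched as follows. This is an illustrative sketch, not the authors' code; the gene names, RPKM values and the `min_expr` cutoff are hypothetical:

```python
def rank_by_max_min_ratio(rpkm, min_expr=1.0):
    """Rank genes by their max/min RPKM ratio across samples
    (lower ratio = more uniform expression). Genes dropping below
    min_expr in any sample are skipped, since ratios computed from
    near-zero values are dominated by noise."""
    ranked = []
    for gene, values in rpkm.items():
        if min(values) < min_expr:
            continue
        ranked.append((max(values) / min(values), gene))
    ranked.sort()
    return [gene for _, gene in ranked]

# Hypothetical RPKM values over four developmental stages
rpkm = {
    "candidate_a": [50.0, 52.0, 49.0, 51.0],  # very uniform
    "candidate_b": [10.0, 30.0, 12.0, 28.0],  # variable
    "candidate_c": [5.0, 0.2, 6.0, 5.5],      # drops out -> skipped
}
print(rank_by_max_min_ratio(rpkm))  # candidate_a ranks first
```

Such a ratio screen is only a coarse filter; as in the study, the surviving candidates still need evaluation with stability algorithms (geNorm, NormFinder, etc.) on qRT-PCR data.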
Baumann, Andre; Lehmann, Rüdiger; Beckert, Annika; Vilcinskas, Andreas; Franta, Zdeněk
2015-01-01
The larvae of the common green bottle fly Lucilia sericata (Diptera: Calliphoridae) have been used for centuries to promote wound healing, but the molecular basis of their antimicrobial, debridement and healing functions remains largely unknown. The analysis of differential gene expression in specific larval tissues before and after immune challenge could be used to identify key molecular factors, but qRT-PCR, the most sensitive and reproducible method, requires validated reference genes. We therefore selected 10 candidate reference genes encoding products from different functional classes (18S rRNA, 28S rRNA, actin, β-tubulin, RPS3, RPLP0, EF1α, PKA, GAPDH and GST1). Two widely applied algorithms (geNorm and NormFinder) were used to analyze reference gene candidates in different larval tissues associated with secretion, digestion, and antimicrobial activity (midgut, hindgut, salivary glands, crop and fat body). The Gram-negative bacterium Pseudomonas aeruginosa was then used to challenge the larval immune system, and the stability of reference gene expression was tested in comparison to three immune genes (lucimycin, defensin-1 and attacin-2), which target different pathogen classes. We observed no differential expression of the antifungal peptide lucimycin, whereas the representative targeting Gram-positive bacteria (defensin-1) was upregulated in the salivary glands, crop and nerve ganglion, and reached its maximum in the fat body (up to 300-fold). The strongest upregulation in all immune-challenged tissues (over 50,000-fold induction in the fat body) was observed for attacin-2, the representative targeting Gram-negative bacteria. Here we identified and validated a set of reference genes that allows the accurate normalization of gene expression in specific tissues of L. sericata after immune challenge. PMID:26252388
Barros Rodrigues, Thaís; Khajuria, Chitvan; Wang, Haichuan; Matz, Natalie; Cunha Cardoso, Danielle; Valicente, Fernando Hercos; Zhou, Xuguo; Siegfried, Blair
2014-01-01
Quantitative Real-time PCR (qRT-PCR) is a powerful technique to investigate comparative gene expression. In general, normalization of results using a highly stable housekeeping gene (HKG) as an internal control is recommended and necessary. However, there are several reports suggesting that regulation of some HKGs is affected by different conditions. The western corn rootworm (WCR), Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is a serious pest of corn in the United States and Europe. The expression profiling of target genes related to insecticide exposure, resistance, and RNA interference has become an important experimental technique for the study of western corn rootworm; however, the lack of information on reliable HKGs under different conditions makes the interpretation of qRT-PCR results difficult. In this study, four distinct algorithms (geNorm, NormFinder, BestKeeper and delta-CT) and five candidate HKGs (β-actin; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; β-tubulin; RPS9, ribosomal protein S9; EF1a, elongation factor-1α) were evaluated to determine the most reliable HKG under different experimental conditions, including exposure to dsRNA and Bt toxins and among different tissues and developmental stages. Although all the HKGs tested exhibited relatively stable expression among the different treatments, some differences were noted. Among the five candidate reference genes evaluated, β-actin exhibited highly stable expression among different life stages. RPS9 exhibited the most similar pattern of expression among dsRNA treatments, and both experiments indicated that EF1a was the second most stable gene. EF1a was also the most stable for Bt exposure and among different tissues. These results will enable researchers to use more accurate and reliable normalization of qRT-PCR data in WCR experiments. PMID:25356627
Koči, Juraj; Šimo, Ladislav; Park, Yoonseong
2013-01-01
Obtaining reliable gene expression data using real-time quantitative polymerase chain reaction (qPCR) is highly dependent on the choice of normalization method. We tested the expression stability of multiple candidate genes in the salivary glands (SG) and synganglia (SYN) of female Ixodes scapularis (Say) ticks across multiple blood-feeding phases. We found that the amount of total RNA in both the SG and SYN increases dramatically during tick feeding, rising 34-fold and 5.8-fold from 62 and 7.1 ng, respectively, in unfed ticks. We tested candidate genes predicted from the I. scapularis genome data to encode glyceraldehyde 3-phosphate dehydrogenase (gapdh), ribosomal protein L13A (l13a), TATA box-binding protein (tbp), ribosomal protein S4 (rps4), glucose 6-phosphate dehydrogenase (gpdh), and beta-glucuronidase (gusb). The geNorm and NormFinder algorithms were used to analyze data from different feeding phases (i.e., daily samples from unfed to fully engorged females over a 7-d period in three replicate experiments). We found that the rps4 and l13a genes showed highly stable expression patterns over the feeding duration in both the SG and SYN. Furthermore, the high expression of the rps4 gene makes it useful as a normalization factor in studies using minute amounts of dissected tissue for qPCR. We conclude that rps4 and l13a, whether individually or as a pair, serve as suitable internal reference genes for qRT-PCR studies in the SG and SYN of I. scapularis. PMID:23427655
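The geNorm stability measure applied in this and many of the surrounding studies reduces to a simple computation: for each candidate gene, the stability value M is the average, over all other candidates, of the standard deviation of the pairwise log2 expression ratios across samples (lower M = more stable). A minimal sketch with invented expression values, reusing the tick gene names purely as labels:

```python
import math
from statistics import mean, stdev

def genorm_m(expr):
    """geNorm-style stability value M for each candidate reference gene.
    expr maps gene -> linear-scale expression values, one per sample.
    M_g = mean over all other genes h of stdev(log2(g) - log2(h))."""
    logs = {g: [math.log2(v) for v in vals] for g, vals in expr.items()}
    m = {}
    for g in logs:
        sds = [stdev(a - b for a, b in zip(logs[g], logs[h]))
               for h in logs if h != g]
        m[g] = mean(sds)
    return m

# Hypothetical data: two perfectly co-stable candidates and one
# variable one (NOT measured values from the study)
expr = {
    "rps4": [1.0, 1.0, 1.0, 1.0],
    "l13a": [2.0, 2.0, 2.0, 2.0],
    "gapdh": [1.0, 4.0, 1.0, 4.0],
}
m = genorm_m(expr)
# gapdh gets the highest (worst) M; rps4 and l13a tie at the lowest
```

The full geNorm procedure additionally removes the worst-ranked gene and recomputes M iteratively, and uses pairwise variation to decide how many reference genes are needed; this sketch shows only the core M calculation.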
Gharbi, Sedigheh; Shamsara, Mehdi; Khateri, Shahriar; Soroush, Mohammad Reza; Ghorbanmehr, Nassim; Tavallaei, Mahmood; Nourani, Mohammad Reza; Mowla, Seyed Javad
2015-01-01
Objective In spite of accumulating information about the pathological aspects of sulfur mustard (SM), the precise mechanism responsible for its effects is not well understood. Circulating microRNAs (miRNAs) are promising biomarkers for disease diagnosis and prognosis. Accurate normalization using appropriate reference genes is a critical step in miRNA expression studies. In this study, we aimed to identify appropriate reference genes for microRNA quantification in serum samples of SM victims. Materials and Methods In this case-control experimental study, using quantitative real-time polymerase chain reaction (qRT-PCR), we evaluated the suitability of a panel of small RNAs including SNORD38B, SNORD49A, U6, 5S rRNA, miR-423-3p, miR-191, miR-16 and miR-103 in sera of 28 SM-exposed veterans of the Iran-Iraq war (1980-1988) and 15 matched control volunteers. Different statistical algorithms including geNorm, NormFinder, BestKeeper and the comparative delta-quantification cycle (Cq) method were employed to find the least variable reference gene. Results miR-423-3p was identified as the most stably expressed reference gene, with miR-103 and miR-16 ranked after it. Conclusion We demonstrate that non-miRNA reference genes have the least stability in serum samples and that some housekeeping miRNAs may be used as more reliable reference genes for miRNAs in serum. In addition, using the geometric mean of two reference genes could increase the reliability of the normalizers. PMID:26464821
Screening Reliable Reference Genes for RT-qPCR Analysis of Gene Expression in Moringa oleifera
Deng, Li-Ting; Wu, Yu-Ling; Li, Jun-Cheng; OuYang, Kun-Xi; Ding, Mei-Mei; Zhang, Jun-Jie; Li, Shu-Qi; Lin, Meng-Fei; Chen, Han-Bin; Hu, Xin-Sheng; Chen, Xiao-Yang
2016-01-01
Moringa oleifera is a promising plant species for oil and forage, but its genetic improvement is limited. Our current breeding program in this species focuses on exploiting the functional genes associated with important agronomical traits. Here, we screened reliable reference genes for accurately quantifying the expression of target genes using the technique of real-time quantitative polymerase chain reaction (RT-qPCR) in M. oleifera. Eighteen candidate reference genes were selected from a transcriptome database, and their expression stabilities were examined in 90 samples collected from pods in different developmental stages, various tissues, and the roots and leaves under different conditions (low or high temperature, sodium chloride (NaCl)- or polyethylene glycol (PEG)-simulated water stress). Analyses with the geNorm, NormFinder and BestKeeper algorithms revealed that the reliable reference genes differed across sample designs and that ribosomal protein L1 (RPL1) and acyl carrier protein 2 (ACP2) were the most suitable reference genes in all tested samples. The experimental results demonstrated the significance of using properly validated reference genes and suggested the use of more than one reference gene to achieve reliable expression profiles. In addition, we applied three isotypes of the superoxide dismutase (SOD) gene that are associated with plant adaptation to abiotic stress to confirm the efficacy of the validated reference genes under NaCl and PEG water stresses. Our results provide a valuable reference for future studies on identifying important functional genes from their transcriptional expression via the RT-qPCR technique in M. oleifera. PMID:27541138
Reference gene alternatives to Gapdh in rodent and human heart failure gene expression studies
2010-01-01
Background Quantitative real-time RT-PCR (RT-qPCR) is a highly sensitive method for mRNA quantification, but requires invariant expression of the chosen reference gene(s). In pathological myocardium, there is limited information on suitable reference genes other than the commonly used Gapdh mRNA and 18S ribosomal RNA. Our aim was to evaluate and identify suitable reference genes in human failing myocardium, in rat and mouse post-myocardial infarction (post-MI) heart failure and across developmental stages in fetal and neonatal rat myocardium. Results The abundance of Arbp, Rpl32, Rpl4, Tbp, Polr2a, Hprt1, Pgk1, Ppia and Gapdh mRNA and 18S ribosomal RNA in myocardial samples was quantified by RT-qPCR. The expression variability of these transcripts was evaluated by the geNorm and NormFinder algorithms and by a variance component analysis method. Biological variability was a greater contributor to sample variability than either repeated reverse transcription or PCR reactions. Conclusions The most stable reference genes were Rpl32, Gapdh and Polr2a in mouse post-infarction heart failure, Polr2a, Rpl32 and Tbp in rat post-infarction heart failure and Rpl32 and Pgk1 in human heart failure (ischemic disease and cardiomyopathy). The overall most stable reference genes across all three species were Rpl32 and Polr2a. In rat myocardium, all reference genes tested showed substantial variation with developmental stage, with Rpl4 being the most stable among the tested genes. PMID:20331858
Optimal Reference Genes for Gene Expression Normalization in Trichomonas vaginalis
dos Santos, Odelta; de Vargas Rigo, Graziela; Frasson, Amanda Piccoli; Macedo, Alexandre José; Tasca, Tiana
2015-01-01
Trichomonas vaginalis is the etiologic agent of trichomonosis, the most common non-viral sexually transmitted disease worldwide. This infection is associated with several health consequences, including cervical and prostate cancers and HIV acquisition. Gene expression analysis has been facilitated by the available genome sequences and large-scale transcriptomes of T. vaginalis, particularly using quantitative real-time polymerase chain reaction (qRT-PCR), one of the most used methods for molecular studies. Reference genes for normalization are crucial to ensure the accuracy of this method. However, to the best of our knowledge, a systematic validation of reference genes has not been performed for T. vaginalis. In this study, the transcripts of nine candidate reference genes were quantified using qRT-PCR under different cultivation conditions, and the stability of these genes was compared using the geNorm and NormFinder algorithms. The most stable reference genes were α-tubulin, actin and DNATopII, and, conversely, the widely used T. vaginalis reference genes GAPDH and β-tubulin were less stable. The PFOR gene was used to validate the reliability of the use of these candidate reference genes. As expected, the PFOR gene was upregulated when the trophozoites were cultivated with ferrous ammonium sulfate and the DNATopII, α-tubulin and actin genes were used as normalizing genes. By contrast, the PFOR gene was downregulated when the GAPDH gene was used as an internal control, leading to misinterpretation of the data. These results provide an important starting point for reference gene selection and gene expression analysis in qRT-PCR studies of T. vaginalis. PMID:26393928
Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria
2016-05-01
Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability obtained with eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples as a single set did not result in the selection of any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit. PMID:26897117
Müller, Oliver A.; Grau, Jan; Thieme, Sabine; Prochaska, Heike; Adlung, Norman; Sorgatz, Anika; Bonas, Ulla
2015-01-01
The Gram-negative bacterium Xanthomonas campestris pv. vesicatoria (Xcv) causes bacterial spot disease of pepper and tomato by direct translocation of type III effector proteins into the plant cell cytosol. Once in the plant cell the effectors interfere with host cell processes and manipulate the plant transcriptome. Quantitative RT-PCR (qRT-PCR) is usually the method of choice to analyze transcriptional changes of selected plant genes. Reliable results depend, however, on measuring stably expressed reference genes that serve as internal normalization controls. We identified the most stably expressed tomato genes based on microarray analyses of Xcv-infected tomato leaves and evaluated the reliability of 11 genes for qRT-PCR studies in comparison to four traditionally employed reference genes. Three different statistical algorithms, geNorm, NormFinder and BestKeeper, concordantly determined the superiority of the newly identified reference genes. The most suitable reference genes encode proteins with homology to PHD finger family proteins and the U6 snRNA-associated protein LSm7. In addition, we identified pepper orthologs and validated several genes as reliable normalization controls for qRT-PCR analysis of Xcv-infected pepper plants. The newly identified reference genes will be beneficial for future qRT-PCR studies of the Xcv-tomato and Xcv-pepper pathosystems, as well as for the identification of suitable normalization controls for qRT-PCR studies of other plant-pathogen interactions, especially if related plant species are used in combination with bacterial pathogens. PMID:26313760
Niu, Longjian; Tao, Yan-Bin; Chen, Mao-Sheng; Fu, Qiantang; Li, Chaoqiong; Dong, Yuling; Wang, Xiulan; He, Huiying; Xu, Zeng-Fu
2015-01-01
Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A22 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi. PMID:26047338
Reddy, Dumbala Srinivas; Bhatnagar-Mathur, Pooja; Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar
2016-01-01
Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes, including traditional and new generation reference genes, were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms, used to identify stably expressed genes in four sample sets, revealed stable expression of the UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR-based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of the aquaporin genes PIP1;4 and TIP3;1 in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses. PMID:26863232
Wu, Zhi-Jun; Tian, Chang; Jiang, Qian; Li, Xing-Hui; Zhuang, Jing
2016-01-01
Tea plant (Camellia sinensis) leaf is an important non-alcoholic beverage resource. The application of quantitative real time polymerase chain reaction (qRT-PCR) has profound significance for gene expression studies of tea plant, especially when applied to tea leaf development and metabolism. In this study, nine candidate reference genes (i.e., CsACT7, CsEF-1α, CseIF-4α, CsGAPDH, CsPP2A, CsSAND, CsTBP, CsTIP41, and CsTUB) of C. sinensis were cloned. The quantitative expression data of these genes were investigated in five tea leaf developmental stages (i.e., 1st, 2nd, 3rd, 4th, and older leaves) and in normally growing tea leaves subjected to five hormonal stimuli (i.e., ABA, GA, IAA, MeJA, and SA), and gene expression stability was calculated using three common statistical algorithms, namely, geNorm, NormFinder, and BestKeeper. Results indicated that CsTBP and CsTIP41 were the most stable genes in tea leaf development and CsTBP was the best gene under hormonal stimuli; by contrast, the CsGAPDH and CsTUB genes showed the least stability. The gene expression profile of the CsNAM gene was analyzed to confirm the validity of the reference genes in this study. Our data provide a basis for the selection of reference genes for future biological research on the leaf development and hormonal stimuli of C. sinensis. PMID:26813576
Ji, Nanjing; Li, Ling; Lin, Lingxiao; Lin, Senjie
2015-01-01
The raphidophyte Heterosigma akashiwo is a globally distributed harmful alga that has been associated with fish kills in coastal waters. To understand the mechanisms of H. akashiwo bloom formation, gene expression analysis is often required. To accurately characterize the expression levels of a gene of interest, proper reference genes are essential. In this study, we assessed ten of the previously reported algal candidate genes (rpL17-2, rpL23, cox2, cal, tua, tub, ef1, 18S, gapdh, and mdh) for their suitability as reference genes in this species. We used qRT-PCR to quantify the expression levels of these genes in H. akashiwo grown under different temperatures, light intensities, nutrient concentrations, and time points over a diel cycle. The expression stability of these genes was evaluated using the geNorm and NormFinder algorithms. Although none of these genes exhibited invariable expression levels, cal, tub, rpL17-2 and rpL23 expression levels were the most stable across the different conditions tested. For further validation, these selected genes were used to normalize the expression levels of the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (HrbcL) over a diel cycle. Results showed that the expression of HrbcL normalized against each of these reference genes was highest at midday and lowest at midnight, similar to the diel patterns typically documented for this gene in algae. While the validated reference genes will be useful for future gene expression studies on H. akashiwo, we expect that the procedure used in this study may be helpful to future efforts to screen reference genes for other algae. PMID:26133173
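Normalizing a target gene against several validated reference genes, as done for HrbcL in the study above, is commonly computed with the 2^-ΔΔCt method, where the reference Cts are averaged (an arithmetic mean in Ct space corresponds to a geometric mean of expression levels). A minimal sketch with invented Ct values, not measurements from the study:

```python
def fold_change(ct_target, ref_cts, ct_target_cal, ref_cts_cal):
    """Relative expression by the 2^-ddCt method, normalized against
    the mean Ct of several reference genes (equivalent to a geometric
    mean of their expression levels). The *_cal arguments are the Cts
    of the calibrator sample the result is expressed relative to."""
    d_ct = ct_target - sum(ref_cts) / len(ref_cts)
    d_ct_cal = ct_target_cal - sum(ref_cts_cal) / len(ref_cts_cal)
    return 2.0 ** -(d_ct - d_ct_cal)

# Hypothetical Cts: target at midday vs. a midnight calibrator,
# normalized against two reference genes
midday = fold_change(20.0, [18.0, 18.0], 24.0, [21.0, 21.0])
# ddCt = (20 - 18) - (24 - 21) = -1  ->  fold change of 2.0
```

This assumes roughly 100% amplification efficiency for all assays; efficiency-corrected variants replace the base 2 with the measured per-assay efficiency.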
Selection and Validation of Reference Genes for Quantitative Real-time PCR in Gentiana macrophylla
He, Yihan; Yan, Hailing; Hua, Wenping; Huang, Yaya; Wang, Zhezhi
2016-01-01
Real-time quantitative PCR (RT-qPCR or qPCR) has been extensively applied for analyzing gene expression because of its accuracy, sensitivity, and high throughput. However, the unsuitable choice of reference gene(s) can lead to a misinterpretation of results. We evaluated the stability of 10 candidates – five traditional housekeeping genes (UBC21, GAPC2, EF-1α4, UBQ10, and UBC10) and five novel genes (SAND1, FBOX, PTB1, ARP, and Expressed1) – using the transcriptome data of Gentiana macrophylla. Common statistical algorithms ΔCt, geNorm, NormFinder, and BestKeeper were run with samples collected from plants under various experimental conditions. For normalizing expression levels from tissues at different developmental stages, GAPC2 and UBC21 had the highest rankings. Both SAND1 and GAPC2 proved to be the optimal reference genes for roots from plants exposed to abiotic stresses, while EF-1α4 and SAND1 were optimal when examining expression data from the leaves of stressed plants. Based on a comprehensive ranking of stability under different experimental conditions, we recommend SAND1 and EF-1α4 as the most suitable overall. In this study, to find a suitable reference gene and its real-time PCR assay for G. macrophylla DNA content quantification, we evaluated three target genes, including WRKY30, G10H, and SLS, through qualitative and absolute quantitative PCR with leaves under elicitor-stressed experimental conditions. Arbitrary use of reference genes without previous evaluation can lead to a misinterpretation of the data. Our results will benefit future research on the expression of genes related to secoiridoid biosynthesis in this species under different experimental conditions. PMID:27446172
Siegfried, Blair D.; Zhou, Xuguo
2015-01-01
Reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) is a reliable, rapid, and reproducible technique for measuring and evaluating changes in gene expression. To facilitate gene expression studies and obtain more accurate RT-qPCR data, normalization relative to stable reference genes is required. In this study, expression profiles of seven candidate reference genes, including β-actin (Actin), elongation factor 1 α (EF1A), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), cyclophilin A (CypA), vacuolar-type H+-ATPase (ATPase), 28S ribosomal RNA (28S), and 18S ribosomal RNA (18S) from Hippodamia convergens were investigated. H. convergens is an abundant predatory species in the New World, and has been widely used as a biological control agent against sap-sucking insect pests, primarily aphids. A total of four analytical methods, geNorm, NormFinder, BestKeeper, and the ΔCt method, were employed to evaluate the performance of these seven genes as endogenous controls under diverse experimental conditions. Additionally, RefFinder, a comprehensive evaluation platform integrating the four above-mentioned algorithms, ranked the overall stability of these candidate genes. A suite of reference genes was specifically recommended for each experimental condition. Among them, 28S, EF1A, and CypA were the best reference genes across different developmental stages; GAPDH, 28S, and CypA were most stable in different tissues; GAPDH and CypA were most stable in female and male adults and under different photoperiod conditions; 28S and EF1A were most stable under a range of temperatures; and Actin and CypA were most stable under dietary RNAi conditions. This work establishes a standardized RT-qPCR analysis in H. convergens. Additionally, this study lays a foundation for functional genomics research in H. convergens and sheds light on the ecological risk assessment of RNAi-based biopesticides on this non-target biological control agent. PMID:25915640
Screening Reliable Reference Genes for RT-qPCR Analysis of Gene Expression in Moringa oleifera.
Deng, Li-Ting; Wu, Yu-Ling; Li, Jun-Cheng; OuYang, Kun-Xi; Ding, Mei-Mei; Zhang, Jun-Jie; Li, Shu-Qi; Lin, Meng-Fei; Chen, Han-Bin; Hu, Xin-Sheng; Chen, Xiao-Yang
2016-01-01
Moringa oleifera is a promising plant species for oil and forage, but its genetic improvement is limited. Our current breeding program in this species focuses on exploiting the functional genes associated with important agronomical traits. Here, we screened reliable reference genes for accurately quantifying the expression of target genes using the technique of real-time quantitative polymerase chain reaction (RT-qPCR) in M. oleifera. Eighteen candidate reference genes were selected from a transcriptome database, and their expression stabilities were examined in 90 samples collected from pods in different developmental stages, various tissues, and the roots and leaves under different conditions (low or high temperature, sodium chloride (NaCl)- or polyethylene glycol (PEG)-simulated water stress). Analyses with the geNorm, NormFinder and BestKeeper algorithms revealed that the reliable reference genes differed across sample designs and that ribosomal protein L1 (RPL1) and acyl carrier protein 2 (ACP2) were the most suitable reference genes in all tested samples. The experimental results demonstrated the significance of using properly validated reference genes and suggested the use of more than one reference gene to achieve reliable expression profiles. In addition, we applied three isotypes of the superoxide dismutase (SOD) gene that are associated with plant adaptation to abiotic stress to confirm the efficacy of the validated reference genes under NaCl and PEG water stresses. Our results provide a valuable reference for future studies on identifying important functional genes from their transcriptional expression via the RT-qPCR technique in M. oleifera. PMID:27541138
Lemma, Silvia; Avnet, Sofia; Salerno, Manuela; Chano, Tokuhiro; Baldini, Nicola
2016-01-01
The characterization of the cancer stem cell (CSC) subpopulation, through comparison of its gene expression signature with that of the native cancer cells, is particularly important for the identification of novel and more effective anticancer strategies. However, CSC have peculiar characteristics in terms of adhesion, growth, and metabolism that possibly imply a different modulation of the expression of the most commonly used housekeeping genes (HKG), like β-actin (ACTB). Although it is crucial to identify the most stable HKG for normalizing data derived from quantitative Real-Time PCR analysis to obtain robust and consistent results, an exhaustive validation of reference genes in CSC is still missing. Here, we isolated CSC spheres from different musculoskeletal sarcomas and carcinomas as a model to investigate the stability of the mRNA expression of 15 commonly used HKG, with respect to the native cells. The selected genes were analysed for the coefficient of variation and compared using the popular algorithms NormFinder and geNorm to evaluate stability ranking. As a result, we found that: 1) TATA-binding protein (TBP), Tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein zeta polypeptide (YWHAZ), Peptidylprolyl isomerase A (PPIA), and Hydroxymethylbilane synthase (HMBS) are the most stable HKG for the comparison between CSC and native cells; 2) at least four reference genes should be considered for robust results; 3) the use of ACTB is not recommended; and 4) specific HKG should be considered for studies that focus only on a specific tumor type, like sarcoma or carcinoma. Our results should be taken into consideration for all studies of gene expression analysis in CSC, and will substantially contribute to future investigations aimed at identifying novel anticancer therapies based on CSC targeting. PMID:26894994
Jacobsen, Annette V.; Yemaneab, Bisrat T.; Jass, Jana; Scherbak, Nikolai
2014-01-01
The ability of commensal bacteria to influence gene expression in host cells under the influence of pathogenic bacteria has previously been demonstrated; however, the extent of this interaction is important for understanding how bacteria can be used as probiotics. Real-time quantitative polymerase chain reaction is the most sensitive tool for evaluating relative changes in gene expression levels. However, as a result of its sensitivity, an appropriate method of normalisation should be used to account for any variation incurred in preparatory experimental procedures. These variations may result from differences in the amount of starting material, the quality of extracted RNA, or the efficiency of the reverse transcriptase or polymerase enzymes. Selection of an endogenous control gene is the preferred method of normalisation, and ideally a proper validation of the gene's appropriateness for the study in question should be performed. In this study we used quantitative polymerase chain reaction data and applied four different algorithms (geNorm, BestKeeper, NormFinder, and comparative ΔCq) to evaluate eleven different genes as to their suitability as endogenous controls for use in studies involving colonic (HT-29) and vaginal (VK2/E6E7) human mucosal epithelial cells treated with probiotic and pathogenic bacteria. We found phosphoglycerate kinase 1 to be most appropriate for HT-29 cells, and ribosomal protein large P0 to be the best choice for VK2/E6E7 cells. We also showed that the use of less stable reference genes can lead to less accurate quantification of the expression levels of a gene of interest (GOI) and can result in decreased statistical significance for GOI expression levels when compared to controls. Additionally, we found that the cell type being analysed had greater influence on reference gene selection than the treatment performed. This study provides recommendations for stable endogenous control genes for use in further studies involving colonic and vaginal cell
Rodrigues, Thaís Barros; Barros Rodrigues, Thaís; Khajuria, Chitvan; Wang, Haichuan; Matz, Natalie; Cunha Cardoso, Danielle; Valicente, Fernando Hercos; Zhou, Xuguo; Siegfried, Blair
2014-01-01
Quantitative Real-time PCR (qRT-PCR) is a powerful technique to investigate comparative gene expression. In general, normalization of results using a highly stable housekeeping gene (HKG) as an internal control is recommended and necessary. However, there are several reports suggesting that regulation of some HKGs is affected by different conditions. The western corn rootworm (WCR), Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is a serious pest of corn in the United States and Europe. The expression profiling of target genes related to insecticide exposure, resistance, and RNA interference has become an important experimental technique for the study of western corn rootworms; however, the lack of information on reliable HKGs under different conditions makes the interpretation of qRT-PCR results difficult. In this study, four distinct algorithms (geNorm, NormFinder, BestKeeper and delta-CT) were used to evaluate five candidate reference genes (β-actin; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; β-tubulin; RPS9, ribosomal protein S9; and EF1a, elongation factor-1α) to determine the most reliable HKG under different experimental conditions, including exposure to dsRNA and Bt toxins and among different tissues and developmental stages. Although all the HKGs tested exhibited relatively stable expression among the different treatments, some differences were noted. Among the five candidate reference genes evaluated, β-actin exhibited highly stable expression among different life stages. RPS9 exhibited the most similar pattern of expression among dsRNA treatments, and both experiments indicated that EF1a was the second most stable gene. EF1a was also the most stable for Bt exposure and among different tissues. These results will enable researchers to use more accurate and reliable normalization of qRT-PCR data in WCR experiments. PMID:25356627
Ji, Nanjing; Li, Ling; Lin, Lingxiao; Lin, Senjie
2015-01-01
The raphidophyte Heterosigma akashiwo is a globally distributed harmful alga that has been associated with fish kills in coastal waters. To understand the mechanisms of H. akashiwo bloom formation, gene expression analysis is often required. To accurately characterize the expression levels of a gene of interest, proper reference genes are essential. In this study, we assessed ten of the previously reported algal candidate genes (rpL17-2, rpL23, cox2, cal, tua, tub, ef1, 18S, gapdh, and mdh) for their suitability as reference genes in this species. We used qRT-PCR to quantify the expression levels of these genes in H. akashiwo grown under different temperatures, light intensities, nutrient concentrations, and time points over a diel cycle. The expression stability of these genes was evaluated using the geNorm and NormFinder algorithms. Although none of these genes exhibited invariable expression levels, cal, tub, rpL17-2 and rpL23 expression levels were the most stable across the different conditions tested. For further validation, these selected genes were used to normalize the expression levels of ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (HrbcL) over a diel cycle. Results showed that the expression of HrbcL normalized against each of these reference genes was the highest at midday and lowest at midnight, similar to the diel patterns typically documented for this gene in algae. While the validated reference genes will be useful for future gene expression studies on H. akashiwo, we expect that the procedure used in this study may be helpful to future efforts to screen reference genes for other algae. PMID:26133173
Bae, In-Seon; Chung, Ki Yong; Yi, Jongmin; Kim, Tae Il; Choi, Hwa-Sik; Cho, Young-Moo; Choi, Inho; Kim, Sang Hoon
2015-01-01
Circulating microRNAs in body fluids have been implicated as promising biomarkers for physiopathological disorders. Currently, the expression levels of circulating microRNAs are estimated by reverse transcription quantitative real-time polymerase chain reaction. Use of appropriate reference microRNAs for normalization is critical for accurate microRNA expression analysis. However, no study has systematically investigated reference genes for evaluating circulating microRNA expression in cattle. In this study, we describe the identification and characterization of appropriate reference microRNAs for use in the normalization of circulating microRNA levels in bovine serum. We evaluated the expression stability of ten candidate reference genes in bovine serum by using reverse transcription quantitative real-time polymerase chain reaction. Data were analyzed using the geNorm, NormFinder, and BestKeeper statistical algorithms. The results consistently showed that a combination of miR-93 and miR-127 provided the most stably expressed reference. The suitability of these microRNAs was validated, and even when compared among different genders or breeds, the combination of miR-93 and miR-127 was ranked as the most stable microRNA reference. Therefore, we conclude that this combination is the optimal endogenous reference for reverse transcription quantitative real-time polymerase chain reaction-based detection of microRNAs in bovine serum. The data presented in this study are crucial to successful biomarker discovery and validation for the diagnosis of physiopathological conditions in cattle. PMID:25826387
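When a combination of reference genes is recommended, as with miR-93 plus miR-127 above, the usual practice (following geNorm) is to normalize against the geometric mean of the references' relative quantities. A minimal sketch, assuming ~100% amplification efficiency (one Ct = a twofold difference); the Ct values in the example are invented:

```python
import math

def relative_expression(ct_target, ct_refs):
    """Express a target gene relative to a multi-gene normalization factor:
    the geometric mean of the reference genes' relative quantities.
    Assumes ~100% PCR efficiency, so quantity Q = 2^(-Ct) up to a
    constant that cancels in the ratio.

    ct_target: Ct of the gene of interest in one sample.
    ct_refs: Ct values of the reference genes in the same sample.
    """
    q_target = 2.0 ** (-ct_target)
    q_refs = [2.0 ** (-ct) for ct in ct_refs]
    norm_factor = math.prod(q_refs) ** (1.0 / len(q_refs))  # geometric mean
    return q_target / norm_factor

# Hypothetical serum sample: target miRNA at Ct 25, two references
# (e.g. miR-93 and miR-127 from the abstract) at Ct 22 and 24.
print(relative_expression(25.0, [22.0, 24.0]))
```

Using the geometric rather than arithmetic mean keeps the normalization factor from being dominated by the most abundant reference.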
Li, Xiaoshuang; Zhang, Daoyuan; Li, Haiyan; Gao, Bei; Yang, Honglan; Zhang, Yuanming; Wood, Andrew J.
2015-01-01
Syntrichia caninervis is the dominant bryophyte of the biological soil crusts found in the Gurbantunggut desert. The extreme desert environment is characterized by prolonged drought, temperature extremes, high radiation and frequent cycles of hydration and dehydration. S. caninervis is an ideal organism for the identification and characterization of genes related to abiotic stress tolerance. Reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) expression analysis is a powerful analytical technique that requires the use of stable reference genes. Using available S. caninervis transcriptome data, we selected 15 candidate reference genes and analyzed their relative expression stabilities in S. caninervis gametophores exposed to a range of abiotic stresses or a hydration-desiccation-rehydration cycle. The programs geNorm, NormFinder, and RefFinder were used to assess and rank the expression stability of the 15 candidate genes. The stability ranking results of reference genes under each specific experimental condition showed high consistency across the different algorithms. For abiotic stress treatments, the combination of two genes (α-TUB2 and CDPK) was sufficient for accurate normalization. For the hydration-desiccation-rehydration process, the combination of two genes (α-TUB1 and CDPK) was sufficient for accurate normalization. 18S was among the least stable genes in all of the experimental sets and was unsuitable as a reference gene in S. caninervis. This is the first systematic investigation and comparison of reference gene selection for RT-qPCR work in S. caninervis. This research will facilitate gene expression studies in S. caninervis, related moss species from the Syntrichia complex and other mosses. PMID:25699066
Yang, Yuting; Zhang, Xu; Chen, Yun; Guo, Jinlong; Ling, Hui; Gao, Shiwu; Su, Yachun; Que, Youxiong; Xu, Liping
2016-01-01
Sugarcane, accounting for 80% of the world's sugar, originates in the tropics but is cultivated mainly in the subtropics. Therefore, chilling injury frequently occurs and results in serious losses. Recent studies in various plant species have established microRNAs as key elements in the post-transcriptional regulation of responses to biotic and abiotic stresses, including cold stress. Quantitative PCR is undoubtedly a popular method for the identification of microRNAs, though its accuracy is largely influenced by the reference gene used for normalization. To identify the most suitable reference genes for normalizing miRNA expression in sugarcane under cold stress, 13 of 17 candidates were investigated using four algorithms: geNorm, NormFinder, deltaCt, and BestKeeper; the other four candidates were excluded because of unsatisfactory efficiency and specificity. Verification was carried out using the cold-related genes miR319 and miR393 in cold-tolerant and cold-sensitive cultivars. The results suggested that miR171/18S rRNA and miR171/miR5059 were the best reference gene sets for normalization in miRNA RT-qPCR, followed by the single genes miR171 and 18S rRNA. These results can aid research on miRNA responses during sugarcane stress, and the development of sugarcane tolerant to cold stress. This study is the first report concerning reference gene selection for miRNA RT-qPCR in sugarcane. PMID:26904058
Optimal Reference Genes for Gene Expression Normalization in Trichomonas vaginalis.
dos Santos, Odelta; de Vargas Rigo, Graziela; Frasson, Amanda Piccoli; Macedo, Alexandre José; Tasca, Tiana
2015-01-01
Trichomonas vaginalis is the etiologic agent of trichomonosis, the most common non-viral sexually transmitted disease worldwide. This infection is associated with several health consequences, including cervical and prostate cancers and HIV acquisition. Gene expression analysis has been facilitated by the available genome sequence and large-scale transcriptomes of T. vaginalis, particularly using quantitative real-time polymerase chain reaction (qRT-PCR), one of the most widely used methods for molecular studies. Reference genes for normalization are crucial to ensure the accuracy of this method. However, to the best of our knowledge, a systematic validation of reference genes has not been performed for T. vaginalis. In this study, the transcripts of nine candidate reference genes were quantified using qRT-PCR under different cultivation conditions, and the stability of these genes was compared using the geNorm and NormFinder algorithms. The most stable reference genes were α-tubulin, actin and DNATopII; conversely, the widely used T. vaginalis reference genes GAPDH and β-tubulin were less stable. The PFOR gene was used to validate the reliability of these candidate reference genes. As expected, the PFOR gene was upregulated when the trophozoites were cultivated with ferrous ammonium sulfate and the DNATopII, α-tubulin and actin genes were used as normalizing genes. By contrast, the PFOR gene appeared downregulated when the GAPDH gene was used as an internal control, leading to misinterpretation of the data. These results provide an important starting point for reference gene selection and gene expression analysis in qRT-PCR studies of T. vaginalis. PMID:26393928
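The PFOR example above illustrates why the choice of internal control matters: fold changes computed with the standard Livak 2^-ΔΔCt formula flip direction if the "reference" itself responds to the treatment. A minimal sketch, assuming ~100% PCR efficiency for both assays; all Ct values below are invented for illustration:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Livak 2^-ΔΔCt relative quantification (a sketch; assumes ~100%
    amplification efficiency for target and reference assays)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    return 2.0 ** -(d_ct_treated - d_ct_control)        # 2^-ΔΔCt

# A PFOR-like target: Ct 24 treated vs 26 control.
# With a stable reference (Ct 20 in both samples), the target is ~4-fold up:
print(fold_change(24.0, 20.0, 26.0, 20.0))
# If the "reference" is itself induced by the treatment (its Ct drops
# from 20 to 17), the very same target data appear down-regulated:
print(fold_change(24.0, 17.0, 26.0, 20.0))
```

The second call reproduces, in miniature, the misinterpretation the abstract describes when an unstable gene such as GAPDH is used as the internal control.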
Hu, Yu; Xie, Shuying; Yao, Jihua
2016-01-01
Reference genes used in normalizing qRT-PCR data are critical for the accuracy of gene expression analysis. However, many traditional reference genes used in zebrafish early development are not appropriate because of their variable expression levels during embryogenesis. In the present study, we used our previous RNA-Seq dataset to identify novel reference genes suitable for gene expression analysis during zebrafish early developmental stages. We first selected 197 most stably expressed genes from an RNA-Seq dataset (29,291 genes in total), according to the ratio of their maximum to minimum RPKM values. Among the 197 genes, 4 genes with moderate expression levels and the least variation throughout 9 developmental stages were identified as candidate reference genes. Using four independent statistical algorithms (delta-CT, geNorm, BestKeeper and NormFinder), the stability of qRT-PCR expression of these candidates was then evaluated and compared to that of actb1 and actb2, two commonly used zebrafish reference genes. Stability rankings showed that two genes, namely mobk13 (mob4) and lsm12b, were more stable than actb1 and actb2 in most cases. To further test the suitability of mobk13 and lsm12b as novel reference genes, they were used to normalize three well-studied target genes. The results showed that mobk13 and lsm12b were more suitable than actb1 and actb2 with respect to zebrafish early development. We recommend mobk13 and lsm12b as new optimal reference genes for zebrafish qRT-PCR analysis during embryogenesis and early larval stages. PMID:26891128
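The zebrafish study above shortlists candidates from RNA-Seq by the ratio of each gene's maximum to minimum expression across stages. A minimal sketch of that screening idea; the gene names echo the abstract, but the RPKM values and the minimum-expression cutoff are invented, not the study's exact pipeline:

```python
def screen_stable_genes(rpkm, n=2, min_expr=1.0):
    """Shortlist candidate reference genes from an expression matrix by
    the max/min ratio across conditions: the smaller the ratio, the less
    the gene varies. Genes dipping below min_expr are skipped as too
    lowly expressed to normalize against reliably (illustrative cutoff).

    rpkm: dict gene -> list of RPKM values across developmental stages.
    """
    ratios = {}
    for gene, values in rpkm.items():
        if min(values) < min_expr:
            continue
        ratios[gene] = max(values) / min(values)
    # Smallest ratio first = least variation across stages
    return sorted(ratios, key=ratios.get)[:n]

# Hypothetical RPKM values across four developmental stages.
rpkm = {
    "actb1":  [500.0, 900.0, 300.0, 650.0],  # abundant but variable
    "mobk13": [40.0, 44.0, 41.0, 43.0],      # moderate and flat
    "lsm12b": [30.0, 33.0, 31.0, 32.0],      # moderate and flat
    "lowexp": [0.2, 5.0, 0.1, 2.0],          # too low, filtered out
}
print(screen_stable_genes(rpkm))
```

As in the abstract, the flat moderately expressed genes beat the abundant but variable actin, and poorly expressed genes are excluded before ranking.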
Selection and Validation of Reference Genes for Quantitative Real-time PCR in Gentiana macrophylla.
He, Yihan; Yan, Hailing; Hua, Wenping; Huang, Yaya; Wang, Zhezhi
2016-01-01
Real-time quantitative PCR (RT-qPCR or qPCR) has been extensively applied for analyzing gene expression because of its accuracy, sensitivity, and high throughput. However, an unsuitable choice of reference gene(s) can lead to a misinterpretation of results. We evaluated the stability of 10 candidates - five traditional housekeeping genes (UBC21, GAPC2, EF-1α4, UBQ10, and UBC10) and five novel genes (SAND1, FBOX, PTB1, ARP, and Expressed1) - using the transcriptome data of Gentiana macrophylla. Common statistical algorithms ΔCt, geNorm, NormFinder, and BestKeeper were run with samples collected from plants under various experimental conditions. For normalizing expression levels from tissues at different developmental stages, GAPC2 and UBC21 had the highest rankings. Both SAND1 and GAPC2 proved to be the optimal reference genes for roots from plants exposed to abiotic stresses, while EF-1α4 and SAND1 were optimal when examining expression data from the leaves of stressed plants. Based on a comprehensive ranking of stability under different experimental conditions, we recommend SAND1 and EF-1α4 as the most suitable overall. In this study, to find a suitable reference gene and its real-time PCR assay for G. macrophylla DNA content quantification, we also evaluated three target genes, WRKY30, G10H, and SLS, through qualitative and absolute quantitative PCR with leaves under elicitor-stressed experimental conditions. Arbitrary use of reference genes without previous evaluation can lead to a misinterpretation of the data. Our results will benefit future research on the expression of genes related to secoiridoid biosynthesis in this species under different experimental conditions. PMID:27446172
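The geNorm algorithm cited throughout these records scores each candidate with a stability measure M: the average standard deviation of its pairwise log2 expression ratios with every other candidate. A minimal sketch of that core computation (the full algorithm additionally excludes the least stable gene and recomputes iteratively, which is omitted here); gene names echo the abstract, but the expression values are invented:

```python
import math
import statistics

def genorm_m(expr):
    """geNorm expression-stability measure M (a sketch of the published
    idea): for each gene, M is the mean standard deviation of its log2
    expression ratios with every other candidate across all samples.
    Lower M = more stable.

    expr: dict gene -> list of relative quantities (linear scale).
    """
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(statistics.stdev(ratios))
        m[g] = statistics.mean(sds)
    return m

# Hypothetical relative quantities across four samples.
expr = {
    "SAND1": [1.0, 2.0, 1.5, 1.2],
    "EF1a4": [2.0, 4.1, 2.9, 2.4],   # nearly proportional to SAND1
    "GAPC2": [1.0, 0.3, 2.5, 0.8],   # erratic
}
print(genorm_m(expr))
```

Two genes whose expression rises and falls in proportion keep a near-constant ratio and thus low M, which is why geNorm rewards co-stable pairs such as SAND1/EF-1α4 here.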
Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar
2016-01-01
Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes, including traditional and new generation reference genes, were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of the UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While the PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of the aquaporin genes PIP1;4 and TIP3;1, in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses. PMID:26863232
Müller, Oliver A; Grau, Jan; Thieme, Sabine; Prochaska, Heike; Adlung, Norman; Sorgatz, Anika; Bonas, Ulla
2015-01-01
The Gram-negative bacterium Xanthomonas campestris pv. vesicatoria (Xcv) causes bacterial spot disease of pepper and tomato by direct translocation of type III effector proteins into the plant cell cytosol. Once in the plant cell the effectors interfere with host cell processes and manipulate the plant transcriptome. Quantitative RT-PCR (qRT-PCR) is usually the method of choice to analyze transcriptional changes of selected plant genes. Reliable results depend, however, on measuring stably expressed reference genes that serve as internal normalization controls. We identified the most stably expressed tomato genes based on microarray analyses of Xcv-infected tomato leaves and evaluated the reliability of 11 genes for qRT-PCR studies in comparison to four traditionally employed reference genes. Three different statistical algorithms, geNorm, NormFinder and BestKeeper, concordantly determined the superiority of the newly identified reference genes. The most suitable reference genes encode proteins with homology to PHD finger family proteins and the U6 snRNA-associated protein LSm7. In addition, we identified pepper orthologs and validated several genes as reliable normalization controls for qRT-PCR analysis of Xcv-infected pepper plants. The newly identified reference genes will be beneficial for future qRT-PCR studies of the Xcv-tomato and Xcv-pepper pathosystems, as well as for the identification of suitable normalization controls for qRT-PCR studies of other plant-pathogen interactions, especially, if related plant species are used in combination with bacterial pathogens. PMID:26313760
Xia, Wei; Mason, Annaliese S; Xiao, Yong; Liu, Zheng; Yang, Yaodong; Lei, Xintao; Wu, Xiaoming; Ma, Zilong; Peng, Ming
2014-08-20
The African oil palm (Elaeis guineensis), which is grown in tropical and subtropical regions, is a highly productive oil-bearing crop. For gene expression-based analyses such as reverse transcription-quantitative real time PCR (RT-qPCR), reference genes are essential to provide a baseline with which to quantify relative gene expression. Normalization using reliable reference genes is critical in correctly interpreting expression data from RT-qPCR. In order to identify suitable reference genes in African oil palm, 17 transcriptomes of different tissues obtained from NCBI were systematically assessed for gene expression variation. In total, 53 putative candidate reference genes with coefficient of variation values <3.0 were identified: 18 in reproductive tissue and 35 in vegetative tissue. Analysis for enriched functions showed that approximately 90% of identified genes were clustered in cell component gene functions, and 12 out of 53 genes were traditional housekeeping genes. We selected and validated 16 reference genes chosen from leaf tissue transcriptomes by using RT-qPCR in sets of cold, drought and high salinity treated samples, and ranked expression stability using the statistical algorithms geNorm, NormFinder and BestKeeper. Genes encoding actin, adenine phosphoribosyltransferase and eukaryotic initiation factor 4A were the most stable genes over the cold, drought and high salinity stresses. Identification of stably expressed genes as reference gene candidates from multiple transcriptome datasets was found to be reliable and efficient, and some traditional housekeeping genes were more stably expressed than others. We provide a useful molecular genetic resource for future gene expression studies in African oil palm, facilitating molecular genetics approaches for crop improvement in this species. PMID:24862192
Wu, Zhi-Jun; Tian, Chang; Jiang, Qian; Li, Xing-Hui; Zhuang, Jing
2016-01-01
Tea plant (Camellia sinensis) leaf is an important non-alcoholic beverage resource. The application of quantitative real time polymerase chain reaction (qRT-PCR) has a profound significance for gene expression studies of tea plant, especially when applied to tea leaf development and metabolism. In this study, nine candidate reference genes (i.e., CsACT7, CsEF-1α, CseIF-4α, CsGAPDH, CsPP2A, CsSAND, CsTBP, CsTIP41, and CsTUB) of C. sinensis were cloned. The quantitative expression data of these genes were investigated in five tea leaf developmental stages (i.e., 1st, 2nd, 3rd, 4th, and older leaves) and in normally growing tea leaves subjected to five hormonal stimuli (i.e., ABA, GA, IAA, MeJA, and SA), and gene expression stability was calculated using three common statistical algorithms, namely, geNorm, NormFinder, and BestKeeper. Results indicated that CsTBP and CsTIP41 were the most stable genes in tea leaf development and CsTBP was the best gene under hormonal stimuli; by contrast, the CsGAPDH and CsTUB genes showed the least stability. The gene expression profile of the CsNAM gene was analyzed to confirm the validity of the reference genes in this study. Our data provide a basis for the selection of reference genes for future biological research on the leaf development and hormonal stimuli of C. sinensis. PMID:26813576
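BestKeeper, the third algorithm these records rely on, works directly on raw Ct values: it reports each candidate's Ct standard deviation (genes above ~1 cycle are usually considered inconsistent) and correlates each candidate with the BestKeeper index, the per-sample geometric mean Ct over all candidates. A simplified sketch of both statistics; gene names echo the abstract, but the Ct values are invented:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def bestkeeper_stats(ct):
    """BestKeeper-style descriptive screen (simplified sketch): for each
    candidate, return (Ct standard deviation, correlation with the
    BestKeeper index). ct: dict gene -> list of Ct values per sample."""
    genes = list(ct)
    n = len(next(iter(ct.values())))
    # BestKeeper index: per-sample geometric mean Ct over all candidates
    index = [math.prod(ct[g][i] for g in genes) ** (1.0 / len(genes))
             for i in range(n)]
    return {g: (statistics.stdev(ct[g]), pearson(ct[g], index))
            for g in genes}

# Hypothetical Ct values across four samples.
ct = {
    "CsTBP":   [21.0, 21.2, 20.9, 21.1],
    "CsTIP41": [23.5, 23.7, 23.4, 23.6],
    "CsGAPDH": [18.0, 20.5, 17.2, 21.0],  # drifts by several cycles
}
print(bestkeeper_stats(ct))
```

With these made-up numbers, CsGAPDH's Ct standard deviation exceeds one cycle while CsTBP and CsTIP41 stay well under it, matching the abstract's stability ranking.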
Lin, Yu Ling; Lai, Zhong Xiong
2013-05-01
Accurate profiling of microRNAs (miRNAs) is an essential step for understanding both developmental and physiological functions of miRNAs. Real-time quantitative PCR (qPCR) is being widely used in miRNA expression studies, but choosing a suitable reference gene is a crucial factor for correct analysis of results. To date, there has been no systematic evaluation of qPCR reference genes for the study of miRNAs during somatic embryogenesis (SE) in the longan tree (Dimocarpus longan). Here, the most stably expressed miRNAs in synchronized longan tree embryogenic cultures at different developmental stages were determined using the geNorm and NormFinder algorithms. Validation qPCR experiments were performed for 24 miRNAs together with an snRNA (U6 snRNA), an rRNA (5S rRNA), and three housekeeping genes. It was found that small RNAs had better expression stability than protein-coding genes, and dlo-miR24 was identified as the most reliable reference gene, followed by dlo-miR168a*, dlo-miR2089*-1 and 5S rRNA. dlo-miR24 was recommended as a normalizer if only a single reference gene was to be used, while the combination of dlo-miR156c, dlo-miR2089*-1 and 5S rRNA was preferred to normalize miRNA expression data during longan SE. PMID:23454294
Song, Liang; Li, Tong; Fan, Li; Shen, Xiao-Ye; Hou, Cheng-Lin
2016-04-01
The stability of reference genes plays a vital role in real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) analysis, which is generally regarded as a convenient and sensitive tool for the analysis of gene expression. A well-known medicinal fungus, Shiraia bambusicola, has great potential in the pharmaceutical, agricultural and food industries, but its suitable reference genes have not yet been determined. In the present study, 11 candidate reference genes in S. bambusicola were first evaluated and validated comprehensively. To identify the suitable reference genes for qRT-PCR analysis, three software-based algorithms, geNorm, NormFinder and BestKeeper, were applied to rank the tested genes. RNA samples were collected from seven fermentation stages using different media (potato dextrose or Czapek medium) and under different light conditions (12-h light/12-h dark and all-dark). The three most appropriate reference genes, ubi, tfc and ags, were able to normalize the qRT-PCR results under the culturing conditions of 12-h light/12-h dark, whereas the other three genes, vac, gke and acyl, performed better in the culturing conditions of all-dark growth. Therefore, under different light conditions, at least two reference genes (ubi and vac) could be employed to assure the reliability of qRT-PCR results. For both the natural culture medium (the most appropriate genes of this group: ubi, tfc and ags) and the chemically defined synthetic medium (the most stable genes of this group: tfc, vac and ef), the tfc gene remained the best gene used for normalizing the gene expression found with qRT-PCR. It is anticipated that these results would improve the selection of suitable reference genes for qRT-PCR assays and lay the foundation for an accurate analysis of gene expression in S. bambusicola. PMID:26721832
Niu, Longjian; Tao, Yan-Bin; Chen, Mao-Sheng; Fu, Qiantang; Li, Chaoqiong; Dong, Yuling; Wang, Xiulan; He, Huiying; Xu, Zeng-Fu
2015-01-01
Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A2 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi. PMID:26047338
Systolic algorithms and their implementation
Kung, H.T.
1984-01-01
Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of the Cholesky decomposition of covariance matrices, a crucial computation in adaptive signal processing. Following this are discussions of issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation.
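The elementary operation such a triangularization array pipelines is the Givens rotation, which zeroes one subdiagonal entry using only two rows. A minimal sequential Python sketch (the systolic array would evaluate these same rotations in parallel, one cell per matrix entry):

```python
import math

def givens_triangularize(a):
    """Reduce a square matrix to upper-triangular form with Givens
    rotations: each 2x2 rotation zeroes one subdiagonal entry while
    preserving the matrix's QR-equivalence."""
    n = len(a)
    r = [row[:] for row in a]          # work on a copy
    for j in range(n):                 # zero column j below the diagonal
        for i in range(n - 1, j, -1):  # from the bottom row upwards
            if r[i][j] == 0.0:
                continue
            h = math.hypot(r[i - 1][j], r[i][j])
            c, s = r[i - 1][j] / h, r[i][j] / h
            for k in range(n):         # apply the rotation to both rows
                top, bot = r[i - 1][k], r[i][k]
                r[i - 1][k] = c * top + s * bot
                r[i][k] = -s * top + c * bot
    return r

m = [[4.0, 1.0, 2.0],
     [2.0, 3.0, 1.0],
     [1.0, 2.0, 5.0]]
tri = givens_triangularize(m)
```

Because each rotation touches only two adjacent rows, the updates map naturally onto a grid of simple cells with nearest-neighbor communication, which is the essence of the systolic design.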
A fast meteor detection algorithm
NASA Astrophysics Data System (ADS)
Gural, P.
2016-01-01
A low latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs made for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
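The MTP idea, keeping each pixel's maximum value over a block of frames so a moving bright streak survives as a track in a single image, can be sketched directly. A minimal Python illustration with hypothetical grayscale frames (implementations typically also record the frame index of each maximum for timing recovery; that is omitted here):

```python
def mtp_compress(frames):
    """Maximum Temporal Pixel (MTP) compression sketch: collapse a block
    of grayscale video frames into one image by taking each pixel's
    maximum over time. Frame layout (list of rows) is an assumption
    for illustration."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Three 2x3 frames; a bright pixel (255) moves one column per frame
frames = [
    [[10, 12, 11], [255, 13, 10]],
    [[11, 10, 12], [12, 255, 11]],
    [[12, 11, 10], [10, 12, 255]],
]
track = mtp_compress(frames)
```

The compressed image can then be thresholded once per block instead of once per frame, which is what makes MTP attractive as a low-cost front end for the detection module.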
NASA Technical Reports Server (NTRS)
Loewenstein, M.; Greenblatt, B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)
2000-01-01
De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analyses of these events require knowledge of the initial, or pre-vortex, state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.
A spectral canonical electrostatic algorithm
NASA Astrophysics Data System (ADS)
Webb, Stephen D.
2016-03-01
Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton’s principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this algorithm using a two-body problem as an example of its energy- and momentum-conserving properties.
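The heating contrast the abstract describes, symplectic versus non-symplectic integration of a Hamiltonian system, is easy to demonstrate on a single harmonic oscillator. A minimal Python sketch in which first-order semi-implicit (symplectic) Euler stands in for the paper's variational scheme:

```python
def integrate(steps, dt, symplectic):
    """Integrate a unit harmonic oscillator (H = (p^2 + q^2)/2) from
    (q, p) = (1, 0) and return the final energy."""
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            p -= dt * q         # kick using the current position...
            q += dt * p         # ...then drift with the *updated* momentum
        else:
            q_new = q + dt * p  # explicit Euler: both updates use old state
            p -= dt * q
            q = q_new
    return 0.5 * (p * p + q * q)

e_symp = integrate(10000, 0.01, True)   # stays near the initial energy 0.5
e_expl = integrate(10000, 0.01, False)  # grows by (1 + dt^2) every step
```

Explicit Euler multiplies the energy by exactly (1 + dt^2) per step, the discrete analogue of grid heating, while the symplectic update conserves a nearby "shadow" Hamiltonian, so its energy error stays bounded forever.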
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of six benchmark problems, and the experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591
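The repair step in CMBOA recombines infeasible individuals with their nearest nondominated feasible neighbours; the underlying bookkeeping is a Pareto-dominance test. A minimal Python sketch for a minimization problem (names and data are illustrative):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Four objective vectors; (3, 3) is dominated by (2, 2)
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = nondominated(pts)
```

This quadratic-time filter is the simplest correct version; population-based algorithms like CMBOA or NSGA-II use faster nondominated-sorting variants of the same test.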
Innovations in Lattice QCD Algorithms
Konstantinos Orginos
2006-06-25
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiments in discovering new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Optimisation algorithms for microarray biclustering.
Perrin, Dimitri; Duhamel, Christophe
2013-01-01
In providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been largely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight on biological processes. A common approach is to consider the genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g., based on the expression levels. In this paper, using a previously-evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, "Propagate", which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified. PMID:24109756
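The "Propagate" neighbourhood, solutions with one more or one fewer active condition, amounts to a simple local search over condition subsets. A minimal Python sketch with a hypothetical additive bicluster score (the paper's actual weighting scheme is not reproduced here):

```python
def propagate(score, n_conditions, active):
    """Local search over sets of active conditions: repeatedly evaluate
    neighbours obtained by toggling one condition on or off, move
    whenever one improves the score, and stop when none does."""
    best = frozenset(active)
    best_s = score(best)
    improved = True
    while improved:
        improved = False
        for c in range(n_conditions):
            cand = best ^ {c}          # toggle condition c on or off
            if cand:                   # keep at least one condition active
                s = score(cand)
                if s > best_s:
                    best, best_s, improved = cand, s, True
    return best, best_s

# Hypothetical score: each active condition contributes a fixed weight
weights = [3, -1, 2, -2]
score = lambda active: sum(weights[c] for c in active)
best, best_s = propagate(score, 4, {1})
```

Starting from the poor solution {1}, the search climbs to the optimal subset {0, 2}; for real bicluster scores the landscape is rougher, which is why the paper combines this heuristic with genetic-algorithm search.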
A possible hypercomputational quantum algorithm
NASA Astrophysics Data System (ADS)
Sicard, Andres; Velez, Mario; Ospina, Juan
2005-05-01
The term 'hypermachine' denotes any data processing device (theoretical or physically realizable) capable of carrying out tasks that cannot be performed by a Turing machine. We present a possible quantum algorithm for a classically non-computable decision problem, Hilbert's tenth problem; more specifically, we present a possible hypercomputation model based on quantum computation. Our algorithm is inspired by the one proposed by Tien D. Kieu, but we have selected the infinite square well instead of the (one-dimensional) simple harmonic oscillator as the underlying physical system. Our model exploits the quantum adiabatic process and the characteristics of the representation of the dynamical Lie algebra su(1,1) associated with the infinite square well.
MUSIC algorithms for rebar detection
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela
2013-12-01
The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is at a relatively high level. To overcome this drawback, a new technique is proposed here, based on the idea of applying a two-stage MUSIC algorithm. In the first stage, strong scatterers are detected. Then, information concerning their number and location is employed in a second stage that focuses only on the weak scatterers. The role of an adequate scattering model is emphasized, as it drastically improves detection performance in realistic scenarios.
Systolic systems: algorithms and complexity
Chang, J.H.
1986-01-01
This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.
Algorithms Could Automate Cancer Diagnosis
NASA Technical Reports Server (NTRS)
Baky, A. A.; Winkler, D. G.
1982-01-01
Five new algorithms form a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. The objective of the procedure is to assign each cell an atypia status index (ASI), which quantifies its level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.
Algorithms of NCG geometrical module
NASA Astrophysics Data System (ADS)
Gurevich, M. I.; Pryanichnikov, A. V.
2012-12-01
The methods and algorithms of the versatile NCG geometrical module used in the MCU code system are described. The NCG geometrical module is based on the Monte Carlo method and intended for solving equations of particle transport. The versatile combinatorial body method, the grid method, and methods of equalized cross sections and grain structures are used for description of the system geometry and calculation of trajectories.
Computed laminography and reconstruction algorithm
NASA Astrophysics Data System (ADS)
Que, Jie-Min; Cao, Da-Quan; Zhao, Wei; Tang, Xiao; Sun, Cui-Li; Wang, Yan-Fang; Wei, Cun-Feng; Shi, Rong-Jian; Wei, Long; Yu, Zhong-Qiang; Yan, Yong-Lian
2012-08-01
Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL, and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with various weighting functions by computer simulation with a digital phantom. The results prove that the ART algorithm is a good choice for the CL system.
Efficient algorithms for proximity problems
Wee, Y.C.
1989-01-01
Computational geometry is currently a very active area of research in computer science because of its applications to VLSI design, database retrieval, robotics, pattern recognition, etc. The author studies a number of proximity problems which are fundamental in computational geometry. Optimal or improved sequential and parallel algorithms for these problems are presented. Along the way, some relations among the proximity problems are also established. Chapter 2 presents an O(N log^2 N) time divide-and-conquer algorithm for solving the all pairs geographic nearest neighbors problem (GNN) for a set of N sites in the plane under any L_p metric. Chapter 3 presents an O(N log N) divide-and-conquer algorithm for computing the angle restricted Voronoi diagram for a set of N sites in the plane. Chapter 4 introduces a new data structure for the dynamic version of GNN. Chapter 5 defines a new formalism called the quasi-valid range aggregation. This formalism leads to a new and simple method for reducing non-range query-like problems to range queries and often to orthogonal range queries, with immediate applications to the attracted neighbor and the planar all-pairs nearest neighbors problem. Chapter 6 introduces a new approach for the construction of the Voronoi diagram. Using this approach, we design an O(log N) time, O(N) processor algorithm for constructing the Voronoi diagram with the L_1 and L_∞ metrics on a CREW PRAM machine. Even though the GNN and the Delaunay triangulation (DT) do not have an inclusion relation, we show, using some range type queries, how to efficiently construct DT from the GNN relations over a constant number of angular ranges.
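A brute-force O(N^2) reference for the all-pairs nearest neighbors problem under an L_p metric makes the speedup targets concrete. A minimal Python sketch (the thesis's divide-and-conquer algorithms compute such relations far faster, in O(N log N) to O(N log^2 N) time):

```python
def all_pairs_nn(sites, p=2):
    """Brute-force all-pairs nearest neighbors under an L_p metric:
    for every site, find the index of its closest other site."""
    def dist(a, b):
        return (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)
    n = len(sites)
    return {i: min((k for k in range(n) if k != i),
                   key=lambda k: dist(sites[i], sites[k]))
            for i in range(n)}

# Two well-separated clusters: each site pairs with its cluster-mate
sites = [(0, 0), (1, 0), (5, 5), (6, 5)]
nn = all_pairs_nn(sites)
```

This quadratic baseline is also a convenient correctness oracle when testing a faster divide-and-conquer implementation on random point sets.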
Algorithm Helps Monitor Engine Operation
NASA Technical Reports Server (NTRS)
Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.
1995-01-01
Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.
Algorithmic Strategies in Combinatorial Chemistry
GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN
2000-08-01
Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
Algorithm validation using multicolor phantoms.
Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong
2012-06-01
We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in a HSI scene, the "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. Here we also introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
A novel stochastic optimization algorithm.
Li, B; Jiang, W
2000-01-01
This paper presents a new stochastic approach SAGACIA based on proper integration of simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA together. It has the following features: (1) it is not the simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can be easily used to solve optimization problems either with continuous variables or with discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimizing of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742
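The simulated-annealing component of a SAGACIA-style hybrid can be sketched in isolation. A minimal Python illustration on a one-dimensional multimodal function (the actual method interleaves this with GA recombination and chemotaxis moves on a whole population, which are omitted here):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=4000, seed=1):
    """Minimal SA sketch: Gaussian moves, Metropolis acceptance,
    geometric cooling. Parameter values are illustrative."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, best_f = x, fx
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)
        fy = f(y)
        # downhill moves always accepted; uphill with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling
    return best, best_f

# Multimodal test function with its global minimum f(0) = 0 at x = 0
obj = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
sol, val = simulated_annealing(obj, x0=3.0)
```

Because downhill moves are always accepted while uphill moves become rarer as the temperature falls, the search escapes the shallow local minima of this function early on; property (4) in the abstract is SAGACIA's claim that the hybrid preserves this escape ability while converging faster than SA alone.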
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using optimal imaging parameters. A traditional imaging algorithm achieves the best focusing effect but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes adopt consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
An algorithm for generating abstract syntax trees
NASA Technical Reports Server (NTRS)
Noonan, R. E.
1985-01-01
The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.
Teaching Multiplication Algorithms from Other Cultures
ERIC Educational Resources Information Center
Lin, Cheng-Yao
2007-01-01
This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
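The Russian algorithm mentioned above (often called Russian peasant multiplication) is compact enough to state directly. A minimal Python sketch; the halving-and-doubling steps implement the binary expansion of one factor:

```python
def russian_peasant(a, b):
    """Russian peasant multiplication: halve one factor, double the
    other, and sum the doubled values whenever the halved factor is
    odd. The odd steps pick out the binary digits of a, so the sum
    is exactly a * b."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # current binary digit of a is 1
            total += b
        a //= 2          # halve, discarding the remainder
        b *= 2           # double
    return total
```

For 13 x 12, the halved column runs 13, 6, 3, 1; the odd rows contribute 12 + 48 + 96 = 156, which is why the method needs only halving, doubling, and addition.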
An algorithm for segmenting range imagery
Roberts, R.S.
1997-03-01
This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.
Algorithms and Requirements for Measuring Network Bandwidth
Jin, Guojun
2002-12-08
This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The paper also discusses a number of important terminologies and issues in network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.
TVFMCATS. Time Variant Floating Mean Counting Algorithm
Huffman, R.K.
1999-05-01
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
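The abstract does not disclose the patented algorithm's rules, but a generic time-variant floating-mean count-rate meter can be sketched: average over a growing window of recent counts, and restart the window when a new count deviates so far from the current mean that a genuine rate change is likely. A hypothetical Python illustration (the window size and the 3-sigma Poisson test are assumptions, not the WSRC design):

```python
def floating_mean_rate(counts, max_window=10, change_sigma=3.0):
    """Generic time-variant floating-mean count-rate sketch. The mean is
    taken over up to max_window recent one-second counts; if a count
    deviates from the current mean by more than change_sigma Poisson
    standard deviations (sigma ~ sqrt(mean)), the window restarts so
    the meter responds quickly to a real rate change."""
    window, rates = [], []
    for c in counts:
        if window:
            m = sum(window) / len(window)
            if abs(c - m) > change_sigma * max(m, 1.0) ** 0.5:
                window = []          # step change detected: restart mean
        window.append(c)
        window = window[-max_window:]
        rates.append(sum(window) / len(window))
    return rates

# Count rate steps from ~100 cps to ~400 cps halfway through
counts = [100, 102, 98, 101, 99, 400, 402, 398, 401, 399]
rates = floating_mean_rate(counts)
```

The appeal of such meters is that they average long for a quiet source (low statistical noise) yet track a step change within one sample instead of dragging the old mean along.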
Algorithmic formulation of control problems in manipulation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1975-01-01
The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.
Efficient Algorithm for Rectangular Spiral Search
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Breckenridge, William
2008-01-01
An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
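The memoryless spiral generation the abstract describes can be sketched from just the current position, heading, and leg length. A minimal Python illustration of a square spiral around the origin (a simplification of the rectangular case):

```python
def spiral(n):
    """First n grid coordinates of an outward square spiral from the
    origin. No history of previous points is stored: each point follows
    from the current position, heading, and leg length, and the spiral
    can grow indefinitely."""
    x, y = 0, 0
    dx, dy = 1, 0              # initial heading: +x
    leg, step, turns = 1, 0, 0
    out = []
    for _ in range(n):
        out.append((x, y))
        x, y = x + dx, y + dy
        step += 1
        if step == leg:        # finished the current leg: turn left
            step = 0
            dx, dy = -dy, dx
            turns += 1
            if turns == 2:     # leg length grows every second turn
                turns = 0
                leg += 1
    return out

pts = spiral(9)                # covers the 3x3 block around the origin
```

The first 9 points tile the 3x3 neighborhood and the first 25 tile the 5x5 block, so the search can simply keep generating points until the target is found, expanding the covered rectangle as it goes.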
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of the classical washout filters is that they are tuned by the worst-case scenario method. This is based on trial and error, is affected by driving and programming experience, and is the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, production of false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers for this reason. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking the vestibular sensation error between real and simulated cases into account, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching its physical limitations.
A Robustly Stabilizing Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Ackmece, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
Algorithmic Processes for Increasing Design Efficiency.
ERIC Educational Resources Information Center
Terrell, William R.
1983-01-01
Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
An algorithm on distributed mining association rules
NASA Astrophysics Data System (ADS)
Xu, Fan
2005-12-01
With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas. It is a critical task to mine association rules in distributed databases. Algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement of the CD algorithm. However, CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, however, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension in parallel computation.
Improvements of HITS Algorithms for Spam Links
NASA Astrophysics Data System (ADS)
Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao
The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
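The core HITS iteration the paper builds on is simple to state: authority scores are sums of the hub scores of in-linking pages, hub scores are sums of the authority scores of out-linked pages, with normalisation after each round. A minimal sketch of that base iteration (the trust-score and linkfarm-detection extensions are not modelled; the function name and edge-list representation are our own):

```python
import math

def hits(edges, n, iters=50):
    """Kleinberg's HITS on a directed graph given as a (src, dst) edge list.
    Returns (hub, authority) score lists, each L2-normalised."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        # authority score: sum of hub scores of pages linking in
        auth = [0.0] * n
        for s, d in edges:
            auth[d] += hub[s]
        # hub score: sum of the new authority scores of pages linked to
        hub = [0.0] * n
        for s, d in edges:
            hub[s] += auth[d]
        # normalise both vectors so the iteration converges
        for v in (auth, hub):
            norm = math.sqrt(sum(x * x for x in v)) or 1.0
            for i in range(n):
                v[i] /= norm
    return hub, auth
```

On a toy graph where pages 0 and 1 both link to page 2, page 2 emerges as the sole authority and pages 0 and 1 as equal hubs, which is the behaviour spam linkfarms exploit by densely interlinking.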
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
Improvements to the stand and hit algorithm
Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.
1994-12-31
The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. As well, we present the undetected-first rule for determining the hit constraints.
Parameter incremental learning algorithm for neural networks.
Wan, Sheng; Banta, Larry E
2006-11-01
In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as, and as easy to use as, the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Learning Intelligent Genetic Algorithms Using Japanese Nonograms
ERIC Educational Resources Information Center
Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen
2012-01-01
An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In the process of raisin production, a variety of color impurities occur, which need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors and external features such as mildew and spots, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; in accordance with these, the image data were divided into different categories, so that the categories of abnormal colors became distinct. By use of this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
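The clustering step can be illustrated with a bare-bones 1-D k-means on pixel intensities, splitting "normal" from "abnormal colour" values. This is a sketch only: the paper's features and pre-processing are richer, and the function name and parameters are ours.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: assign each value to its nearest centre,
    then move each centre to the mean of its cluster, and repeat."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # keep the old centre if a cluster happens to be empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Fed a mix of dark and bright intensities, the two centres settle near the two colour populations, which is what lets the sorter label the abnormal class adaptively rather than with a fixed threshold.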
Validation of Reference Genes for Real-Time PCR of Reproductive System in the Black Tiger Shrimp
Leelatanawit, Rungnapa; Klanchui, Amornpan; Uawisetwathana, Umaporn; Karoonuthaisiri, Nitsara
2012-01-01
Gene expression of the reproductive system of the black tiger shrimp (Penaeus monodon) has been widely studied to address the poor maturation problem in captivity. However, a systematic evaluation of reference genes in quantitative real-time PCR (qPCR) for P. monodon reproductive organs is lacking. In this study, the stability of four potential reference genes (18s rRNA, GAPDH, β-actin, and EF1-α) was examined in reproductive tissues under various conditions using bioinformatic tools: NormFinder and geNorm. For NormFinder, EF1-α and GAPDH ranked first and second as the most stable genes in testis groups, whereas GAPDH and EF1-α did for ovaries from wild-caught broodstock and domesticated groups. EF1-α and β-actin ranked first and second for the eyestalk-ablated ovaries. For geNorm, EF1-α and GAPDH had the best stability in all testis and ovary samples from domesticated groups, whereas EF1-α and β-actin were the best for ovaries from wild-caught and eyestalk-ablated groups. Moreover, the expression levels of two well-known reproductive genes, Dmc1 and Vitellogenin, were used to validate these reference genes. When normalized to EF1-α, the expected expression patterns were obtained in all cases. Therefore, this work suggests that EF1-α is the most versatile reference gene for qPCR analysis of the reproductive system in P. monodon. PMID:23285145
Borowska, D; Rothwell, L; Bailey, R A; Watson, K; Kaiser, P
2016-02-01
Quantitative polymerase chain reaction (qPCR) is a powerful technique for quantification of gene expression, especially genes involved in immune responses. Although qPCR is a very efficient and sensitive tool, variations in the enzymatic efficiency, quality of RNA and the presence of inhibitors can lead to errors. Therefore, qPCR needs to be normalised to obtain reliable results and allow comparison. The most common approach is to use reference genes as internal controls in qPCR analyses. In this study, expression of seven genes, including β-actin (ACTB), β-2-microglobulin (B2M), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), β-glucuronidase (GUSB), TATA box binding protein (TBP), α-tubulin (TUBAT) and 28S ribosomal RNA (r28S), was determined in cells isolated from chicken lymphoid tissues and stimulated with three different mitogens. The stability of the genes was measured using geNorm, NormFinder and BestKeeper software. The results from both geNorm and NormFinder were that the three most stably expressed genes in this panel were TBP, GAPDH and r28S. BestKeeper did not generate clear answers because of the highly heterogeneous sample set. Based on these data we will include TBP in future qPCR normalisation. The study shows the importance of appropriate reference gene normalisation in other tissues before qPCR analysis. PMID:26872627
Farrokhi, A; Eslaminejad, M B; Nazarian, H; Moradmand, A; Samadian, A; Akhlaghi, A
2012-01-01
Reverse transcription quantitative PCR (RT-qPCR) is one of the best methods for the study of mesenchymal stem cell (MSC) differentiation by gene expression analysis. This technique needs appropriate reference or housekeeping genes (HKGs) to normalize the expression of the genes of interest. In the present study the expression stability of six widely used HKGs, including Actb, Btub, Hprt, B2m, Gusb and Tfrc, was investigated during rat MSC differentiation into osteocyte, adipocyte and chondrocyte lineages using geNorm and NormFinder software. RT-qPCR data analyzed by geNorm revealed different sets of suitable reference genes for each cell type. NormFinder also showed similar results. Analysis of the combined data of MSCs with each differentiated cell type revealed a considerable shift in the expression of some reference genes during differentiation; for example, Gusb and B2m were among the least stable genes in MSCs but the most stable in chondrocytes. Normalization of specific genes for each lineage by different reference genes showed considerable differences in their expression fold change. In conclusion, for the appropriate analysis of gene expression during rat MSC differentiation and also for monitoring differentiation procedures, it is better to consider reference gene stability precisely and select suitable reference genes for each purpose. PMID:22595340
Liu, Chenlin; Wu, Guangting; Huang, Xiaohang; Liu, Shenghao; Cong, Bailin
2012-05-01
Antarctic ice alga Chlamydomonas sp. ICE-L can endure extreme low temperature and high salinity stress under freezing conditions. To elucidate the molecular acclimation mechanisms using gene expression analysis, the expression stabilities of ten housekeeping genes of Chlamydomonas sp. ICE-L during freezing stress were analyzed. Some discrepancies were detected in the ranking of the candidate reference genes between geNorm and NormFinder programs, but there was substantial agreement between the groups of genes with the most and the least stable expression. RPL19 was ranked as the best candidate reference genes. Pairwise variation (V) analysis indicated the combination of two reference genes was sufficient for qRT-PCR data normalization under the experimental conditions. Considering the co-regulation between RPL19 and RPL32 (the most stable gene pairs given by geNorm program), we propose that the mean data rendered by RPL19 and GAPDH (the most stable gene pairs given by NormFinder program) be used to normalize gene expression values in Chlamydomonas sp. ICE-L more accurately. The example of FAD3 gene expression calculation demonstrated the importance of selecting an appropriate category and number of reference genes to achieve an accurate and reliable normalization of gene expression during freeze acclimation in Chlamydomonas sp. ICE-L. PMID:22527038
Parallelized Dilate Algorithm for Remote Sensing Image
Zhang, Suli; Hu, Haoran; Pan, Xin
2014-01-01
As an important algorithm, the dilate algorithm can give a more connective view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data quantities have become very large. This can slow the algorithm down or make it impossible to obtain a result within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
Efficient demultiplexing algorithm for noncontiguous carriers
NASA Technical Reports Server (NTRS)
Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.
1992-01-01
A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, such as satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). Comparison of the transform technique used in this algorithm with discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
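The computational advantage claimed over the DFT/FFT comes from the Walsh-Hadamard butterfly, which uses only additions and subtractions. A plain (unnormalised, non-generalised) fast Walsh-Hadamard transform can be sketched as follows; the polyphase filtering stage and the generalisation used in the paper are omitted:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a
    power of two. Pure add/subtract butterflies, O(n log n) operations."""
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired elements
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

For example, a constant input concentrates into the first coefficient, mirroring how the DFT maps a constant onto its DC bin, but without a single multiplication.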
Improved piecewise orthogonal signal correction algorithm.
Feudale, Robert N; Tan, Huwei; Brown, Steven D
2003-10-01
Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It will be demonstrated how the mathematics of this updated OSC algorithm were derived from the previous version and why some OSC versions may not be as appropriate to use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746
Is there a best hyperspectral detection algorithm?
NASA Astrophysics Data System (ADS)
Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.
2009-05-01
A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions, does not necessarily translate to superiority in real-world applications.
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences impairing the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc. all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately, and process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This inevitably makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection in this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without loss of fine structures and without a general blurring of the image. It consists of an iterative, parameter-free filtering algorithm, suitable for batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
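The shortcoming of plain median filtering described above can be contrasted with a selective variant: replace a pixel by its neighbourhood median only when it deviates strongly, which removes isolated "snow" dots while leaving sharp structures alone. This is a hedged sketch of the idea, not the NECTAR algorithm itself (which is iterative and parameter-free); the function name and explicit threshold are ours.

```python
def despeckle(img, thresh):
    """One pass of a selective median filter over a 2-D list of pixel
    values: a pixel is replaced by the median of its 8 neighbours only
    when it deviates from that median by more than `thresh`."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(img[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if not (dy == 0 and dx == 0))
            med = (neigh[3] + neigh[4]) / 2  # median of 8 values
            if abs(img[y][x] - med) > thresh:
                out[y][x] = med
    return out
```

A single bright outlier in a flat region is replaced, while pixels that agree with their surroundings, including those on genuine edges, are left untouched.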
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
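The Haar basis that the dissertation identifies with hierarchical radiosity is built from one elementary analysis step: local averages form the coarse level and local differences form the detail coefficients. A one-level sketch (unnormalised averaging convention; the function name is ours):

```python
def haar1d(a):
    """One level of Haar analysis on an even-length sequence:
    returns (averages, details), each half the input length."""
    avg = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
    det = [(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)]
    return avg, det
```

Recursing on the averages yields the full hierarchy; smooth inputs produce near-zero details, which is exactly the sparsity that makes the hierarchical (O(k^2 + n)) algorithms cheap.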
ALFA: Automated Line Fitting Algorithm
NASA Astrophysics Data System (ADS)
Wesson, R.
2015-12-01
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
Newman-Janis Algorithm Revisited
NASA Astrophysics Data System (ADS)
Brauer, O.; Camargo, H. A.; Socolovsky, M.
2015-01-01
The purpose of the present article is to show that the Newman-Janis and Newman et al. algorithms, used to derive the Kerr and Kerr-Newman metrics respectively, automatically lead to the extension of the initial non-negative polar radial coordinate r to a cartesian coordinate running from -∞ to +∞, thus introducing in a natural way the region r < 0 in the above spacetimes. Using Boyer-Lindquist and ellipsoidal coordinates, we discuss some geometrical aspects of the positive and negative regions of r, like horizons, ergosurfaces, and foliation structures.
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta
2014-10-01
An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed, based on the intensity changes of the fundus reflex.
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcrafts. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both, at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme namely, Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning based approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However it is desirable to have a large number of training examples especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
Ordered subsets algorithms for transmission tomography.
Erdogan, H; Fessler, J A
1999-11-01
The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
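The quantity all of these algorithms compute is the LRU stack distance; the serial baseline they parallelize can be sketched as follows. A reference's distance is the number of distinct addresses touched since its last use, and a cache of size C hits exactly the references with distance less than C, so one pass yields hit ratios for every cache size at once. This is an O(n·m) illustration with our own names, not the paper's implementation:

```python
def stack_distances(trace):
    """LRU stack-distance profile of an address trace.
    Returns one distance per reference (inf on first use)."""
    stack = []   # LRU stack: most-recently-used address at the front
    dists = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)   # depth = number of distinct addrs since last use
            stack.remove(addr)
        else:
            d = float('inf')        # cold miss in every cache size
        dists.append(d)
        stack.insert(0, addr)       # promote to most-recently-used
    return dists
```

For the trace a, b, a, c, b the distances are inf, inf, 1, inf, 2: the second reference to `a` hits in any cache of size 2 or more, the second reference to `b` only in caches of size 3 or more.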
Algorithms versus architectures for computational chemistry
NASA Technical Reports Server (NTRS)
Partridge, H.; Bauschlicher, C. W., Jr.
1986-01-01
The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.
A compilation of jet finding algorithms
Flaugher, B.; Meier, K.
1992-12-31
Technical descriptions of jet finding algorithms currently in use in p-pbar collider experiments (CDF, UA1, UA2), e+e- experiments and Monte Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clear differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E_T and P_T of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five incarnations of this approach are described.
A Synthesized Heuristic Task Scheduling Algorithm
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm chooses tasks at three levels of priority: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are scheduled first. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244
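The three-level prioritizing described above amounts to a sort with tiered keys. The tuple encoding below (criticality flag, path length to the exit task, predecessor count) is an illustrative stand-in for HCPPEFT's actual priority computation, not the paper's formula:

```python
def rank_tasks(tasks):
    """Order tasks by the three-level priority sketched above.

    tasks: dict mapping task name -> (is_critical, path_len_to_exit,
    n_predecessors). Critical tasks come first, then tasks with a
    longer path to the exit task, then tasks with fewer predecessors.
    """
    return sorted(tasks,
                  key=lambda t: (not tasks[t][0],   # critical tasks first
                                 -tasks[t][1],      # longer exit path first
                                 tasks[t][2]))      # fewer predecessors first
```

Python's tuple comparison applies the three criteria in order, so each level only breaks ties left by the previous one.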
Search properties of some sequential decoding algorithms.
NASA Technical Reports Server (NTRS)
Geist, J. M.
1973-01-01
Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.
An efficient parallel termination detection algorithm
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
The Aquarius Salinity Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David
2012-01-01
The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.
Region processing algorithm for HSTAMIDS
NASA Astrophysics Data System (ADS)
Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, Dominic K. C.
2006-05-01
The AN/PSS-14 (a.k.a. HSTAMIDS) has been performance-tested in South East Asia (Thailand), southern Africa (Namibia) and, in November of 2005, in South West Asia (Afghanistan). The system has proven effective in manual demining, particularly in discriminating indigenous metallic artifacts in the minefields. The Humanitarian Demining Research and Development (HD R&D) Program has sought to further improve the system to address specific needs in several areas. One particular area of these improvement efforts is the development of a mine detection/discrimination improvement software algorithm called Region Processing (RP). RP is an innovative processing technique designed to work on a set of data acquired in a unique sweep pattern over a region of interest (ROI). The RP team is a joint effort of three universities (University of Florida, University of Missouri, and Duke University) and is currently led by the University of Florida. This paper describes the state-of-the-art Region Processing algorithm, its implementation in the current HSTAMIDS system, and its most recent test results.
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point and to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Digital Shaping Algorithms for GODDESS
NASA Astrophysics Data System (ADS)
Lonsdale, Sarah-Jane; Cizewski, Jolie; Ratkiewicz, Andrew; Pain, Steven
2014-09-01
Gammasphere-ORRUBA: Dual Detectors for Experimental Structure Studies (GODDESS) combines the highly segmented position-sensitive silicon strip detectors of ORRUBA with up to 110 Compton-suppressed HPGe detectors from Gammasphere for high-resolution particle-gamma coincidence measurements. The signals from the silicon strip detectors have position-dependent rise times and require different forms of pulse shaping for optimal position and energy resolution. Traditionally, a compromise was achieved with a single shaping of the signals performed by conventional analog electronics. However, there are benefits to digital acquisition of the detector signals, including the ability to apply multiple custom shaping algorithms to the same signal, each optimized for position or energy, in addition to providing a flexible triggering system and a reduction in the rate limitation due to pile-up. Recent developments toward creating digital signal processing algorithms for GODDESS will be discussed. This work is supported in part by the U.S. D.O.E. and N.S.F.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
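A curvature update in the spirit of RES can be sketched as follows: subtracting a small multiple of the step from the (stochastic) gradient difference and adding a multiple of the identity after the standard BFGS correction keeps the Hessian approximation's eigenvalues bounded away from zero. This is an illustrative sketch under those assumptions, not the paper's exact formulation:

```python
import numpy as np

def regularized_bfgs_update(B, s, y, delta=0.1):
    """One regularized BFGS curvature update (RES-style sketch).

    B: current Hessian approximation, s: step x_{t+1} - x_t,
    y: (possibly stochastic) gradient difference, delta: regularizer.
    The modified difference y_hat = y - delta*s plus the final delta*I
    term yields the regularized secant condition B_new @ s == y while
    keeping B_new's smallest eigenvalue at least delta in this example.
    """
    y_hat = y - delta * s
    Bs = B @ s
    B_new = (B
             + np.outer(y_hat, y_hat) / (y_hat @ s)
             - np.outer(Bs, Bs) / (s @ Bs)
             + delta * np.eye(len(s)))
    return B_new
```

Note that the rank-two correction uses the same algebraic shape as deterministic BFGS; only the regularization terms and the stochastic provenance of `y` differ.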
Ligand Identification Scoring Algorithm (LISA)
Zheng, Zheng; Merz, Kenneth M.
2011-01-01
A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. By comparison with other well-known scoring functions, the results show that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used in order to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING
A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
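The fusion step can be illustrated with a linear weighting of per-algorithm similarity scores. Plain least squares stands in here for the paper's PLSR, and the score matrix and labels are made-up toy data:

```python
import numpy as np

def fuse_scores(S, labels):
    """Learn a linear weighting of per-algorithm similarity scores.

    S: (n_pairs, n_algorithms) matrix of similarity scores,
    labels: +1 for same-face pairs, -1 for different-face pairs.
    Ordinary least squares is used as a stand-in for PLSR, which is
    preferred when the score columns are strongly correlated.
    """
    S1 = np.hstack([S, np.ones((len(S), 1))])   # append an intercept column
    w, *_ = np.linalg.lstsq(S1, labels, rcond=None)
    return w

def fused_decision(w, scores):
    """Classify one face pair from its per-algorithm scores."""
    return np.sign(np.append(scores, 1.0) @ w)
```

With real data the weights would be fit on a training split and validated on held-out pairs, mirroring the paper's jackknife test of generality.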
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
A Probabilistic Cell Tracking Algorithm
NASA Astrophysics Data System (ADS)
Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah
2013-04-01
The research described below was carried out during the EU project Lolight, the development of a low-cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method minimizing a cost function is utilized, which leads to a probabilistic forecast for the movement of thunderstorm cells. In the first step the correlation coefficients between the first and the second density field are computed. To do so, the first field is shifted by all physically allowed shifting vectors. The maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the time difference between the two density fields. To eliminate ambiguities in the determination of directions and velocities, the so-called Random Walker of the Monte Carlo process is used. Using this method a grid point is selected at random. Moreover, one vector out of all predefined shifting vectors is suggested, also at random but with a probability that is related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted; otherwise they are discarded. This process is repeated until the change of the cost function falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted. For this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718
Online Planning Algorithms for POMDPs
Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim
2009-01-01
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080
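The lookahead search common to these online methods can be sketched as a depth-limited expectimax over belief states. The `step` and `U0` interfaces below are generic placeholders, not the API of any specific surveyed algorithm:

```python
def lookahead(belief, depth, actions, step, U0):
    """Depth-limited expectimax over beliefs (generic sketch).

    step(belief, a) -> list of (prob, reward, next_belief) outcomes,
    one per possible observation; U0(belief) gives the leaf value,
    e.g. a lower bound from an offline policy. Returns (value, action).
    """
    if depth == 0:
        return U0(belief), None
    best_v, best_a = float('-inf'), None
    for a in actions:
        # expected value of taking a, then continuing optimally
        v = sum(p * (r + lookahead(b2, depth - 1, actions, step, U0)[0])
                for p, r, b2 in step(belief, a))
        if v > best_v:
            best_v, best_a = v, a
    return best_v, best_a
```

In practice the surveyed methods prune this tree with lower and upper bounds rather than expanding it exhaustively; the sketch shows only the bare lookahead structure.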
[Algorithm for treating preoperative anemia].
Bisbe Vives, E; Basora Macaya, M
2015-06-01
Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduce the rate of transfusions, improve hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or whether we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameters and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or with erythropoietin, with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin target will depend on the type of surgery and the patient's characteristics. PMID:26320341
[A simple algorithm for anemia].
Egyed, Miklós
2014-03-01
The author presents a novel algorithm for anaemia based on the erythrocyte haemoglobin content. The scheme is based on the aberrations of erythropoiesis and not on the pathophysiology of anaemia. The haemoglobin content of one erythrocyte is normally between 28 and 35 picograms. Any disturbance in haemoglobin synthesis can lead to an erythrocyte haemoglobin content below 28 picograms, which leads to hypochromic anaemia. In contrast, disturbances of nucleic acid metabolism result in a haemoglobin content greater than 36 picograms, which results in hyperchromic anaemia. Normochromic anaemia, characterised by an erythrocyte haemoglobin content between 28 and 35 picograms, is the result of alterations in the proliferation of erythropoiesis. Based on these three categories of anaemia, a unique system can be constructed, which can be used as a model for basic laboratory investigations and the work-up of anaemic patients. PMID:24583558
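The three-way scheme maps directly onto a threshold test on the haemoglobin content of one erythrocyte (in picograms); a minimal sketch:

```python
def classify_anaemia(mch_pg):
    """Classify anaemia by the haemoglobin content of one erythrocyte
    (in picograms), following the three categories described above."""
    if mch_pg < 28:
        return "hypochromic"    # impaired haemoglobin synthesis
    if mch_pg <= 35:
        return "normochromic"   # proliferation disorder of erythropoiesis
    return "hyperchromic"       # nucleic acid metabolism disturbance
```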
Measuring anomaly with algorithmic entropy
NASA Astrophysics Data System (ADS)
Solano, Wanda M.
Anomaly detection refers to the identification of observations that are considered outside of normal. Since they are unknown to the system prior to training and rare, the anomaly detection problem is particularly challenging. Model-based techniques require large quantities of existing data to build the model. Statistically based techniques rely on statistical metrics or thresholds for determining whether a particular observation is anomalous. I propose a novel approach to anomaly detection using wavelet-based algorithmic entropy that does not require modeling or large amounts of data. My method embodies the concept of information distance, which rests on the fact that data encodes information. This distance is large when little information is shared, and small when there is greater information sharing. I compare my approach with several techniques in the literature using data obtained from testing of NASA's Space Shuttle Main Engines (SSME).
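Since algorithmic (Kolmogorov) entropy is uncomputable, information distances are approximated in practice with real compressors. The normalized compression distance below is a standard computable proxy, shown here with zlib rather than the wavelet-based estimator the thesis proposes:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable stand-in for the
    algorithmic information distance: near 0 when x and y share most
    of their information, near 1 when they share little."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

An anomaly detector along these lines flags an observation whose distance to the pool of normal observations exceeds a threshold, with no model fitting required.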
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time-consuming. By using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation and results of the created tools.
Improved Heat-Stress Algorithm
NASA Technical Reports Server (NTRS)
Teets, Edward H., Jr.; Fehn, Steven
2007-01-01
NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
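For reference, the outdoor WBGT combines the three thermometer readings mentioned above with fixed weights; this is the standard formula that the site-specific algorithm approximates from routine weather measurements without the dedicated instrument:

```python
def wbgt_outdoor(t_wet, t_globe, t_dry):
    """Standard outdoor wet-bulb globe temperature.

    t_wet: natural wet-bulb temperature, t_globe: black-globe
    temperature, t_dry: dry-bulb (air) temperature, all in the
    same unit. The wet-bulb term dominates because evaporative
    cooling is the body's main defense against heat stress.
    """
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry
```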
Virtual Crystals and Kleber's Algorithm
NASA Astrophysics Data System (ADS)
Okado, Masato; Schilling, Anne; Shimozono, Mark
Kirillov and Reshetikhin conjectured what is now known as the fermionic formula for the decomposition of tensor products of certain finite dimensional modules over quantum affine algebras. This formula can also be extended to the case of q-deformations of tensor product multiplicities as recently conjectured by Hatayama et al. In its original formulation it is difficult to compute the fermionic formula efficiently. Kleber found an algorithm for the simply-laced algebras which overcomes this problem. We present a method which reduces all other cases to the simply-laced case using embeddings of affine algebras. This is the fermionic analogue of the virtual crystal construction by the authors, which is the realization of crystal graphs for arbitrary quantum affine algebras in terms of those of simply-laced type.
Parallel algorithms for message decomposition
Teng, S.H.; Wang, B.
1987-06-01
The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 <= P <= n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
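The serial baseline that the parallel algorithm improves on is a left-to-right scan that emits a symbol whenever the accumulated bits match a codeword, valid because the prefix property guarantees no codeword is a prefix of another:

```python
def decode_prefix(bits, code):
    """Decode a prefix-coded message serially.

    bits: string of '0'/'1' characters; code: dict mapping each
    codeword to its symbol. The parallel algorithms in the paper
    split `bits` among P processors and reconcile the unknown
    codeword boundaries with prefix sums; this is the serial baseline.
    """
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in code:          # prefix property: first match is the codeword
            out.append(code[cur])
            cur = ""
    return out
```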
Anaphora Resolution Algorithm for Sanskrit
NASA Astrophysics Data System (ADS)
Pralayankar, Pravin; Devi, Sobha Lalitha
This paper presents an algorithm which identifies different types of pronominals and their antecedents in Sanskrit, an Indo-European language. The computational grammar implemented here uses very familiar concepts such as clause, subject and object, which are identified with the help of morphological information, together with concepts such as precede and follow. It is well known that natural languages contain anaphoric expressions, gaps and elliptical constructions of various kinds, and that understanding of natural languages involves assignment of interpretations to these elements. Therefore, it is only to be expected that natural language understanding systems must have the necessary mechanism to resolve the same. The method we adopt here for resolving the anaphors exploits the morphological richness of the language. The system gives encouraging results when tested on a small corpus.
Novel MRC algorithms using GPGPU
NASA Astrophysics Data System (ADS)
Kato, Kokoro; Taniguchi, Yoshiyuki; Inoue, Tadao; Kadota, Kazuya
2012-06-01
GPGPU (General-Purpose computing on Graphics Processing Units) has been attracting many engineers and scientists who develop their own software for massive numerical computation. With hundreds of core processors and tens of thousands of threads operating concurrently, GPGPU programs can run significantly faster if their software architecture is well optimized. The basic programming model used in GPGPU is SIMD (Single Instruction, Multiple Data stream), and one must adapt one's programming model to SIMD. However, conditional branching is fundamentally not allowed in SIMD, and this limitation makes it quite challenging to apply GPGPU to photomask-related software such as MDP or MRC. In this paper unique methods are proposed to utilize GPUs for MRC operation. We explain novel algorithms of mask layout verification by GPGPU.
Advanced spectral signature discrimination algorithm
NASA Astrophysics Data System (ADS)
Chakravarty, Sumit; Cao, Wenjie; Samat, Alim
2013-05-01
This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many different algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, which is then compared using the Hamming distance. This framework has been extended in techniques like SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. Such structures harness only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information by the use of a technique similar to circular convolution. In the approach we consider the signature as cyclic by appending its two ends. We then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features as in SDFC can be used to study the local structural variation for each circular shift. We can then create different measures by building histograms from the shifts and thereafter using different techniques for information extraction from the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments using the proposed technique show the viability of the proposed methods and their performance as compared to current binary signature coding techniques.
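Treating the signature as cyclic amounts to comparing it against its own circular shifts. A minimal sketch using circular autocorrelation as the per-shift measure (the paper uses texture features instead, so the specific measure here is a stand-in):

```python
def circular_shift_profile(sig):
    """For each circular shift k, correlate the mean-centered signature
    with its shifted copy; peaks at k > 0 reveal covariation between
    spectrally distant parts of the signature (the 'combination lock'
    view described above, with autocorrelation as the local measure)."""
    n = len(sig)
    mean = sum(sig) / n
    c = [s - mean for s in sig]
    return [sum(c[i] * c[(i + k) % n] for i in range(n)) for k in range(n)]
```

The profile over all shifts plays the role of the histograms in the paper: a feature vector summarizing structure that purely local binary codes miss.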
SLAP lesions: a treatment algorithm.
Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf
2016-02-01
Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may severely impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature, particularly for young athletes. However, the results in throwing athletes are less successful, with a significant proportion of patients failing to regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates than in younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repair in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of the patient, concomitant lesions, and functional requirements, as well as the sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggested treatment algorithm is: type I: conservative treatment or arthroscopic debridement; type II: SLAP repair or biceps tenotomy/tenodesis; type III: resection of the unstable bucket-handle tear; type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of the biceps tendon is affected); type V: Bankart repair and SLAP repair; type VI: resection of the flap and SLAP repair; and type VII: refixation of the anterosuperior labrum and SLAP repair. PMID:26818554
Consensus algorithms in decentralized networks
NASA Astrophysics Data System (ADS)
Coduti, Leonardo Phillip
We consider a decentralized network with the following goal: the state at each node of the network iteratively converges to the same value. Ensuring that this goal is achieved requires certain properties of the topology of the network and the function describing the evolution of the network. We will present these properties for deterministic systems, extending current results in the literature. As an additional contribution, we will show how the convergence results for stochastic systems are direct consequences of the corresponding deterministic systems, drastically simplifying many other current results. In general, these consensus systems can be both time invariant and time varying, and we will extend all our deterministic and stochastic results to include time varying systems as well. We will then consider a more complex consensus problem, the resource allocation problem. In this situation each node of the network has both a state and a capacity. The capacity is a monotone increasing function of the state, and the goal is for the nodes to exchange capacity in a decentralized manner in order to drive all of the states to the same value. Conditions ensuring consensus in the deterministic setting will be presented, and we will show how convergence in this system also comes from the fundamental deterministic result for consensus algorithms. The main results will again be extended to stochastic and time varying systems. The linear consensus system requires the construction of a matrix of weighting parameters with specific properties. We present an iterative algorithm for determining the weighting parameters in a decentralized fashion; the weighting parameters are specified by the nodes and each node only specifies the weighting parameters associated with that node. The results assume that the communication graph of the network is directed, and we consider both synchronous communication and stochastic asynchronous networks.
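The basic linear consensus iteration underlying such results can be sketched as follows. This is a minimal synchronous, time-invariant example with an assumed row-stochastic weight matrix, not the dissertation's general construction.

```python
def consensus_step(x, W):
    """One synchronous update x <- W x: each node takes a weighted
    average of its own state and its neighbors' states."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

def run_consensus(x, W, steps=100):
    """Iterate the linear consensus map; under standard connectivity
    conditions all states converge to a common value."""
    for _ in range(steps):
        x = consensus_step(x, W)
    return x
```

For a doubly stochastic W the common limit is the average of the initial states; with merely row-stochastic weights the nodes still agree, but on a weighted average.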
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
On mapping systolic algorithms onto the hypercube
Ibarra, O.H.; Sohn, S.M.
1990-01-01
Much effort has been devoted toward developing efficient algorithms for systolic arrays. Here the authors consider the problem of mapping these algorithms into efficient algorithms for a fixed-size hypercube architecture. They describe in detail several optimal implementations of algorithms given for one-way one- and two-dimensional systolic arrays. Since interprocessor communication is many times slower than local computation in parallel computers built to date, the problem of efficient communication is specifically addressed for these mappings. In order to experimentally validate the technique, five systolic algorithms were mapped in various ways onto a 64-node NCUBE/7 MIMD hypercube machine. The algorithms are for the following problems: the shuffle scheduling problem, finite impulse response filtering, linear context-free language recognition, matrix multiplication, and computing the Boolean transitive closure. Experimental evidence indicates that good performance is obtained for the mappings.
An improved Camshift algorithm for target recognition
NASA Astrophysics Data System (ADS)
Fu, Min; Cai, Chao; Mao, Yusu
2015-12-01
The Camshift algorithm and the three-frame difference algorithm are popular target recognition and tracking methods. Camshift requires manual initialization of the search window, which introduces subjective error and inconsistency, and it computes a color histogram only during initialization, so the color probability model cannot be updated continuously. The three-frame difference method, on the other hand, does not require a manually initialized search window and can make full use of the target's motion information, but it only determines the range of motion: it cannot determine the contours of the object and cannot exploit the target's color information. An improved Camshift algorithm is therefore proposed to overcome the disadvantages of the original: the three-frame difference operation is combined with the object's motion and color information to identify the target. The improved Camshift algorithm is implemented and shows better performance in recognition and tracking of the target.
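The three-frame difference step can be sketched as follows, assuming frames given as 2-D lists of grayscale values and an illustrative threshold; the combination with Camshift's color model is omitted here.

```python
def frame_diff(a, b, thresh=10):
    """Binary map of pixels whose absolute difference between two
    frames exceeds a threshold."""
    return [[1 if abs(pa - pb) > thresh else 0
             for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def three_frame_mask(f1, f2, f3, thresh=10):
    """Three-frame difference: AND the differences of consecutive
    frames, which localizes a moving object in the middle frame
    without any manual search-window initialization."""
    d12 = frame_diff(f1, f2, thresh)
    d23 = frame_diff(f2, f3, thresh)
    return [[p & q for p, q in zip(r1, r2)] for r1, r2 in zip(d12, d23)]
```

In the improved algorithm described above, a mask like this would seed the Camshift search window instead of a hand-drawn one.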
ENAS-RIF algorithm for image restoration
NASA Astrophysics Data System (ADS)
Yang, Yang; Yang, Zhen-wen; Shen, Tian-shuang; Chen, Bo
2012-11-01
Images of objects acquired by space-based systems working through the atmospheric turbulence environment, such as those used in astronomy and remote sensing, are seriously blurred, so restoration is required to reconstruct turbulence-degraded images. In order to enhance the performance of image restoration, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm is presented, based on a reliable support region and an enhanced cost function. First, the Curvelet denoising algorithm is used to weaken image noise. Second, reliable estimation of the object support region is used to accelerate convergence. The mean gray level is then set as the gray level of the image background pixels. Finally, an object-construction limit and a logarithm function are added to enhance the algorithm's stability. The experimental results show that the novel ENAS-RIF algorithm converges faster than the NAS-RIF algorithm and is better at image restoration.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many disciplines. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
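The weighted-sum scalarization mentioned above can be illustrated with a toy two-objective problem: each weight vector collapses the problem to a single objective, and sweeping the weights traces approximate Pareto-optimal solutions. Plain random search stands in here for the metaheuristics (DE, GA, PSO) named in the abstract; the objectives and ranges are invented for illustration.

```python
import random

def f1(x): return x * x            # objective 1: minimize x^2
def f2(x): return (x - 2.0) ** 2   # objective 2: minimize (x-2)^2

def weighted_sum_min(w1, w2, trials=2000, seed=0):
    """Minimize the scalarized objective w1*f1 + w2*f2 by random
    search over an assumed decision range [-1, 3]."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(trials):
        x = rng.uniform(-1.0, 3.0)
        v = w1 * f1(x) + w2 * f2(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Sweeping the weights yields points along the Pareto front [0, 2].
pareto_points = [weighted_sum_min(w, 1.0 - w) for w in (0.1, 0.5, 0.9)]
```

For this convex toy problem the true scalarized minimizer is x* = 2*w2/(w1 + w2), so the three points should sit near 1.8, 1.0, and 0.2.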
Adaptive link selection algorithms for distributed estimation
NASA Astrophysics Data System (ADS)
Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent
2015-12-01
This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state and tracking performance, and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
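A minimal single-node LMS sketch (with no link selection) shows the adaptive update the distributed variants above build on: the filter weights are nudged along the instantaneous error gradient for each sample. The 2-tap system and step size below are illustrative assumptions.

```python
import random

def lms(samples, mu=0.1, n_taps=2):
    """Least mean squares: for each (input_vector, desired_output)
    pair, compute the output, the error, and nudge the weights by
    mu * error * input."""
    w = [0.0] * n_taps
    for x, d in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Demo: identify an assumed 2-tap system from noiseless data.
rng = random.Random(1)
w_true = [1.0, -0.5]
data = []
for _ in range(500):
    x = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
    data.append((x, w_true[0] * x[0] + w_true[1] * x[1]))
w_hat = lms(data)
```

In the distributed setting of the paper, each node would run an update like this while also choosing which neighbors' links to use.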
Spatial search algorithms on Hanoi networks
NASA Astrophysics Data System (ADS)
Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan
2013-01-01
We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role to speed up the searching algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.
Orbital objects detection algorithm using faint streaks
NASA Astrophysics Data System (ADS)
Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya
2016-02-01
This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm, the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
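The signal-summing idea in step (2) can be sketched as follows: collapsing an image along the (here vertical) streak direction raises a faint streak above the per-pixel noise. Skewing the image (step 1) would align other streak angles with this axis; that step, and the thresholding constant, are simplified assumptions here.

```python
def column_sums(image):
    """Sum pixel values down each column of a 2-D list,
    i.e., compress the image along the vertical axis."""
    return [sum(col) for col in zip(*image)]

def detect_streak(image, k=3.0):
    """Return column indices whose summed signal exceeds
    mean + k * (a crude spread estimate) of the column sums."""
    sums = column_sums(image)
    mean = sum(sums) / len(sums)
    spread = (sum((s - mean) ** 2 for s in sums) / len(sums)) ** 0.5
    return [i for i, s in enumerate(sums) if s > mean + k * spread]
```

A streak one pixel wide with per-pixel signal below the noise floor becomes detectable because its column sum grows linearly with image height while uncorrelated noise grows only with the square root.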
Modified OMP Algorithm for Exponentially Decaying Signals
Kazimierczuk, Krzysztof; Kasprzak, Paweł
2015-01-01
A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
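A minimal orthogonal matching pursuit (OMP) sketch for the generic CS setting the paper starts from is shown below; the LPMP variant described above would replace the dictionary atoms with Lorentzian peak shapes. For simplicity the demo uses an orthonormal dictionary, for which the greedy projection coefficient is exact (full OMP re-solves a least-squares problem over all chosen atoms at each step).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def omp(D, y, n_atoms):
    """Greedy matching pursuit over a dictionary D (list of unit-norm
    atoms): repeatedly pick the atom most correlated with the
    residual and subtract its contribution."""
    residual = list(y)
    chosen = []
    for _ in range(n_atoms):
        k = max(range(len(D)), key=lambda i: abs(dot(D[i], residual)))
        c = dot(D[k], residual)   # exact only for orthonormal atoms
        chosen.append((k, c))
        residual = [r - c * d for r, d in zip(residual, D[k])]
    return chosen, residual
```

With a standard-basis dictionary, a 2-sparse signal is recovered exactly in two iterations, which is the strictly-sparse case where standard OMP applies; the paper's point is that NMR signals are not of this form, motivating the Lorentzian atoms.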
Voronoi particle merging algorithm for PIC codes
NASA Astrophysics Data System (ADS)
Luu, Phuc T.; Tückmantel, T.; Pukhov, A.
2016-05-01
We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist of only particles that are in close proximity in the phase space to each other. We show the performance of our algorithm in the case of the two-stream instability and the magnetic shower.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, each used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm better suited to subdividing each block type. The hypothesis is that, given the different approaches the block subdivision algorithms take, different algorithms are likely better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites; it also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites; it also produces more similar parcel shapes and patterns.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
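The fitting-interval idea above can be sketched in a few lines: each block of samples is replaced by a short Chebyshev coefficient series fitted at Chebyshev nodes, and decompression evaluates the series. This is a generic Chebyshev-approximation sketch, not the flight algorithm; the block length and polynomial order are illustrative.

```python
import math

def cheb_fit(samples, order):
    """Chebyshev coefficients for samples taken at the Chebyshev
    nodes x_j = cos(pi*(j+0.5)/N) over [-1, 1], via the discrete
    cosine orthogonality of T_k at those nodes."""
    n = len(samples)
    coeffs = []
    for k in range(order + 1):
        s = sum(samples[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
        coeffs.append((2.0 / n) * s * (0.5 if k == 0 else 1.0))
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) by the Clenshaw recurrence
    (the decompression step)."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]
```

Compression comes from keeping only `order + 1` coefficients per block instead of all samples; the equal-error and min-max properties quoted above are what make the truncation loss nearly uniform across the fitting interval.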
Evolutionary Algorithm for Optimal Vaccination Scheme
NASA Astrophysics Data System (ADS)
Parousis-Orthodoxou, K. J.; Vlachos, D. S.
2014-03-01
This work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The algorithm uses a basic genetic algorithm with crossover and mutation techniques to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
Sequential and Parallel Algorithms for Spherical Interpolation
NASA Astrophysics Data System (ADS)
De Rossi, Alessandra
2007-09-01
Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
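One of the two local methods named above, Inverse Distance Weighted interpolation, is simple to sketch on the sphere using great-circle distance. Points are assumed to be (latitude, longitude) pairs in radians on the unit sphere, and the power parameter is illustrative.

```python
import math

def great_circle(p, q):
    """Great-circle (angular) distance between two (lat, lon) points
    on the unit sphere, clamped for floating-point safety."""
    (la1, lo1), (la2, lo2) = p, q
    c = (math.sin(la1) * math.sin(la2) +
         math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.acos(max(-1.0, min(1.0, c)))

def idw(points, values, query, p=2.0):
    """Inverse Distance Weighted interpolation: weight each data
    value by 1/d^p, where d is the great-circle distance."""
    num = den = 0.0
    for pt, v in zip(points, values):
        d = great_circle(pt, query)
        if d == 0.0:
            return v          # query coincides with a data point
        w = 1.0 / d ** p
        num += w * v
        den += w
    return num / den
```

A parallel version, in the spirit of the paper, would partition the scattered points among processes and reduce the partial numerator/denominator sums.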
Robustness of Tree Extraction Algorithms from LIDAR
NASA Astrophysics Data System (ADS)
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) increase the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI UAS equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was obtained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (a mature loblolly pine plantation) and one heterogeneous (an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to parameters, but its results were worse than those of most parameter sets of the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
Mapping algorithms on regular parallel architectures
Lee, P.
1989-01-01
Many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
A parallel algorithm for global routing
NASA Technical Reports Server (NTRS)
Brouwer, Randall J.; Banerjee, Prithviraj
1990-01-01
A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Algorithm to search for genomic rearrangements
NASA Astrophysics Data System (ADS)
Nałecz-Charkiewicz, Katarzyna; Nowak, Robert
2013-10-01
The aim of this article is to discuss the comparison of nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements have been described: the Smith-Waterman algorithm, and a new method that searches for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program in client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, and they are being prepared to compare the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
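The Knuth-Morris-Pratt component of the marker-search method can be sketched compactly: a failure table lets the scan over a genome string avoid re-examining characters, so a marker of length m is found in all its positions in O(n + m) time. This is textbook KMP, not the authors' full marker pipeline.

```python
def kmp_search(text, pattern):
    """Return all start indices (including overlaps) of pattern
    in text, using the Knuth-Morris-Pratt failure table."""
    if not pattern:
        return []
    # failure[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = failure[k - 1]   # keep scanning for overlapping hits
    return hits
```

Marker positions found this way in two genomes can then be compared to flag candidate rearrangement breakpoints.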
A Unifying Multibody Dynamics Algorithm Development Workbench
NASA Technical Reports Server (NTRS)
Ziegler, John L.
2005-01-01
The development of new and efficient algorithms for multibody dynamics has been an important research area. These algorithms are used for modeling, simulation, and control of systems such as spacecraft, robotic systems, automotive applications, the human body, manufacturing operations, and micro-electromechanical systems (MEMS). At JPL's Dynamics and Real Time Simulation (DARTS) Laboratory we have developed software that serves as a computational workbench for these algorithms. This software utilizes the mathematical perspective of the spatial operator algebra, which allows the development of dynamics algorithms and new insights into multibody dynamics.
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
Unifying parametrized VLSI Jacobi algorithms and architectures
NASA Astrophysics Data System (ADS)
Deprettere, Ed F. A.; Moonen, Marc
1993-11-01
Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a `universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.
Thermostat algorithm for generating target ensembles
NASA Astrophysics Data System (ADS)
Bravetti, A.; Tapias, D.
2016-02-01
We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.
A new frame-based registration algorithm
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
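The abstract does not give the algorithm's internals, so as a generic illustration of least-squares registration between corresponding 3D point sets, here is the standard Kabsch/Procrustes solution. This is not the paper's rod-based frame method; it only shows the kind of least-squares rigid fit such registration algorithms build on.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid registration: find rotation R and translation t
    minimizing sum ||R @ P_i + t - Q_i||^2 for corresponding point sets
    P, Q (rows are 3D points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

With noise-free correspondences the true transform is recovered exactly; with noisy CT-derived points the same formula gives the least-squares optimum.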
Java implementation of Class Association Rule algorithms
Tamura, Makio
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, and is to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.
Practical algorithmic probability: an image inpainting example
NASA Astrophysics Data System (ADS)
Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii
2013-12-01
The possibility of practical application of algorithmic probability is analyzed using the example of the image inpainting problem, which corresponds precisely to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we consider one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on the spectrum entropy criterion. This algorithm showed promising results in spite of the very simple representation. The same approach can be used to introduce an ALP-based criterion for more powerful image representations.
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
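The base GS iteration that all of these variants modify (without the paper's spatial phase perturbation or HIO steps) alternates between the object and Fourier domains, enforcing the known amplitude in each while carrying the phase estimate across. A minimal sketch, where the FFT stands in for the optical transform:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=200, seed=0):
    """Classic Gerchberg-Saxton iteration: find a phase such that a field
    with amplitude `source_amp` has Fourier-domain amplitude `target_amp`.
    Amplitudes are clamped in each domain in turn; only the phase evolves."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    field = source_amp * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(field)                         # to Fourier domain
        F = target_amp * np.exp(1j * np.angle(F))      # enforce Fourier amplitude
        f = np.fft.ifft2(F)                            # back to object domain
        field = source_amp * np.exp(1j * np.angle(f))  # enforce object amplitude
    return np.angle(field)
```

The stagnation behaviour discussed in the abstract arises because these alternating amplitude projections can settle into fixed points far from a true solution, which is what the SPP and hybrid input-output modifications are designed to escape.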
The annealing robust backpropagation (ARBP) learning algorithm.
Chuang, C C; Su, S F; Hsiao, C C
2000-01-01
Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always come up with acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed. For the M-estimation type of learning algorithms, the loss function plays the role of discriminating outliers from the majority by degrading the effects of those outliers in learning. However, the loss function used in those algorithms may not correctly discriminate against those outliers. In this paper, the annealing robust backpropagation learning algorithm (ARBP), which adopts the annealing concept into robust learning algorithms, is proposed to deal with the problem of modeling in the presence of outliers. The proposed algorithm has been employed in various examples. The results all demonstrated its superiority over other robust learning algorithms in the presence of outliers. In the paper, not only is the annealing concept adopted into the robust learning algorithms, but the annealing schedule k/t was also found experimentally to achieve the best performance among the annealing schedules considered, where k is a constant and t is the epoch number. PMID:18249835
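As an illustration of the annealing idea, a robustness parameter beta can follow the k/t schedule named in the abstract, so that the loss is near-quadratic early in training and progressively suppresses outliers later. The Cauchy-type rho below is an assumed choice for illustration; the paper does not specify it here.

```python
import numpy as np

def annealed_robust_loss(residuals, epoch, k=10.0):
    """Annealed M-estimator loss with beta = k / epoch.
    For large beta (early epochs), 0.5*beta*log1p(r^2/beta) ~ r^2/2,
    i.e. ordinary squared error; as beta shrinks, the contribution of
    large residuals (outliers) is progressively degraded."""
    beta = k / epoch
    r2 = np.asarray(residuals, float) ** 2
    return float(np.sum(0.5 * beta * np.log1p(r2 / beta)))
```

The bounded influence of this loss (its derivative saturates for large residuals) is what lets the later epochs ignore outliers that would dominate a plain squared-error backpropagation.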
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Overview of an Algorithm Plugin Package (APP)
NASA Astrophysics Data System (ADS)
Linda, M.; Tilmes, C.; Fleig, A. J.
2004-12-01
Science software that runs operationally is fundamentally different from software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is broadly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect the common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.
A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm
NASA Astrophysics Data System (ADS)
Yecai, Guo; Rui, Ding
To address the impact of parameter selection on the application of support vector machines (SVM) to blind equalization, an SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In the proposed algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, exploiting its advantages of preventing premature convergence, avoiding local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the defect of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have proved the validity of the algorithm.
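The constant modulus criterion underlying all of these variants is the classic Godard/CMA stochastic-gradient update; the sketch below shows only that plain update, not the SVM cost or the clone-selection parameter search described in the abstract.

```python
import numpy as np

def cma_step(w, x, mu=1e-3, R2=1.0):
    """One stochastic-gradient step of the constant modulus algorithm on an
    equalizer tap vector `w`, given a window of received samples `x`:
        y = w^H x                 (equalizer output)
        e = y * (|y|^2 - R2)      (constant-modulus error)
        w <- w - mu * conj(e) * x (gradient step on E[(|y|^2 - R2)^2])
    R2 is the dispersion constant of the transmitted constellation."""
    y = np.vdot(w, x)             # np.vdot conjugates w, giving w^H x
    e = y * (abs(y) ** 2 - R2)
    return w - mu * np.conj(e) * x
```

Because the criterion depends only on |y|, no training sequence is needed, which is what makes the equalization "blind"; the cost of that blindness is the sensitivity to parameter choices that the CSA-SVM-CMA targets.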
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile users with the algorithm; we can classify mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
Formation Algorithms and Simulation Testbed
NASA Technical Reports Server (NTRS)
Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward
2004-01-01
Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance the technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing, and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.
The algorithmic origins of life
Walker, Sara Imari; Davies, Paul C. W.
2013-01-01
Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchical structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265
Multivariate Spline Algorithms for CAGD
NASA Technical Reports Server (NTRS)
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline is expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points when an additional knot is inserted.
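For the univariate (s = 1) case, the Mansfield-like recursion mentioned above is the familiar Cox-de Boor formula, which expresses an order-k B-spline in terms of order-(k-1) B-splines and bottoms out at order-1 piecewise constants. A minimal sketch:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of
    order k (degree k-1) at parameter t over the given knot sequence.
    Order 1 is a piecewise constant on [knots[i], knots[i+1])."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + k - 1] - knots[i]
    right_den = knots[i + k] - knots[i + 1]
    left = 0.0 if left_den == 0 else \
        (t - knots[i]) / left_den * bspline_basis(i, k - 1, t, knots)
    right = 0.0 if right_den == 0 else \
        (knots[i + k] - t) / right_den * bspline_basis(i + 1, k - 1, t, knots)
    return left + right
```

A spline curve is then a linear combination of these basis functions with the control points as coefficients; inside the valid parameter range the basis functions sum to one.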
Macroparticle merging algorithm for PIC
NASA Astrophysics Data System (ADS)
Vranic, Marija; Grismayer, Thomas; Martins, Joana L.; Fonseca, Ricardo A.; Silva, Luis O.
2014-10-01
With the development of large supercomputers (>1,000,000 cores), the complexity of the problems we are able to simulate with particle-in-cell (PIC) codes has increased substantially. However, localized density spikes can introduce load imbalance where a small fraction of cores is occupied while the others remain idle. An additional challenge lies in self-consistent modeling of QED effects at ultra-high laser intensities (I > 10^23 W/cm^2), where the number of pairs produced sometimes grows exponentially and may reach beyond the maximum number of particles that each processor can handle. We can overcome this by resampling the 6D phase space: the macroparticles can be merged into fewer particles with higher particle weights. The existing merging scheme preserves the total charge, but not the particle distribution. Here we present a novel particle-merging algorithm that preserves the energy, momentum and charge locally and thereby minimizes the potential influence on the relevant physics. Through examples of classical plasma physics and more extreme scenarios, we show that the physics is not altered but we obtain an immense increase in performance.
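The basic bookkeeping of a weighted merge can be sketched as below. Note that this simplified version merges a group into a single macroparticle, which can match total charge and momentum but generally not the total energy as well; the algorithm in the abstract merges into two particles precisely so that energy can also be conserved.

```python
import numpy as np

def merge_to_one(weights, momenta):
    """Charge- and momentum-conserving merge of several same-species
    macroparticles into one. `weights` are the particle weights (proportional
    to charge); `momenta` has shape (n, 3). Returns the merged weight and the
    merged per-particle momentum."""
    w = np.asarray(weights, float)
    p = np.asarray(momenta, float)
    W = w.sum()                              # total weight -> total charge kept
    P = (w[:, None] * p).sum(axis=0) / W     # weight-averaged momentum
    return W, P
```

The merged particle carries momentum P with weight W, so W * P reproduces the summed momentum of the originals exactly; the energy mismatch of this one-particle version is the quantity the two-particle scheme eliminates.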
Reliability measure for segmenting algorithms
NASA Astrophysics Data System (ADS)
Alvarez, Robert E.
2004-05-01
Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of residuals is transformed to a zero-mean, unit standard deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images using "leave-one-out" testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter with automated segment results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
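The residual computation at the heart of the measure is a projection onto the retained PCA modes followed by taking what the model cannot explain. The sketch below assumes the modes are stored as orthonormal rows; the function name and interface are ours, not the paper's.

```python
import numpy as np

def shape_residual(x, mean, modes):
    """Sum of squared residuals between a segmentation result `x` (a
    flattened landmark/boundary vector) and its projection onto the span of
    the retained PCA modes (rows of `modes`, assumed orthonormal). A large
    value flags a segmentation the shape model cannot explain."""
    d = x - mean
    coeffs = modes @ d        # coordinates in the retained-mode subspace
    recon = modes.T @ coeffs  # back-projection into shape space
    r = d - recon             # component outside the model subspace
    return float(r @ r)
```

In the paper this sum of squares is then standardized to a zero-mean, unit-variance Gaussian variable so that a tail probability can serve as the reliability score.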
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
NASA Astrophysics Data System (ADS)
Choi, Shinkook; Baek, Jongduk
2015-03-01
In a cone beam computed tomography (CBCT) system, the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects; the algorithm therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
Gaining Algorithmic Insight through Simplifying Constraints.
ERIC Educational Resources Information Center
Ginat, David
2002-01-01
Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects from this perspective. This paper is a first step towards achieving this objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
A Runge-Kutta Nystrom algorithm.
NASA Technical Reports Server (NTRS)
Bettis, D. G.
1973-01-01
A Runge-Kutta algorithm of order five is presented for the solution of the initial value problem where the system of ordinary differential equations is of second order and does not contain the first derivative. The algorithm includes the Fehlberg step control procedure.
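The abstract does not reproduce Bettis's fifth-order coefficients, so as a minimal illustration of the problem class (second-order systems y'' = f(y) that do not contain the first derivative, the setting in which Runge-Kutta-Nystrom methods apply), here is the classic second-order Stormer-Verlet scheme instead:

```python
import numpy as np

def stormer_verlet(f, y0, v0, h, steps):
    """Integrate y'' = f(y) with the Stormer-Verlet scheme. This is NOT the
    order-five Runge-Kutta-Nystrom algorithm of the abstract, just a simple,
    well-known integrator for the same class of second-order problems
    lacking first-derivative dependence."""
    y, v = float(y0), float(v0)
    a = f(y)
    out = [y]
    for _ in range(steps):
        y += h * v + 0.5 * h * h * a  # position update
        a_new = f(y)
        v += 0.5 * h * (a + a_new)    # velocity update with averaged force
        a = a_new
        out.append(y)
    return np.array(out)
```

Exploiting the absence of y' in f is exactly what lets Nystrom-type methods achieve a given order with fewer function evaluations than a generic Runge-Kutta scheme applied to the equivalent first-order system.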
Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y(sub 1),...,Y(sub N) denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: dot-x(t) = F(x(t)), x(0) = p is an element of G. The algorithms depend upon constants c(sub i) and c(sub ij), for i = 1,...,k and j is less than i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R(N), then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c(sub i) and c(sub ij) must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
The Porter Stemming Algorithm: Then and Now
ERIC Educational Resources Information Center
Willett, Peter
2006-01-01
Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
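The flavor of Porter's 1980 algorithm can be seen from its first rule group alone. Step 1a strips plural suffixes by longest match; the rules below are taken from the original paper:

```python
def porter_step_1a(word):
    """Step 1a of Porter's 1980 stemming algorithm: plural suffixes.

    Rules are tried longest-first; only the first matching rule applies."""
    if word.endswith("sses"):
        return word[:-2]          # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]          # ponies -> poni
    if word.endswith("ss"):
        return word               # caress -> caress
    if word.endswith("s"):
        return word[:-1]          # cats -> cat
    return word
```

The full algorithm applies four further steps with conditions on the "measure" (vowel-consonant structure) of the remaining stem.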
Pitch-Learning Algorithm For Speech Encoders
NASA Technical Reports Server (NTRS)
Bhaskar, B. R. Udaya
1988-01-01
Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.
Kalman plus weights: a time scale algorithm
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2001-01-01
KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
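The weighting rule stated above — weights inversely proportional to the white-FM variances of the clocks — can be sketched directly (a simplified illustration of the BTSE averaging step, with hypothetical variance values; the Kalman filtering component is omitted):

```python
def btse_weights(white_fm_variances):
    """Clock weights inversely proportional to white-FM variance,
    normalized to sum to one: quieter clocks count for more."""
    inv = [1.0 / s2 for s2 in white_fm_variances]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_offset(clock_offsets, weights):
    """Basic time scale equation core: the ensemble offset is the
    weighted average of the per-clock offset estimates."""
    return sum(w * x for w, x in zip(weights, clock_offsets))
```

With variances 1, 2, and 4, the first clock receives the largest weight, and a common offset shared by all clocks passes through unchanged.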
Algorithm for genome contig assembly. Final report
1995-09-01
An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.
Performance analysis of cone detection algorithms.
Mariotti, Letizia; Devaney, Nicholas
2015-04-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
IUS guidance algorithm gamma guide assessment
NASA Technical Reports Server (NTRS)
Bray, R. E.; Dauro, V. A.
1980-01-01
The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.
Faster Algorithms on Branch and Clique Decompositions
NASA Astrophysics Data System (ADS)
Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin
We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{ωk/2}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{ωk/2}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.
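A core building block of the fast subset convolution technique named above is the zeta transform over the subset lattice, computable in O(2^n · n) time rather than the naive enumeration of subset pairs. A minimal sketch:

```python
def zeta_transform(f, n):
    """Zeta transform over subsets of an n-element ground set:
    returns g with g[S] = sum of f[T] over all subsets T of S.

    The 'Yates/SOS' dynamic program handles one element per pass,
    giving O(2^n * n) total work."""
    g = list(f)
    for i in range(n):
        for mask in range(1 << n):
            if mask & (1 << i):
                g[mask] += g[mask ^ (1 << i)]
    return g
```

Fast subset convolution combines this transform with a ranking by subset size and the inverse (Möbius) transform; the sketch above shows only the lattice-sum ingredient.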
Optical Sensor Based Corn Algorithm Evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Optical sensor based algorithms for corn fertilization have been developed by researchers in several states. The goal of this international research project was to evaluate these different algorithms and determine their robustness over a large geographic area. Concurrently the goal of this project was to...
Explaining the Cross-Multiplication Algorithm
ERIC Educational Resources Information Center
Handa, Yuichi
2009-01-01
Many high-school mathematics teachers have likely been asked by a student, "Why does the cross-multiplication algorithm work?" It is a commonly used algorithm when dealing with proportion problems, conversion of units, or fractional linear equations. For most teachers, the explanation usually involves the idea of finding a common denominator--one…
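The common-denominator explanation mentioned above can be written out in two steps: multiply both sides of the proportion by the product of the denominators, then cancel.

```latex
\frac{a}{b} = \frac{c}{d}
\;\Longrightarrow\;
\frac{a}{b}\cdot bd = \frac{c}{d}\cdot bd
\;\Longrightarrow\;
ad = bc \qquad (b \neq 0,\; d \neq 0)
```

The "cross" pattern is thus just the visible trace of clearing both denominators at once.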
Global Optimality of the Successive Maxbet Algorithm.
ERIC Educational Resources Information Center
Hanafi, Mohamed; ten Berge, Jos M. F.
2003-01-01
It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)
Genetic Algorithms with Local Minimum Escaping Technique
NASA Astrophysics Data System (ADS)
Tamura, Hiroki; Sakata, Kenichiro; Tang, Zheng; Ishii, Masahiro
In this paper, we propose a genetic algorithm (GA) with a local minimum escaping technique. The proposed method can escape from a local minimum by correcting parameters whenever the genetic algorithm falls into one. Simulations are performed on a scheduling problem without buffer capacity using the proposed method, and its validity is shown.
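One common form of such an escape mechanism — not necessarily the authors' exact parameter-correction rule — is to detect stagnation of the best fitness and temporarily raise the mutation rate. A toy sketch on the OneMax problem (all names and parameter values are illustrative):

```python
import random

def onemax(bits):
    return sum(bits)

def ga_with_escape(n_bits=30, pop_size=20, gens=60, stall_limit=5, seed=1):
    """Toy elitist GA on OneMax. When the best fitness stalls for
    stall_limit generations, the mutation rate is doubled (capped at 0.5)
    as a simple stand-in for a local-minimum-escaping correction; it is
    reset once an improvement is found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mut, best, stall = 1.0 / n_bits, -1, 0
    history = []
    for _ in range(gens):
        pop.sort(key=onemax, reverse=True)
        if onemax(pop[0]) > best:
            best, stall, mut = onemax(pop[0]), 0, 1.0 / n_bits
        else:
            stall += 1
            if stall >= stall_limit:          # escape trigger
                mut = min(0.5, mut * 2)
        nxt = [pop[0][:]]                     # elitism keeps the incumbent
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)    # truncation selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]         # one-point crossover
            nxt.append([bit ^ (rng.random() < mut) for bit in child])
        pop = nxt
        history.append(best)
    return history
```

With elitism, the best-so-far trace is nondecreasing; the escape trigger only affects how aggressively the rest of the population is perturbed.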
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Evaluation of TCP congestion control algorithms.
Long, Robert Michael
2003-12-01
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high speed, Wide Area Network links to permit remote access to their Supercomputer systems. The current TCP congestion algorithm does not take full advantage of high delay, large bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find if an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30ms delay, and two-to-one with a 30ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
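The algorithms differ essentially in their per-ACK and per-loss window updates. A sketch of the two rules for standard TCP (AIMD congestion avoidance) and Scalable TCP, using the constants a = 0.01 and b = 0.125 from Kelly's Scalable TCP proposal (treat the exact constants as assumptions of this sketch):

```python
def reno_on_ack(cwnd):
    """Standard TCP congestion avoidance: +1 segment per RTT,
    i.e. +1/cwnd per ACK (additive increase)."""
    return cwnd + 1.0 / cwnd

def reno_on_loss(cwnd):
    """Standard TCP: multiplicative decrease, halve the window."""
    return cwnd / 2.0

def scalable_on_ack(cwnd, a=0.01):
    """Scalable TCP: fixed increase per ACK, so the window grows by a
    fixed fraction per RTT and recovery time after a loss no longer
    depends on the window size."""
    return cwnd + a

def scalable_on_loss(cwnd, b=0.125):
    """Scalable TCP: back off by a small fixed fraction."""
    return cwnd * (1.0 - b)
```

On a high bandwidth-delay-product path, standard TCP needs on the order of cwnd/2 RTTs to recover a halved window, which is exactly the behavior the evaluated alternatives were designed to avoid.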
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly applied to optimization and data analysis in recent years. However, DNA computing algorithms have some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. It aims to run the DNA computing algorithm with parameters adapted towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy compared to the conventional DNA computing algorithm. PMID:23935409
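The QPSO component can be sketched on its own. In QPSO (Sun et al.), particles carry no velocity; each coordinate is redrawn around a local attractor using the mean of the personal bests ("mbest"). The sketch below minimizes a simple sphere function rather than tuning DNA-computing parameters; all names and constants are illustrative:

```python
import math, random

def qpso_minimize(f, dim=2, n_particles=20, iters=200, beta=0.75, seed=7):
    """Quantum-behaved PSO. Each new position is drawn around the local
    attractor p (a random convex combination of personal best and global
    best), with a jump length proportional to the distance to mbest."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi = rng.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]  # local attractor
                u = rng.random()
                step = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = p + step if rng.random() < 0.5 else p - step
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
```

In the paper's setting, the search-space coordinates would be the DNA computing parameters (population size, mutation rates, etc.) and f a measure of identification error.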
Force-Control Algorithm for Surface Sampling
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh
2008-01-01
A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.
A Stemming Algorithm for Latin Text Databases.
ERIC Educational Resources Information Center
Schinke, Robyn; And Others
1996-01-01
Describes the design of a stemming algorithm for searching Latin text databases. The algorithm uses a longest-match approach with some recoding but differs from most stemmers in its use of two separate suffix dictionaries for processing query and database words that enables users to pursue specific searches for single grammatical forms of words.…
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777
A quantum Algorithm for the Moebius Function
NASA Astrophysics Data System (ADS)
Love, Peter
We give an efficient quantum algorithm for the Moebius function from the natural numbers to -1,0,1. The cost of the algorithm is asymptotically quadratic in log n and does not require the computation of the prime factorization of n as an intermediate step.
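For reference, the classical route to μ(n) goes through the prime factorization that the quantum algorithm avoids: μ(n) = 0 if n has a squared prime factor, otherwise (-1)^k with k the number of distinct prime factors. A trial-division sketch:

```python
def moebius(n):
    """Classical Moebius function via trial factorization:
    0 if n is divisible by a square of a prime, else (-1)^k."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:        # squared prime factor
                return 0
            k += 1
        d += 1
    if n > 1:                     # leftover prime factor
        k += 1
    return -1 if k % 2 else 1
```

Trial division costs O(√n) arithmetic operations, i.e. exponential in log n, which is the gap the quadratic-in-log-n quantum algorithm addresses.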
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
A Generalization of Takane's Algorithm for DEDICOM.
ERIC Educational Resources Information Center
Kiers, Henk A. L.; And Others
1990-01-01
An algorithm is described for fitting the DEDICOM model (proposed by R. A. Harshman in 1978) for the analysis of asymmetric data matrices. The method modifies a procedure proposed by Y. Takane (1985) to provide guaranteed monotonic convergence. The algorithm is based on a technique known as majorization. (SLD)
Evolutionary development of path planning algorithms
Hage, M
1998-09-01
This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
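The MM principle — majorize the objective by a surrogate that is easy to minimize, then iterate — is easiest to see on a problem simpler than signomial programming. For min_x Σ|x - a_i|, majorizing each |x - a| at the current iterate by a quadratic yields a closed-form weighted-mean update (a 1-D Weiszfeld-type iteration; this illustrates the MM idea only, not the paper's signomial surrogates):

```python
def mm_median(data, iters=200, eps=1e-9):
    """MM iteration for min_x sum_i |x - a_i| (the sample median).

    Majorize |x - a| at x_k by (x - a)^2 / (2|x_k - a|) + |x_k - a| / 2;
    minimizing the quadratic surrogate gives a weighted mean with
    weights 1 / |x_k - a_i|. Each step decreases the objective."""
    x = sum(data) / len(data)            # start at the mean
    for _ in range(iters):
        w = [1.0 / max(abs(x - a), eps) for a in data]
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)
    return x
```

The surrogate touches the objective at x_k and lies above it elsewhere, so the descent property holds automatically — the same mechanism the paper exploits with the arithmetic-geometric mean and supporting hyperplane inequalities.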
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to become trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and Particle Swarm Optimization (PSO).
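The ADDC objective can be sketched in a few lines. One common formulation (assumed here; the paper may differ in normalization) averages, over clusters, the mean distance of each cluster's members to the cluster centroid:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(cluster):
    n = len(cluster)
    return [sum(v[d] for v in cluster) / n for d in range(len(cluster[0]))]

def addc(clusters):
    """Average Distance to Document Centroid: per-cluster mean distance
    to the centroid, averaged over clusters. Lower = more compact."""
    per_cluster = []
    for c in clusters:
        ctr = centroid(c)
        per_cluster.append(sum(euclid(v, ctr) for v in c) / len(c))
    return sum(per_cluster) / len(per_cluster)
```

In the firefly search, candidate cluster assignments with lower ADDC shine "brighter" and attract other candidate solutions toward them.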
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
The theory of hybrid stochastic algorithms
Kennedy, A.D. (Supercomputer Computations Research Inst.)
1989-11-21
These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
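The Metropolis algorithm on which these lectures build is short enough to state in full. A textbook random-walk sketch for sampling an unnormalized density (names are ours; field-theory applications replace the scalar state with a field configuration and the log density with minus the action):

```python
import math, random

def metropolis(log_density, x0, n_samples, step=1.0, seed=11):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, p(x') / p(x)); on rejection the chain repeats
    its current state, which is what makes detailed balance hold."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + step * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < log_density(prop) - log_density(x):
            x = prop
        samples.append(x)
    return samples

# Unnormalized standard normal: the algorithm never needs the constant.
std_normal_log = lambda x: -0.5 * x * x
```

The Hybrid (Hamiltonian) Monte Carlo method discussed in the lectures replaces the random-walk proposal with a leapfrog molecular-dynamics trajectory, then applies the same accept/reject test.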
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories (EAM) between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
Self-adaptive parameters in genetic algorithms
NASA Astrophysics Data System (ADS)
Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain
2004-04-01
Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Acceleration of iterative image restoration algorithms.
Biggs, D S; Andrews, M
1997-03-10
A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
Smooth transitions between bump rendering algorithms
Becker, B.G.; Max, N.L.
1993-01-04
A method is described for switching smoothly between rendering algorithms as required by the amount of visible surface detail. The result will be more realism with less computation for displaying objects whose surface detail can be described by one or more bump maps. The three rendering algorithms considered are bidirectional reflection distribution function (BRDF), bump-mapping, and displacement-mapping. The bump-mapping has been modified to make it consistent with the other two. For a given viewpoint, one of these algorithms will show a better trade-off between quality, computation time, and aliasing than the other two. Thus, it needs to be determined for any given viewpoint which regions of the object(s) will be rendered with each algorithm. The decision as to which algorithm is appropriate is a function of distance, viewing angle, and the frequency of bumps in the bump map.
Univariate time series forecasting algorithm validation
NASA Astrophysics Data System (ADS)
Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan
2014-12-01
Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating this process by using an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regression and ARIMA) and a recent forecasting process (including data partition, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users lacking in-depth knowledge of the forecasting process.
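The simplest of the techniques listed, Moving Average, can be sketched together with one error measure (a minimal illustration, not the study's JAVA implementation):

```python
def moving_average_forecast(series, window):
    """One-step-ahead forecasts: each forecast is the mean of the previous
    `window` observations, so the first `window` points get no forecast."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def mae(actual, forecast):
    """Mean absolute error between aligned actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(forecast)
```

Validation of the kind described — running the same data through an independent tool and checking that the forecasts and error measures tally — is exactly what the assertions below do in miniature.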
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of the viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms are iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
Automatic ionospheric layers detection: Algorithms analysis
NASA Astrophysics Data System (ADS)
Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James
2016-03-01
Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus frequency scanning. It is performed by a high frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several target types and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder has yet to be carried out. This paper is focused on automatic echo detection algorithms implemented specifically for an ionospheric sounder, and the target's specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.
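One standard family of adaptive threshold detectors (a cell-averaging, CFAR-like scheme; sketched here as an assumption, not necessarily the authors' exact algorithm) thresholds each range bin against the statistics of its neighbors:

```python
import math

def adaptive_threshold_detect(profile, guard=2, train=8, k=3.0):
    """Flag bins whose amplitude exceeds mean + k*std of the surrounding
    training bins; guard bins adjacent to the cell under test are
    excluded so a wide echo does not raise its own threshold."""
    hits = []
    n = len(profile)
    for i in range(n):
        ref = [profile[j] for j in range(max(0, i - guard - train),
                                         min(n, i + guard + train + 1))
               if abs(j - i) > guard]
        if not ref:
            continue
        mu = sum(ref) / len(ref)
        sigma = math.sqrt(sum((r - mu) ** 2 for r in ref) / len(ref))
        if profile[i] > mu + k * sigma:
            hits.append(i)
    return hits
```

Because the threshold tracks the local noise level, the detector adapts automatically to the varying background that ionograms exhibit across range and frequency.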
A parallel algorithm for mesh smoothing
Freitag, L.; Jones, M.; Plassmann, P.
1999-07-01
Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two-and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.
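For orientation, the classical baseline that optimization-based smoothing improves upon is Laplacian smoothing: each free vertex moves to the centroid of its neighbors while boundary vertices stay fixed. A sketch (this is the simple baseline, not the article's nonsmooth-optimization algorithm):

```python
def laplacian_smooth(coords, neighbors, fixed, iters=10):
    """Jacobi-style Laplacian mesh smoothing. coords: list of points;
    neighbors: dict vertex -> list of adjacent vertices; fixed: set of
    vertex indices (e.g. the boundary) that never move."""
    pts = [list(p) for p in coords]
    for _ in range(iters):
        new = [p[:] for p in pts]
        for v, nbrs in neighbors.items():
            if v in fixed or not nbrs:
                continue
            for d in range(len(pts[v])):
                new[v][d] = sum(pts[u][d] for u in nbrs) / len(nbrs)
        pts = new
    return pts
```

The Jacobi-style update (all vertices computed from the previous iterate) is what makes the scheme embarrassingly parallel, at the cost of guaranteeing nothing about element quality — the shortcoming the article's approach addresses.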
Marshall Rosenbluth and the Metropolis algorithm
Gubernatis, J.E.
2005-05-15
The 1953 publication, 'Equation of State Calculations by Very Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they remained unknown to the large simulation physics community that grew from this publication, and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Implementation of the phase gradient algorithm
Wahl, D.E.; Eichel, P.H.; Jakowatz, C.V. Jr.
1990-01-01
The recently introduced Phase Gradient Autofocus (PGA) algorithm is a non-parametric autofocus technique which has been shown to be quite effective for phase correction of Synthetic Aperture Radar (SAR) imagery. This paper will show that this powerful algorithm can be executed at near real-time speeds and also be implemented in a relatively small piece of hardware. A brief review of the PGA will be presented along with an overview of some critical implementation considerations. In addition, a demonstration of the PGA algorithm running on a 7 in. × 10 in. printed circuit board containing a TMS320C30 digital signal processing (DSP) chip will be given. With this system, using only the 20 range bins which contain the brightest points in the image, the algorithm can correct a badly degraded 256 × 256 image in as little as 3 seconds. Using all range bins, the algorithm can correct the image in 9 seconds. 4 refs., 2 figs.
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and the resulting encrypted messages and digital signatures require significant bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where these requirements are prohibitive and public-key cryptography cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
A new algorithm for coding geological terminology
NASA Astrophysics Data System (ADS)
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Fewer than 2% of the codes resulting from this algorithm are erroneous.
Improving the algorithm of temporal relation propagation
NASA Astrophysics Data System (ADS)
Shen, Jifeng; Xu, Dan; Liu, Tongming
2005-03-01
In a military Multi-Agent System (MAS), every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. An efficient temporal reasoning algorithm is therefore vital in a battle MAS model. The core of temporal reasoning is path consistency, so an efficient path consistency algorithm is necessary. In this paper we use the Interval Matrix Calculus (IMC) method to represent temporal relations and optimize the path consistency algorithm by improving the efficiency of temporal relation propagation, building on Allen's path consistency algorithm.
Localization Algorithms of Underwater Wireless Sensor Networks: A Survey
Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng
2012-01-01
In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies, since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752
GOES-R Algorithm Working Group (AWG)
NASA Astrophysics Data System (ADS)
Daniels, Jaime; Goldberg, Mitch; Wolf, Walter; Zhou, Lihang; Lowe, Kenneth
2009-08-01
For the next-generation of GOES-R instruments to meet stated performance requirements, state-of-the-art algorithms will be needed to convert raw instrument data to calibrated radiances and derived geophysical parameters (atmosphere, land, ocean, and space weather). The GOES-R Program Office (GPO) assigned the NOAA/NESDIS Center for Satellite Research and Applications (STAR) the responsibility for technical leadership and management of GOES-R algorithm development and calibration/validation. STAR responded with the creation of the GOES-R Algorithm Working Group (AWG) to manage and coordinate development and calibration/validation activities for GOES-R proxy data and geophysical product algorithms. The AWG consists of 15 application teams that bring expertise in product algorithms spanning atmospheric, land, oceanic, and space weather disciplines. Each AWG team will develop new scientific Level-2 algorithms for GOES-R and will also leverage science developments from other communities (other government agencies, universities and industry), and heritage approaches from current operational GOES and POES product systems. All algorithms will be demonstrated and validated in a scalable operational demonstration environment. All software developed by the AWG will adhere to new standards established within NOAA/NESDIS. The AWG Algorithm Integration Team (AIT) has the responsibility for establishing the system framework, integrating the product software from each team into this framework, enforcing the established software development standards, and preparing system deliveries. The AWG will deliver an Algorithm Theoretical Basis Document (ATBD) for each GOES-R geophysical product as well as Delivered Algorithm Packages (DAPs) to the GPO.
Exploration of new multivariate spectral calibration algorithms.
Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.
2004-03-01
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
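The augmented methods described above are not reproduced here, but the plain classical least squares (CLS) calibrate/predict cycle they extend fits in a few lines. The sketch below is a minimal illustration on synthetic data; all array names and the noise level are our assumptions, and ACLS itself would additionally augment the estimated pure-component matrix with extra spectral shapes (drift, interferents).

```python
import numpy as np

# CLS models a measured spectrum as a linear mixture A = C @ K of
# pure-component spectra K weighted by concentrations C.
rng = np.random.default_rng(0)
K_true = np.abs(rng.normal(size=(2, 50)))        # 2 pure-component spectra
C_train = rng.uniform(0, 1, size=(20, 2))        # known training concentrations
A_train = C_train @ K_true + 0.01 * rng.normal(size=(20, 50))

# Calibration step: estimate the pure spectra from known concentrations.
K_hat, *_ = np.linalg.lstsq(C_train, A_train, rcond=None)

# Prediction step: estimate the concentrations of a new spectrum.
c_new = np.array([0.3, 0.7])
a_new = c_new @ K_true
c_pred, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
```

Both steps are ordinary least-squares solves, which is why CLS-family methods remain attractive when the error structure must be modeled explicitly.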
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were first proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection setup. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
Recent Advancements in Lightning Jump Algorithm Work
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2010-01-01
In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., the High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second-best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise among prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
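The verification statistics quoted above all come from a standard 2×2 contingency table of forecast events versus observed events. A small helper (ours, shown with hypothetical counts rather than the study's data) makes the definitions explicit:

```python
def jump_skill_scores(hits, misses, false_alarms, correct_nulls):
    """Standard 2x2 contingency-table skill scores used to evaluate
    warning algorithms: POD, FAR, CSI, and the Heidke Skill Score."""
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    csi = hits / (hits + misses + false_alarms)        # critical success index
    n = hits + misses + false_alarms + correct_nulls
    # Expected number correct by chance, used by the Heidke Skill Score.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
    hss = (hits + correct_nulls - expected) / (n - expected)
    return pod, far, csi, hss

# Hypothetical counts, for illustration only.
pod, far, csi, hss = jump_skill_scores(80, 20, 40, 60)
```

Note that CSI ignores correct nulls entirely, which is why it is usually lower than POD for the same forecasts.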
Concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals
NASA Astrophysics Data System (ADS)
Rao, Wei
2011-10-01
To overcome the slow convergence rate and large steady-state mean square error of the constant modulus algorithm (CMA), a concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals is proposed, which exploits the fact that the constellation points of high-order QAM signals lie on several different moduli. The algorithm uses the CMA as the basic mode and the multi-modulus algorithm as the second mode, and the two modes operate concurrently. The efficiency of the method is demonstrated by computer simulations in underwater acoustic channels.
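For context, the per-sample tap updates of the two constituent algorithms can be sketched as follows. This shows only the individual CMA(2,2) and MMA update rules under the y = wᴴx convention, not the paper's concurrent switching logic; the function and variable names are our own.

```python
import numpy as np

def cma_step(w, x, r2, mu):
    """One constant modulus algorithm (CMA) tap update for a complex
    equalizer w driven by the input vector x. r2 is the constant-modulus
    target for |y|^2; mu is the step size."""
    y = np.vdot(w, x)                    # equalizer output y = w^H x
    e = y * (np.abs(y) ** 2 - r2)        # CMA(2,2) error term
    return w - mu * e.conjugate() * x    # stochastic gradient step

def mma_step(w, x, r2_real, r2_imag, mu):
    """One multi-modulus algorithm (MMA) update, penalizing the real and
    imaginary parts separately (which also helps recover carrier phase)."""
    y = np.vdot(w, x)
    e = (y.real * (y.real ** 2 - r2_real)
         + 1j * y.imag * (y.imag ** 2 - r2_imag))
    return w - mu * e.conjugate() * x
```

When the output already sits on the target modulus, the error term vanishes and the taps are left unchanged, which is the sense in which these are decision-free (blind) updates.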
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hotspot in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it based on the theory of AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with an Ant Colony Optimization algorithm, it shows good performance and broad application prospects.
ALGORITHM FOR SORTING GROUPED DATA
NASA Technical Reports Server (NTRS)
Evans, J. D.
1994-01-01
It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
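In a language with built-in tuple comparison, the reduce-then-sort idea above collapses to a one-liner, since the record itself can serve as its own composite key. The sketch below (ours, in Python rather than the program's BASIC) sorts multi-element data sets most-significant element first; the polynomial width-n to width-1 encoding is unnecessary here because lexicographic comparison achieves the same ordering.

```python
def sort_grouped(records):
    """Sort multi-element records, comparing element by element with the
    first (most significant) element dominating -- the same ordering the
    report achieves by reducing each record to a single composite number."""
    return sorted(records, key=tuple)

# Hypothetical temperature-reading records: (reading, channel, minute).
readings = [(301.2, 5, 17), (299.8, 9, 4), (301.2, 2, 30)]
ordered = sort_grouped(readings)
```

Ties on the first element fall through to the second, and so on, mirroring the significance ordering described in the report.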
Updated treatment algorithm of pulmonary arterial hypertension.
Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne
2013-12-24
The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users), including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643
Algorithm for dynamic Speckle pattern processing
NASA Astrophysics Data System (ADS)
Cariñe, J.; Guzmán, R.; Torres-Ruiz, F. A.
2016-07-01
In this paper we present a new algorithm for determining surface activity by processing speckle pattern images recorded with a CCD camera. Surface activity can be produced by motility or small displacements, among other causes, and is manifested as a change in the pattern recorded in the camera with reference to a static background pattern. This intensity variation is considered to be a small perturbation compared with the mean intensity. Based on a perturbative method, we obtain an equation from which we can infer information about the dynamic behavior of the surface that generates the speckle pattern. We define an activity index based on our algorithm that can be easily compared with the outcomes of other algorithms. It is shown experimentally that this index evolves in time in the same way as the Inertia Moment method; however, our algorithm is based on direct processing of speckle patterns without the need for other kinds of post-processing (like THSP and co-occurrence matrices), making it a viable real-time method. We also show how this algorithm compares with several others when applied to calibration experiments. From these results we conclude that our algorithm offers qualitative and quantitative advantages over current methods.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as their computation kernel. PMID:24212035
Passive MMW algorithm performance characterization using MACET
NASA Astrophysics Data System (ADS)
Williams, Bradford D.; Watson, John S.; Amphay, Sengvieng A.
1997-06-01
As passive millimeter wave sensor technology matures, algorithms which are tailored to exploit the benefits of this technology are being developed. The expedient development of such algorithms requires an understanding of not only the gross phenomenology, but also the specific quirks and limitations inherent in the sensors and the data gathering methodology specific to this regime. This level of understanding is approached as the technology matures and increasing amounts of data become available for analysis. The Armament Directorate of Wright Laboratory, WL/MN, has spearheaded the advancement of passive millimeter-wave technology in algorithm development tools and modeling capability as well as sensor development. A passive MMW channel is available within WL/MN's popular multi-channel modeling program Irma, and a sample passive MMW algorithm is incorporated into the Modular Algorithm Concept Evaluation Tool, an algorithm development and evaluation system. The Millimeter Wave Analysis of Passive Signatures system provides excellent data collection capability in the 35, 60, and 95 GHz MMW bands. This paper exploits these assets for the study of the PMMW signature of a High Mobility Multi-Purpose Wheeled Vehicle in the three bands mentioned, and the effect of camouflage upon this signature and autonomous target recognition algorithm performance.
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
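The report's two-stage semidefinite-programming solution is beyond a short sketch, but the underlying allocation problem can be illustrated with an unconstrained least-squares solve followed by clipping to actuator limits. The allocation matrix, limits, and clip step below are our illustrative stand-ins, not the report's method, and clipping after solving is generally suboptimal when limits are active.

```python
import numpy as np

def allocate(D, cmd, lo, hi):
    """Crude actuator-allocation sketch: find the actuator vector u that
    minimizes ||D u - cmd|| by least squares, then clip u to the actuator
    limits [lo, hi]. (The report instead enforces the limits inside the
    optimization via semidefinite programming, which is optimal.)"""
    u, *_ = np.linalg.lstsq(D, cmd, rcond=None)
    return np.clip(u, lo, hi)

# Hypothetical 3-axis torque command with per-actuator limits of +/-2.5.
D = np.eye(3)                                 # trivial one-to-one allocation
u = allocate(D, np.array([1.0, 2.0, 3.0]), -2.5, 2.5)
```

With a non-square D (more actuators than commanded axes), the least-squares step also picks the minimum-norm allocation, which loosely corresponds to the fuel-minimization property the report claims for its exact method.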
Image segmentation using an improved differential algorithm
NASA Astrophysics Data System (ADS)
Gao, Hao; Shi, Yujiao; Wu, Dongmei
2014-10-01
Among all the existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive search strategy. As a population-based optimization method, the differential evolution (DE) algorithm maintains a population of potential solutions and has shown considerable success in solving complex optimization problems within a reasonable time limit. Applying it to a segmentation algorithm is therefore a good choice owing to its fast computation. In this paper, we first propose a new differential evolution algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu's method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm obtains more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
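For reference, the single-threshold Otsu criterion that the paper's DE search generalizes to multiple thresholds can be written directly from the histogram. The synthetic bimodal "image" below is our own toy example, not data from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Single-threshold Otsu's method: choose the gray level maximizing the
    between-class variance sigma_b^2 = (mu_T*omega - mu)^2 / (omega*(1-omega))
    computed from cumulative histogram moments."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability mass
    mu = np.cumsum(p * np.arange(256))         # class-0 first moment
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))          # NaNs mark empty classes

# Bimodal toy image: dark background near 40, bright object near 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 8, 5000), rng.normal(200, 8, 5000)])
img = np.clip(img, 0, 255)
t = otsu_threshold(img)
```

An exhaustive multilevel search would evaluate this criterion over all threshold combinations, which is exactly the exponential cost the DE search is meant to avoid.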
Fast ordering algorithm for exact histogram specification.
Nikolova, Mila; Steidl, Gabriele
2014-12-01
This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed-point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an ℓ1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
A new algorithm for attitude-independent magnetometer calibration
NASA Technical Reports Server (NTRS)
Alonso, Roberto; Shuster, Malcolm D.
1994-01-01
A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
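The attitude-independence rests on the fact that rotation preserves vector norms: for the true bias b, |B_meas − b|² must equal the squared norm of the reference field, so treating |b|² as an extra unknown makes the problem linear. The sketch below shows only that basic linear estimate on noise-free synthetic data, not the paper's statistically correct treatment; all names and the test data are our own.

```python
import numpy as np

def magnetometer_bias(B_meas, B_ref):
    """Attitude-independent bias estimate. Since attitude rotations preserve
    norms, |B_meas_k - b|^2 = |B_ref_k|^2, i.e.
        2 B_meas_k . b - |b|^2 = |B_meas_k|^2 - |B_ref_k|^2.
    Absorbing |b|^2 into a fourth unknown s gives a linear least-squares
    problem. Only the norms of B_ref are used."""
    y = (B_meas ** 2).sum(axis=1) - (B_ref ** 2).sum(axis=1)
    A = np.hstack([2 * B_meas, -np.ones((len(B_meas), 1))])
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]                     # bias vector; sol[3] estimates |b|^2

# Noise-free synthetic check: measurements are the true field plus a bias.
rng = np.random.default_rng(2)
b_true = np.array([0.3, -0.2, 0.5])
B_true = rng.normal(size=(50, 3))
b_est = magnetometer_bias(B_true + b_true, B_true)
```

With measurement noise, this naive linear estimate is biased, which is precisely the statistical deficiency the paper's algorithm corrects without discarding data.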
Quantum adiabatic algorithm for factorization and its experimental implementation.
Peng, Xinhua; Liao, Zeyang; Xu, Nanyang; Qin, Gan; Zhou, Xianyi; Suter, Dieter; Du, Jiangfeng
2008-11-28
We propose an adiabatic quantum algorithm capable of factorizing numbers, using fewer qubits than Shor's algorithm. We implement the algorithm in a NMR quantum information processor and experimentally factorize the number 21. In the range that our classical computer could simulate, the quantum adiabatic algorithm works well, providing evidence that the running time of this algorithm scales polynomially with the problem size. PMID:19113467
SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.
1989-01-01
The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.
Phase unwrapping algorithms in laser propagation simulation
NASA Astrophysics Data System (ADS)
Du, Rui; Yang, Lijia
2013-08-01
Simulations of laser propagation in the atmosphere usually need to deal with beams in strong turbulence. The Fourier-transform step used to simulate the transmission may lose part of the information, leaving the phase of the beam, stored as a 2-D array, wrapped modulo 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those in radar or 3-D surface reconstruction, but not identical. In this article, three classic unwrapping algorithms, the block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity (FMD) algorithms, are tried in wave-front reconstruction simulation. Each algorithm is tested 100 times under six identical conditions, covering low (64×64), medium (128×128), and high (256×256) resolution phase arrays, with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable at low resolution without noise. MCUT is more accurate, though it slows down as the array resolution increases, and it is sensitive to noise, which results in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm based on an Activity-On-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds a minimal discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as FMD's in the tests.
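None of the surveyed 2-D algorithms reduces to a few lines, but the 1-D integration step they all build on (Itoh's method) does: integrate the phase differences after wrapping each difference into the principal interval. The row-wise sketch below is our own illustration of that building block, not any of the article's algorithms; genuine 2-D methods add path or consistency logic on top of it.

```python
import numpy as np

def unwrap_rows(wrapped):
    """Itoh's 1-D unwrapping applied row-wise: wrap each neighboring phase
    difference into (-pi, pi], then cumulatively sum the differences.
    Valid only when the true phase changes by less than pi per sample."""
    d = np.diff(wrapped, axis=1)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi   # principal-value diffs
    out = np.empty_like(wrapped)
    out[:, 0] = wrapped[:, 0]
    out[:, 1:] = wrapped[:, :1] + np.cumsum(d_wrapped, axis=1)
    return out

# Smooth phase ramp wrapped into (-pi, pi], then recovered exactly.
x = np.linspace(0, 6 * np.pi, 200)[None, :]
wrapped = np.angle(np.exp(1j * x))
recovered = unwrap_rows(wrapped)
```

The "less than π per sample" condition is exactly what strong turbulence and noise violate, which is why the 2-D algorithms compared in the article are needed in practice.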
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, and also an integrated approach in which promoter sequences of coregulated genes and phylogenetic footprinting are used together. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task, because tools are designed based on algorithms and motif models that are diverse and complex, and because our understanding of the underlying biology of gene regulation is still incomplete.
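The "statistically overrepresented motif" idea behind the earlier single-genome algorithms can be reduced to its simplest form: count every exact k-mer across a set of promoter sequences and report the most frequent one. Real motif finders score degenerate motifs (position weight matrices) against a background model; this is only a sketch of the counting core, with made-up example sequences:

```python
from collections import Counter

def most_frequent_kmer(seqs, k):
    """Count every exact k-mer across all sequences; return the most frequent.
    A stand-in for overrepresentation scoring: under a uniform background the
    expected count is the same for every k-mer, so frequency ranks them."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(1)[0][0]

# Three hypothetical promoter fragments sharing a planted TATAAT box.
promoters = ["GGTATAATCC", "ATTATAATGG", "CCTATAATAA"]
motif = most_frequent_kmer(promoters, k=6)
```

Everything the survey discusses beyond this (phylogenetic footprinting, inexact matches, significance testing) replaces either the counting step or the uniform-background assumption with something more realistic.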
Algorithmic Perspectives on Problem Formulations in MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.
Comparative Study of Two Automatic Registration Algorithms
NASA Astrophysics Data System (ADS)
Grant, D.; Bethel, J.; Crawford, M.
2013-10-01
The Iterative Closest Point (ICP) algorithm is prevalent for the automatic fine registration of overlapping pairs of terrestrial laser scanning (TLS) data. This method, along with its vast number of variants, obtains the least squares parameters necessary to align the TLS data by minimizing some distance metric between the scans. The ICP algorithm uses a "model-data" concept in which the scans receive differential treatment in the registration process depending on whether they were assigned to be the "model" or the "data". For each of the "data" points, corresponding points from the "model" are sought. Another concept, "symmetric correspondence", was proposed in the Point-to-Plane (P2P) algorithm, where both scans are treated equally in the registration process. The P2P method establishes correspondences on both scans and minimizes the point-to-plane distances between the scans while simultaneously considering the stochastic properties of both scans. This paper studies both the ICP and P2P algorithms in terms of the consistency of their registration parameters for pairs of TLS data. The question being investigated is: if scan A is registered to scan B, will the parameters be the same as when scan B is registered to scan A? Experiments were conducted with eight pairs of real TLS data, which were registered by the two algorithms in the forward (scan A to scan B) and backward (scan B to scan A) modes, and the results were compared. The P2P algorithm was found to be more consistent than the ICP algorithm. The differences in registration accuracy between the forward and backward modes were negligible when using the P2P algorithm (mean difference of 0.03 mm), whereas the ICP algorithm had a mean difference of 4.26 mm. Each scan was also transformed by the forward and backward parameters of the two algorithms and the misclosure computed. The mean misclosure for the P2P algorithm was 0.80 mm, while that for the ICP algorithm was 5.39 mm. The conclusion from this study is that the P2P algorithm registers pairs of TLS data more consistently than the ICP algorithm.
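The forward/backward misclosure test in the abstract is easy to state in code. The sketch below does not implement ICP's correspondence search; it uses the closed-form least-squares rigid fit (Kabsch/Procrustes) that ICP iterates on, with correspondences assumed known and noiseless synthetic data, so the forward and backward transforms should be exact inverses and the misclosure near zero:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    (Kabsch/Procrustes), assuming point correspondences are known."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))                     # synthetic "scan A"
theta = 0.3                                          # rotate about z, then shift
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ R_true.T + np.array([1.0, -2.0, 0.5])        # synthetic "scan B"

R_f, t_f = rigid_fit(A, B)                           # forward:  A -> B
R_b, t_b = rigid_fit(B, A)                           # backward: B -> A
# Misclosure: forward then backward should return the original points.
misclosure = np.abs((A @ R_f.T + t_f) @ R_b.T + t_b - A).max()
```

In the paper's setting the asymmetry enters through ICP's model/data correspondence search and real measurement noise, which is why the misclosure there is millimetres rather than numerically zero.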
Fast decoding algorithms for coded aperture systems
NASA Astrophysics Data System (ADS)
Byard, Kevin
2014-08-01
Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.
Algorithms for optimal dyadic decision trees
Hush, Don; Porter, Reid
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
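The abstract does not spell out the tree-building algorithm, but the core idea of an optimal dyadic tree at a fixed regularization parameter can be sketched: at each dyadic cell, recursively compare the cost of stopping (majority-vote leaf plus a per-leaf penalty λ) against the cost of splitting the cell at its midpoint. A minimal 1-D version, with made-up data, ignoring the paper's adaptive grid over λ and its fast data structures:

```python
def optimal_dyadic_cost(xs, ys, lo=0.0, hi=1.0, lam=0.1, depth=0, max_depth=4):
    """Cost of the optimal dyadic tree on [lo, hi) for binary labels ys.
    Leaf cost = misclassifications under majority vote + lam per leaf;
    a cell may either stay a leaf or split at its dyadic midpoint."""
    pts = [(x, y) for x, y in zip(xs, ys) if lo <= x < hi]
    ones = sum(y for _, y in pts)
    leaf_cost = min(ones, len(pts) - ones) + lam
    if depth == max_depth or not pts:
        return leaf_cost
    mid = (lo + hi) / 2
    split_cost = (optimal_dyadic_cost(xs, ys, lo, mid, lam, depth + 1, max_depth)
                  + optimal_dyadic_cost(xs, ys, mid, hi, lam, depth + 1, max_depth))
    return min(leaf_cost, split_cost)

# Labels change at x = 0.5, so one dyadic split suffices: two pure leaves.
cost = optimal_dyadic_cost([0.1, 0.2, 0.3, 0.6, 0.7, 0.9],
                           [0, 0, 0, 1, 1, 1], lam=0.1)
```

The paper's contribution sits on top of this recursion: choosing the grid of λ values adaptively so no optimal tree size is missed, and reusing work across λ values instead of re-running the recursion from scratch.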
Quantum hyperparallel algorithm for matrix multiplication
NASA Astrophysics Data System (ADS)
Zhang, Xin-Ding; Zhang, Xiao-Ming; Xue, Zheng-Yuan
2016-04-01
Hyperentangled states, entangled states with more than one degree of freedom, are considered a promising resource in quantum computation. Here we present a hyperparallel quantum algorithm for matrix multiplication with time complexity O(N²), which is better than the best known classical algorithm. In our scheme, an N-dimensional vector is mapped to the state of a single source, which is separated into N paths. With the assistance of hyperentangled states, the inner product of two vectors can be calculated with a time complexity independent of the dimension N. Our algorithm shows that hyperparallel quantum computation may provide a useful tool in quantum machine learning and “big data” analysis.