Sample records for optimization code glo

  1. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL & LOCAL. GLO is designed for controlling, and for easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
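The iterative loop described above (optimizer proposes parameters, GLO-PUT writes the input file, the application runs, GLO-GET extracts the objective) can be sketched in miniature. The Python below is a hypothetical stand-in, not GLO itself: the "application" is a toy linear model and the optimizer is a naive annealed random search rather than the GLOBAL/LOCAL modules.

```python
import random

# Toy stand-in for the scientific application GLO drives: a hypothetical
# model y = a*x + b evaluated at fixed sample points. In the real package,
# GLO-PUT would write (a, b) into the application's input file and GLO-GET
# would parse the results back out of its output.
def run_application(params):
    a, b = params
    return [a * x + b for x in (0.0, 1.0, 2.0)]

TARGET = [1.0, 3.0, 5.0]   # the "desired result" GLO-GET compares against

def objective(params):
    # Sum of squared deviations between application output and target.
    return sum((y - t) ** 2 for y, t in zip(run_application(params), TARGET))

def glo_style_loop(start, iterations=2000, step=0.5, seed=0):
    # Naive annealed random search standing in for the GLOBAL/LOCAL modules.
    rng = random.Random(seed)
    best, best_f = list(start), objective(start)
    for _ in range(iterations):
        cand = [p + rng.uniform(-step, step) for p in best]
        f = objective(cand)
        if f < best_f:             # keep the "best" parameter set so far
            best, best_f = cand, f
        step *= 0.995              # gradually tighten the search
    return best, best_f

params, residual = glo_style_loop([0.0, 0.0])   # optimum is a=2, b=1
```

In the real package the two helper steps would do file I/O against the scientific application's input and output files instead of calling a Python function.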

  2. GloVe C++ v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Jonathan A.

    2015-12-02

    This code implements the GloVe algorithm for learning word vectors from a text corpus. It uses a modern C++ approach. This algorithm is described in the open literature in the referenced paper by Pennington, Jeffrey, Richard Socher, and Christopher D. Manning.
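For orientation, the GloVe objective from the cited paper (Pennington et al.) minimizes a weighted least-squares loss over word/context vector pairs, f(X_ij)(w_i·w̃_j + b_i + b̃_j − log X_ij)². The sketch below is a minimal Python illustration on a toy co-occurrence table, not the referenced C++ implementation (which uses AdaGrad and builds the counts by scanning a real corpus).

```python
import math
import random

# Toy co-occurrence counts X[(i, j)]; a real implementation builds these
# by scanning a corpus with a sliding context window.
X = {(0, 1): 10.0, (1, 0): 10.0, (0, 2): 3.0,
     (2, 0): 3.0, (1, 2): 1.0, (2, 1): 1.0}
V, D = 3, 4                       # vocabulary size, embedding dimension
X_MAX, ALPHA, LR = 100.0, 0.75, 0.05

def weight(x):
    # GloVe weighting function f(x): damps rare pairs, caps frequent ones.
    return (x / X_MAX) ** ALPHA if x < X_MAX else 1.0

rng = random.Random(0)
w  = [[rng.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(V)]  # word vectors
wt = [[rng.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(V)]  # context vectors
b, bt = [0.0] * V, [0.0] * V                                         # biases

def loss():
    total = 0.0
    for (i, j), x in X.items():
        diff = sum(a * c for a, c in zip(w[i], wt[j])) + b[i] + bt[j] - math.log(x)
        total += weight(x) * diff * diff
    return total

for _ in range(500):              # plain SGD for brevity; GloVe uses AdaGrad
    for (i, j), x in X.items():
        diff = sum(a * c for a, c in zip(w[i], wt[j])) + b[i] + bt[j] - math.log(x)
        g = 2.0 * weight(x) * diff
        for d in range(D):
            w[i][d], wt[j][d] = w[i][d] - LR * g * wt[j][d], wt[j][d] - LR * g * w[i][d]
        b[i] -= LR * g
        bt[j] -= LR * g
```

After training, the weighted loss is driven close to zero because the toy problem is far smaller than the embedding capacity.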

  3. Validation of in vitro assays in three-dimensional human dermal constructs.

    PubMed

    Idrees, Ayesha; Chiono, Valeria; Ciardelli, Gianluca; Shah, Siegfried; Viebahn, Richard; Zhang, Xiang; Salber, Jochen

    2018-05-01

    Three-dimensional cell culture systems are urgently needed for cytocompatibility testing of biomaterials. This work aimed at the development of three-dimensional in vitro dermal skin models and their optimization for cytocompatibility evaluation. Initially, a "murine in vitro dermal construct" based on L929 cells was generated, leading to the development of a "human in vitro dermal construct" consisting of normal human dermal fibroblasts in rat-tail tendon collagen type I. To assess cell viability, different assays (CellTiter-Blue®, RealTime-Glo™ MT, and CellTiter-Glo®; Promega) were evaluated to identify the assay best suited to the respective cell type and three-dimensional system. Z-stack imaging (Live/Dead and Phalloidin/DAPI, Promokine) was performed to visualize normal human dermal fibroblasts inside the matrix, revealing filopodia-like morphology and a uniform distribution of the cells in the matrix. CellTiter-Glo was found to be the optimal cell viability assay among those analyzed. The CellTiter-Blue reagent affected the cell morphology of normal human dermal fibroblasts (unlike L929), suggesting interference with cell biological activity and resulting in less reliable viability data. On the other hand, RealTime-Glo provided a linear signal only at very low cell density, which made this assay unsuitable for this system. CellTiter-Glo was adapted to the three-dimensional dermal construct by optimizing the "shaking time" to enhance reagent penetration and maximize adenosine triphosphate release; shaking for 60 min gave a 2.4-fold higher viability value than shaking for 5 min. In addition, the viability results showed that cells were viable inside the matrix. This model could be further advanced with more skin layers to make a full-thickness model.

  4. Direct conversion of starch to ethanol using recombinant Saccharomyces cerevisiae containing a glucoamylase gene

    NASA Astrophysics Data System (ADS)

    Purkan, P.; Baktir, A.; Puspaningsih, N. N. T.; Ni'mah, M.

    2017-09-01

    Saccharomyces cerevisiae is known for its high fermentative capacity, high ethanol yield, and high ethanol tolerance. The yeast is, however, unable to convert starch (a relatively inexpensive substrate) into biofuel ethanol. A glucoamylase gene was therefore inserted into S. cerevisiae to extend the yeast's role in ethanol fermentation from starch. Transformation of S. cerevisiae with the recombinant plasmid yEP-GLO1, carrying the glucoamylase-encoding gene (GLO1), produced a recombinant yeast able to degrade starch. The bioconversion of starch into ethanol by the recombinant S. cerevisiae [yEP-GLO1] was then optimized. The starch concentration that could be digested by the recombinant yeast was 10% (w/v). Bioconversion of 10% (w/v) starch using recombinant S. cerevisiae BY5207 [yEP-GLO1] yielded 20% (v/v) ethanol by alcoholmeter and 19.5% (v/v) by gas chromatography, whereas recombinant S. cerevisiae AS3324 [yEP-GLO1] yielded 17% (v/v) by alcoholmeter and 17.5% (v/v) by gas chromatography. For both recombinant strains, BY5207 and AS3324, the highest ethanol yield was obtained at 144 hours of fermentation at pH 5.

  5. GLoBES: General Long Baseline Experiment Simulator

    NASA Astrophysics Data System (ADS)

    Huber, Patrick; Kopp, Joachim; Lindner, Manfred; Rolinec, Mark; Winter, Walter

    2007-09-01

    GLoBES (General Long Baseline Experiment Simulator) is a flexible software package for simulating long-baseline and reactor neutrino oscillation experiments. On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which makes it possible to describe most classes of long-baseline experiments at an abstract level. On the other hand, it provides a C library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ²-values. Currently, GLoBES is available for GNU/Linux; since the source code is included, a port to other operating systems is in principle possible. GLoBES is an open-source code that has previously been described in Computer Physics Communications 167 (2005) 195 and in Ref. [7]. The source code and a comprehensive user manual for GLoBES v3.0.8 are now available from the CPC Program Library, as described in the program summary below. The home of GLoBES is http://www.mpi-hd.mpg.de/~globes/.
    Program summary
    Program title: GLoBES version 3.0.8
    Catalogue identifier: ADZI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 145 295
    No. of bytes in distributed program, including test data, etc.: 1 811 892
    Distribution format: tar.gz
    Programming language: C
    Computer: GLoBES builds and installs on 32-bit and 64-bit Linux systems
    Operating system: 32-bit or 64-bit Linux
    RAM: typically a few MB
    Classification: 11.1, 11.7, 11.10
    External routines: GSL, the GNU Scientific Library, www.gnu.org/software/gsl/
    Nature of problem: Neutrino oscillations are now established as the leading flavor transition mechanism for neutrinos.
    In a long history of many experiments, see, e.g., [1], two oscillation frequencies have been identified: the fast atmospheric and the slow solar oscillations, which are driven by the respective mass-squared differences. In addition, there could be interference effects between these two oscillations, provided that the coupling given by the small mixing angle θ is large enough. Such interference effects include, for example, leptonic CP violation. In order to test the unknown oscillation parameters, i.e. the mixing angle θ, the leptonic CP phase, and the neutrino mass hierarchy, new long-baseline and reactor experiments are proposed. These experiments send an artificial neutrino beam to a detector, or detect the neutrinos produced by a nuclear fission reactor. However, the multiple solutions intrinsic to neutrino oscillation probabilities [2-5] affect these measurements, so optimization strategies are required which maximally exploit the complementarity between experiments. A modern, complete experiment simulation and analysis tool therefore needs not only a highly accurate beam and detector simulation, but also powerful means to analyze correlations and degeneracies, especially for the combination of several experiments. The GLoBES software package is such a tool [6,7].
    Solution method: GLoBES is a flexible software tool to simulate and analyze neutrino oscillation long-baseline and reactor experiments using a complete three-flavor description. On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which makes it possible to describe most classes of long-baseline and reactor experiments at an abstract level. On the other hand, it provides a C library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ²-values. In addition, it provides a binary program to test experiment definitions very quickly, before they are used by the application software.
    Restrictions: Currently restricted to discrete sets of sources and detectors; for example, the simulation of an atmospheric neutrino flux is not supported.
    Unusual features: Clear separation between the experiment description and the simulation software.
    Additional comments: For information on the latest version of the software and user manual, please check the authors' web site, http://www.mpi-hd.mpg.de/~globes
    Running time: The examples included in the distribution take only a few minutes to complete. More sophisticated problems can take up to several days.
    References:
    [1] V. Barger, D. Marfatia, K. Whisnant, Int. J. Mod. Phys. E 12 (2003) 569, hep-ph/0308123, and references therein.
    [2] G.L. Fogli, E. Lisi, Phys. Rev. D 54 (1996) 3667, hep-ph/9604415.
    [3] J. Burguet-Castell, M.B. Gavela, J.J. Gomez-Cadenas, P. Hernandez, O. Mena, Nucl. Phys. B 608 (2001) 301, hep-ph/0103258.
    [4] H. Minakata, H. Nunokawa, JHEP 0110 (2001) 001, hep-ph/0108085.
    [5] V. Barger, D. Marfatia, K. Whisnant, Phys. Rev. D 65 (2002) 073023, hep-ph/0112119.
    [6] P. Huber, M. Lindner, W. Winter, Comput. Phys. Commun. 167 (2005) 195.
    [7] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Commun. 177 (2007) 432.
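The core quantity GLoBES evaluates, an oscillation probability, can be illustrated with the standard two-flavor vacuum formula P = sin²(2θ)·sin²(1.267·Δm²[eV²]·L[km]/E[GeV]). The Python sketch below is a generic illustration of that formula, not the GLoBES C API, and the parameter values in the usage line are merely representative.

```python
import math

def p_osc(theta, dm2_ev2, l_km, e_gev):
    """Two-flavor vacuum appearance probability.

    P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (1.267 collects the hbar, c factors).
    """
    return math.sin(2.0 * theta) ** 2 * math.sin(1.267 * dm2_ev2 * l_km / e_gev) ** 2

# Representative atmospheric-sector values: maximal mixing, dm2 = 2.5e-3 eV^2;
# L/E chosen near the first oscillation maximum.
p_max = p_osc(math.pi / 4, 2.5e-3, 495.9, 1.0)
```

A full three-flavor treatment with matter effects, as GLoBES performs internally, replaces this closed form with a numerical evolution of the flavor state.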

  6. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    NASA Astrophysics Data System (ADS)

    Murphy, M. J.; Simpson, R. L.; Urtiew, P. A.; Souers, P. C.; Garcia, F.; Garza, R. G.

    1996-05-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition+two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model.

  7. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1995-08-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model.

  8. USER'S GUIDE FOR GLOED VERSION 1.0 - THE GLOBAL EMISSIONS DATABASE

    EPA Science Inventory

    The document is a user's guide for the EPA-developed, powerful software package, Global Emissions Database (GloED). GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on a country-specific basis. Data can be selected from dat...

  9. High throughput RNAi assay optimization using adherent cell cytometry.

    PubMed

    Nabzdyk, Christoph S; Chun, Maggie; Pradhan, Leena; Logerfo, Frank W

    2011-04-25

    siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). AoSMC were seeded at a density of 3000-8000 cells/well of a 96-well plate. 24 hours later, AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM) or non-targeting labeled siRNA, siGLO Red (5 or 50 nM), using no transfection reagent, HiPerfect, or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green was used. For data analysis, an adherent cell cytometer, Celigo®, was used. Data were normalized to the transfection-reagent-alone group and expressed as red pixel count/cell. After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent than HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 +/-1.04 vs. 0.70 +/-0.26 vs. 0.15 +/-0.13 red pixel/cell) and 50 nM siGLO Red (6.49 +/-1.81 vs. 2.52 +/-0.67 vs. 0.34 +/-0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS-targeting siRNA in AoSMCs. This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high-throughput screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs.

  10. High throughput RNAi assay optimization using adherent cell cytometry

    PubMed Central

    2011-01-01

    Background siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). Methods AoSMC were seeded at a density of 3000-8000 cells/well of a 96-well plate. 24 hours later, AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM) or non-targeting labeled siRNA, siGLO Red (5 or 50 nM), using no transfection reagent, HiPerfect, or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green was used. For data analysis, an adherent cell cytometer, Celigo®, was used. Data were normalized to the transfection-reagent-alone group and expressed as red pixel count/cell. Results After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent than HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 +/-1.04 vs. 0.70 +/-0.26 vs. 0.15 +/-0.13 red pixel/cell) and 50 nM siGLO Red (6.49 +/-1.81 vs. 2.52 +/-0.67 vs. 0.34 +/-0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS-targeting siRNA in AoSMCs. Conclusion This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high-throughput screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs. PMID:21518450
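The "red pixel count/cell" readout described in the Methods is simple arithmetic that can be made explicit. The well readouts and the background-subtraction step below are hypothetical illustrations; the paper reports only normalized values, not the raw counts or the exact normalization formula.

```python
# Hypothetical per-well readouts from the adherent cell cytometer:
# total red-fluorescence pixel count and nuclei-stain cell count.
wells = {
    "reagent_alone":      {"red_pixels": 1200,  "cells": 8000},
    "siGLO_50nM_RNAiMax": {"red_pixels": 55000, "cells": 8500},
}

def red_pixels_per_cell(well):
    # Red fluorescence normalized to the cell count of the same well.
    return well["red_pixels"] / well["cells"]

def normalized_signal(sample, control):
    # Normalize to the transfection-reagent-alone group; background
    # subtraction is an assumption about the exact arithmetic used.
    return red_pixels_per_cell(sample) - red_pixels_per_cell(control)

signal = normalized_signal(wells["siGLO_50nM_RNAiMax"], wells["reagent_alone"])
```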

  11. The Drosophila hnRNP F/H Homolog Glorund Uses Two Distinct RNA-Binding Modes to Diversify Target Recognition.

    PubMed

    Tamayo, Joel V; Teramoto, Takamasa; Chatterjee, Seema; Hall, Traci M Tanaka; Gavis, Elizabeth R

    2017-04-04

    The Drosophila hnRNP F/H homolog, Glorund (Glo), regulates nanos mRNA translation by interacting with a structured UA-rich motif in the nanos 3' untranslated region. Glo regulates additional RNAs, however, and mammalian homologs bind G-tract sequences to regulate alternative splicing, suggesting that Glo also recognizes G-tract RNA. To gain insight into how Glo recognizes both structured UA-rich and G-tract RNAs, we used mutational analysis guided by crystal structures of Glo's RNA-binding domains and identified two discrete RNA-binding surfaces that allow Glo to recognize both RNA motifs. By engineering Glo variants that favor a single RNA-binding mode, we show that a subset of Glo's functions in vivo is mediated solely by the G-tract binding mode, whereas regulation of nanos requires both recognition modes. Our findings suggest a molecular mechanism for the evolution of dual RNA motif recognition in Glo that may be applied to understanding the functional diversity of other RNA-binding proteins. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Multiple roles of glyoxalase 1-mediated suppression of methylglyoxal glycation in cancer biology-Involvement in tumour suppression, tumour growth, multidrug resistance and target for chemotherapy.

    PubMed

    Rabbani, Naila; Xue, Mingzhan; Weickert, Martin O; Thornalley, Paul J

    2018-04-01

    Glyoxalase 1 (Glo1) is part of the glyoxalase system in the cytoplasm of all human cells. It catalyses the glutathione-dependent removal of the endogenous reactive dicarbonyl metabolite, methylglyoxal (MG). MG is formed mainly as a side product of anaerobic glycolysis. It modifies protein and DNA to form mainly hydroimidazolone MG-H1 and imidazopurinone MGdG adducts, respectively. Abnormal accumulation of MG, dicarbonyl stress, increases adduct levels which may induce apoptosis and replication catastrophe. In the non-malignant state, Glo1 is a tumour suppressor protein and small molecule inducers of Glo1 expression may find use in cancer prevention. Increased Glo1 expression is permissive for growth of tumours with high glycolytic activity and is thereby a biomarker of tumour growth. High Glo1 expression is a cause of multi-drug resistance. It is produced by over-activation of the Nrf2 pathway and GLO1 amplification. Glo1 inhibitors are antitumour agents, inducing apoptosis and necrosis, and anoikis. Tumour stem cells and tumours with high flux of MG formation and Glo1 expression are sensitive to Glo1 inhibitor therapy. It is likely that MG-induced cell death contributes to the mechanism of action of current antitumour agents. Common refractory tumours have high prevalence of Glo1 overexpression for which Glo1 inhibitors may improve therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Glyoxalase 1 copy number variation in patients with well differentiated gastro-entero-pancreatic neuroendocrine tumours (GEP-NET)

    PubMed Central

    Xue, Mingzhan; Shafie, Alaa; Qaiser, Talha; Rajpoot, Nasir M.; Kaltsas, Gregory; James, Sean; Gopalakrishnan, Kishore; Fisk, Adrian; Dimitriadis, Georgios K.; Grammatopoulos, Dimitris K.; Rabbani, Naila; Thornalley, Paul J.; Weickert, Martin O.

    2017-01-01

    Background The glyoxalase-1 gene (GLO1) is a hotspot for copy-number variation (CNV) in human genomes. Increased GLO1 copy-number is associated with multidrug resistance in tumour chemotherapy, but the prevalence of GLO1 CNV in gastro-entero-pancreatic neuroendocrine tumours (GEP-NET) is unknown. Methods GLO1 copy-number variation was measured in 39 patients with GEP-NET (midgut NET, n = 25; pancreatic NET, n = 14) after curative or debulking surgical treatment. Primary tumour tissue, surrounding healthy tissue and, where applicable, additional metastatic tumour tissue were analysed using real-time qPCR. Progression and survival following surgical treatment were monitored over 4.2 ± 0.5 years. Results In the pooled GEP-NET cohort, GLO1 copy-number in healthy tissue was 2.0 in all samples but was significantly increased in primary tumour tissue in 43% of patients with pancreatic NET and in 72% of patients with midgut NET, driven mainly by significantly higher GLO1 copy-number in midgut NET. In tissue from additional metastasis resections (18 midgut NET and one pancreatic NET), GLO1 copy-number was also increased compared with healthy tissue, but was not significantly different from primary tumour tissue. During 3-5 years of follow-up, 8 patients died and 16 patients showed radiological progression. In midgut NET, a high GLO1 copy-number was associated with earlier progression. In NETs with increased GLO1 copy-number, Glo1 protein expression was increased compared with non-malignant tissue. Conclusions GLO1 copy-number was increased in a large percentage of patients with GEP-NET and correlated positively with increased Glo1 protein in tumour tissue. Analysis of GLO1 copy-number variation, particularly in patients with midgut NET, could provide a novel prognostic marker for tumour progression. PMID:29100361
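Copy number from real-time qPCR is typically derived with the comparative-Ct (2^-ΔΔCt) method, scaled so that the healthy diploid calibrator corresponds to two copies. The sketch below shows that arithmetic; it is a generic illustration with invented Ct values, not a reproduction of the paper's quantification protocol.

```python
def glo1_copy_number(ct_glo1_sample, ct_ref_sample,
                     ct_glo1_cal, ct_ref_cal, calibrator_copies=2.0):
    # Comparative-Ct relative quantification: each cycle of earlier target
    # amplification corresponds to a doubling of template.
    ddct = (ct_glo1_sample - ct_ref_sample) - (ct_glo1_cal - ct_ref_cal)
    return calibrator_copies * 2.0 ** (-ddct)

# GLO1 amplifying one cycle earlier than the reference gene in tumour tissue,
# with no shift in the healthy-tissue calibrator, implies four copies.
cn_tumour  = glo1_copy_number(24.0, 25.0, 25.0, 25.0)
cn_healthy = glo1_copy_number(25.0, 25.0, 25.0, 25.0)
```

This assumes comparable amplification efficiencies for target and reference, the standard caveat of the comparative-Ct method.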

  14. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1996-05-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model. © 1996 American Institute of Physics.

  15. Inducible antisense suppression of glycolate oxidase reveals its strong regulation over photosynthesis in rice.

    PubMed

    Xu, Huawei; Zhang, Jianjun; Zeng, Jiwu; Jiang, Linrong; Liu, Ee; Peng, Changlian; He, Zhenghui; Peng, Xinxiang

    2009-01-01

    Photorespiration is one of the most intensively studied topics in plant biology. While a number of mutants deficient in photorespiratory enzymes have been identified and characterized for their physiological functions, efforts on glycolate oxidase (GLO; EC 1.1.3.15) have not been so successful. This is a report on the generation of transgenic rice (Oryza sativa L.) plants carrying a GLO antisense gene driven by an estradiol-inducible promoter, which allowed controllable suppression of GLO and its detailed functional analysis. The GLO-suppressed plants showed typical photorespiration-deficient phenotypes. More intriguingly, a positive and linear correlation was found between GLO activities and the net photosynthetic rates (P(N)), and photoinhibition subsequently occurred once the P(N) reduction surpassed 60%, indicating that GLO can exert a strong regulation over photosynthesis. Various expression analyses identified that Rubisco activase was transcriptionally suppressed in the GLO-suppressed plants, consistent with the decreased Rubisco activation states. While the substrate glycolate accumulated substantially, few changes were observed for the product glyoxylate, or for some other downstream metabolites and genes, in the transgenic plants. Further analyses revealed that isocitrate lyase and malate synthase, two key enzymes in the glyoxylate cycle, were highly up-regulated under GLO deficiency. Taken together, the results suggest that GLO is a typical photorespiratory enzyme, that it can exert a strong regulation over photosynthesis, possibly through a feedback inhibition on Rubisco activase, and that the glyoxylate cycle may be partially activated to compensate for the photorespiratory glyoxylate when GLO is suppressed in rice.

  16. Specificity of the trypanothione-dependent Leishmania major glyoxalase I: structure and biochemical comparison with the human enzyme.

    PubMed

    Ariza, Antonio; Vickers, Tim J; Greig, Neil; Armour, Kirsten A; Dixon, Mark J; Eggleston, Ian M; Fairlamb, Alan H; Bond, Charles S

    2006-02-01

    Trypanothione replaces glutathione in defence against cellular damage caused by oxidants, xenobiotics and methylglyoxal in the trypanosomatid parasites, which cause trypanosomiasis and leishmaniasis. In Leishmania major, the first step in methylglyoxal detoxification is performed by a trypanothione-dependent glyoxalase I (GLO1) containing a nickel cofactor; all other characterized eukaryotic glyoxalases use zinc. In kinetic studies L. major and human enzymes were active with methylglyoxal derivatives of several thiols, but showed opposite substrate selectivities: N1-glutathionylspermidine hemithioacetal is 40-fold better with L. major GLO1, whereas glutathione hemithioacetal is 300-fold better with human GLO1. Similarly, S-4-bromobenzylglutathionylspermidine is a 24-fold more potent linear competitive inhibitor of L. major than human GLO1 (Kis of 0.54 microM and 12.6 microM, respectively), whereas S-4-bromobenzylglutathione is >4000-fold more active against human than L. major GLO1 (Kis of 0.13 microM and >500 microM respectively). The crystal structure of L. major GLO1 reveals differences in active site architecture to both human GLO1 and the nickel-dependent Escherichia coli GLO1, including increased negative charge and hydrophobic character and truncation of a loop that may regulate catalysis in the human enzyme. These differences correlate with the differential binding of glutathione and trypanothione-based substrates, and thus offer a route to the rational design of L. major-specific GLO1 inhibitors.
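The inhibition constants quoted above can be read through the standard competitive-inhibition relation Km,app = Km·(1 + [I]/Ki). The sketch below uses the reported Ki values for S-4-bromobenzylglutathionylspermidine (0.54 µM for L. major GLO1, 12.6 µM for human GLO1); the unit Km is only a placeholder for showing relative fold-changes, not a measured value.

```python
def apparent_km(km, inhibitor_um, ki_um):
    # Linear competitive inhibition: Km,app = Km * (1 + [I]/Ki),
    # with [I] and Ki in the same units (here micromolar).
    return km * (1.0 + inhibitor_um / ki_um)

# At 0.54 uM inhibitor, L. major GLO1 (Ki = 0.54 uM) has its apparent Km
# doubled, while human GLO1 (Ki = 12.6 uM) is barely affected.
fold_lmajor = apparent_km(1.0, 0.54, 0.54)
fold_human  = apparent_km(1.0, 0.54, 12.6)
```

The roughly 24-fold Ki difference quoted in the abstract translates directly into this selective slowing of the parasite enzyme at concentrations that leave the human enzyme nearly untouched.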

  17. The Role of Glyoxalase-I (Glo-I), Advanced Glycation Endproducts (AGEs), and Their Receptor (RAGE) in Chronic Liver Disease and Hepatocellular Carcinoma (HCC)

    PubMed Central

    2017-01-01

    Glyoxalase-I (Glo-I) and glyoxalase-II (Glo-II) comprise the glyoxalase system and are responsible for the detoxification of methylglyoxal (MGO). MGO is formed non-enzymatically as a by-product, mainly in glycolysis, and leads to the formation of advanced glycation endproducts (AGEs). AGEs bind to their receptor, RAGE, and activate intracellular transcription factors, resulting in the production of pro-inflammatory cytokines, oxidative stress, and inflammation. This review will focus on the implication of the Glo-I/AGE/RAGE system in liver injury and hepatocellular carcinoma (HCC). AGEs and RAGE are upregulated in liver fibrosis, and the silencing of RAGE reduced collagen deposition and the tumor growth of HCC. Nevertheless, data relating to Glo-I in fibrosis and cirrhosis are preliminary. Glo-I expression was found to be reduced in early and advanced cirrhosis with a subsequent increase of MGO-levels. On the other hand, pharmacological modulation of Glo-I resulted in the reduced activation of hepatic stellate cells and therefore reduced fibrosis in the CCl4-model of cirrhosis. Thus, current research highlighted the Glo-I/AGE/RAGE system as an interesting therapeutic target in chronic liver diseases. These findings need further elucidation in preclinical and clinical studies. PMID:29156655

  18. The Drosophila hnRNP F/H homolog glorund uses two distinct RNA-binding modes to diversify target recognition

    DOE PAGES

    Tamayo, Joel V.; Teramoto, Takamasa; Chatterjee, Seema; ...

    2017-04-04

    The Drosophila hnRNP F/H homolog, Glorund (Glo), regulates nanos mRNA translation by interacting with a structured UA-rich motif in the nanos 3' untranslated region. Glo regulates additional RNAs, however, and mammalian homologs bind G-tract sequences to regulate alternative splicing, suggesting that Glo also recognizes G-tract RNA. To gain insight into how Glo recognizes both structured UA-rich and G-tract RNAs, we used mutational analysis guided by crystal structures of Glo’s RNA-binding domains and identified two discrete RNA-binding surfaces that allow Glo to recognize both RNA motifs. By engineering Glo variants that favor a single RNA-binding mode, we show that a subset of Glo’s functions in vivo is mediated solely by the G-tract binding mode, whereas regulation of nanos requires both recognition modes. Lastly, our findings suggest a molecular mechanism for the evolution of dual RNA motif recognition in Glo that may be applied to understanding the functional diversity of other RNA-binding proteins.

  19. The Drosophila hnRNP F/H Homolog Glorund Uses Two Distinct RNA-Binding Modes to Diversify Target Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamayo, Joel V.; Teramoto, Takamasa; Chatterjee, Seema

    The Drosophila hnRNP F/H homolog, Glorund (Glo), regulates nanos mRNA translation by interacting with a structured UA-rich motif in the nanos 3' untranslated region. Glo regulates additional RNAs, however, and mammalian homologs bind G-tract sequences to regulate alternative splicing, suggesting that Glo also recognizes G-tract RNA. To gain insight into how Glo recognizes both structured UA-rich and G-tract RNAs, we used mutational analysis guided by crystal structures of Glo’s RNA-binding domains and identified two discrete RNA-binding surfaces that allow Glo to recognize both RNA motifs. By engineering Glo variants that favor a single RNA-binding mode, we show that a subset of Glo’s functions in vivo is mediated solely by the G-tract binding mode, whereas regulation of nanos requires both recognition modes. Our findings suggest a molecular mechanism for the evolution of dual RNA motif recognition in Glo that may be applied to understanding the functional diversity of other RNA-binding proteins.

  1. Suppression of glycolate oxidase causes glyoxylate accumulation that inhibits photosynthesis through deactivating Rubisco in rice.

    PubMed

    Lu, Yusheng; Li, Yong; Yang, Qiaosong; Zhang, Zhisheng; Chen, Yan; Zhang, Sheng; Peng, Xin-Xiang

    2014-03-01

    Glycolate oxidase (GLO) is a key enzyme for photorespiration in plants. Previous studies have demonstrated that suppression of GLO causes photosynthetic inhibition, and that accumulated glycolate, together with deactivated Rubisco, is likely involved in the regulation. Using isolated Rubisco and chloroplasts, it has been found that only glyoxylate can effectively inactivate Rubisco and thereby inhibit photosynthesis, but little in vivo evidence has been reported. In this study, we generated transgenic rice (Oryza sativa) plants in which GLO is constitutively silenced and conducted physiological and biochemical analyses on these plants to explore the regulatory mechanism. When GLO was downregulated, the net photosynthetic rate (Pn) was reduced and plant growth was correspondingly stunted. Surprisingly, glyoxylate, a product of GLO catalysis, accumulated in response to GLO suppression, as did its substrate glycolate. Furthermore, glyoxylate content was inversely proportional to Pn, while Pn was directly proportional to the Rubisco activation state in the GLO-suppressed plants. A least-squares fit also demonstrated that the Rubisco activation state was inversely proportional to glyoxylate content. Although further analyses failed to reveal how glyoxylate accumulates in response to GLO suppression, the current results strongly suggest that an unidentified, alternative pathway may produce glyoxylate, and that the accumulated glyoxylate inhibits photosynthesis by deactivating Rubisco and causes the photorespiratory phenotype in GLO-suppressed rice plants. © 2013 Scandinavian Plant Physiology Society.
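
    The inverse-proportional relationships reported above can be fit by ordinary least squares after linearizing the model as y ≈ a + b/x. A minimal sketch; the data points are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical (glyoxylate content, Rubisco activation state) pairs
# following an approximately inverse-proportional trend: y ≈ a + b/x.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # glyoxylate content (arbitrary units)
y = np.array([95.0, 60.0, 40.0, 30.0, 25.0])  # activation state (%)

# Linearize by regressing y on 1/x, then solve with least squares.
A = np.column_stack([np.ones_like(x), 1.0 / x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit.
y_hat = a + b / x
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

Linearizing on 1/x keeps the problem in closed form; a positive fitted b corresponds to the inverse relationship the abstract describes.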

  2. Transcriptional and posttranscriptional regulation of the glycolate oxidase gene in tobacco seedlings.

    PubMed

    Barak, S; Nejidat, A; Heimer, Y; Volokita, M

    2001-03-01

    The roles of light and of the putative plastid signal in glycolate oxidase (GLO) gene expression were investigated in tobacco (Nicotiana tabacum cv. Samsun NN) seedlings during their shift from skotomorphogenic to photomorphogenic development. GLO transcript and enzyme activities were detected in etiolated seedlings. Their respective levels increased three- and six-fold during 96 h of exposure to light. The GLO transcript was almost undetectable in seedlings in which chloroplast development was impaired by photooxidation with the herbicide norflurazon. In transgenic tobacco seedlings, photooxidation inhibited the light-dependent increase in GUS activity when GUS was placed under the regulation of the GLO promoter P(GLO). However, even under these photooxidative conditions, a continuous increase in GUS activity was observed as compared to etiolated seedlings. When GUS expression was driven by the CaMV 35S promoter (P35S), no apparent difference was observed between etiolated, deetiolated and photooxidized seedlings. These observations indicate that the effects of the putative plastid development signal and light on GUS expression can be separated. Translational yield analysis indicated that the translation of the GUS transcript in P(GLO)::GUS seedlings was enhanced 30-fold over that of the GUS transcript in P35S::GUS seedlings. The overall picture emerging from these results is that in etiolated seedlings the GLO transcript, though present at a substantial level, is translated at a low rate. GLO transcription is, however, enhanced in response to signals originating from the developing plastids. GLO gene expression is further enhanced at the translational level by a yet undefined light-dependent mechanism.

  3. Reduced ovarian glyoxalase-I activity by dietary glycotoxins and androgen excess: a causative link to polycystic ovarian syndrome.

    PubMed

    Kandaraki, Eleni; Chatzigeorgiou, Antonis; Piperi, Christina; Palioura, Eleni; Palimeri, Sotiria; Korkolopoulou, Penelope; Koutsilieris, Michael; Papavassiliou, Athanasios G

    2012-10-24

    The glyoxalase detoxification system, composed of glyoxalase (GLO)-I and GLO-II, is ubiquitously expressed and implicated in protection against cellular damage caused by cytotoxic metabolites such as advanced glycation end products (AGEs). Recently, ovarian tissue has emerged as a new target of excessive AGE deposition, associated with either a high-AGE diet in experimental animals or hyperandrogenic disorders such as polycystic ovarian syndrome (PCOS) in humans. This study was designed to investigate the impact of dietary AGEs and androgens on ovarian GLO-I activity in normal nonandrogenized (NAN, group A, n = 18) and androgenized prepubertal (AN) rats (group B, n = 29). Both groups were further randomly assigned to either a high-AGE (HA) or low-AGE (LA) diet for 3 months. The activity of ovarian GLO-I was significantly reduced in NAN animals fed an HA diet compared with an LA diet (p = 0.006). Furthermore, GLO-I activity was markedly reduced in AN animals compared with NAN animals (p ≤ 0.001) fed the corresponding diet type. In addition, ovarian GLO-I activity was positively correlated with body weight gain (r(s) = 0.533, p < 0.001), estradiol (r(s) = 0.326, p = 0.033) and progesterone levels (r(s) = 0.500, p < 0.001). A negative correlation of marginal statistical significance was observed between GLO-I activity and AGE expression in the ovarian granulosa cell layer of all groups (r(s) = -0.263, p = 0.07). The present data demonstrate that ovarian GLO-I activity may be regulated by dietary composition and androgen levels. Modification of ovarian GLO-I activity, observed for the first time in this androgenized prepubertal rat model, may be a contributing factor to the reproductive dysfunction characterizing PCOS.
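
    The associations above are reported as Spearman rank correlations (r(s)). For reference, the statistic is computed from rank differences; a minimal tie-free sketch (illustrative data, not the study's):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic rank-difference
    formula, 1 - 6*sum(d^2) / (n*(n^2 - 1)). No tie correction;
    for illustration only."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# A perfectly monotone pair gives rho = 1; reversing one variable gives -1.
rho_up = spearman_rho([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])
rho_down = spearman_rho([1, 2, 3, 4, 5], [50, 40, 30, 20, 10])
```

Because it operates on ranks, the statistic captures monotone (not just linear) association, which is why it suits dose-response data like the diet-activity relationships above.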

  4. Inhibition of GLO1 in Glioblastoma Multiforme Increases DNA-AGEs, Stimulates RAGE Expression, and Inhibits Brain Tumor Growth in Orthotopic Mouse Models.

    PubMed

    Jandial, Rahul; Neman, Josh; Lim, Punnajit P; Tamae, Daniel; Kowolik, Claudia M; Wuenschell, Gerald E; Shuck, Sarah C; Ciminera, Alexandra K; De Jesus, Luis R; Ouyang, Ching; Chen, Mike Y; Termini, John

    2018-01-30

    Cancers that exhibit the Warburg effect may elevate expression of glyoxalase 1 (GLO1) to detoxify the toxic glycolytic byproduct methylglyoxal (MG) and inhibit the formation of pro-apoptotic advanced glycation endproducts (AGEs). Inhibition of GLO1 in cancers that up-regulate glycolysis has been proposed as a therapeutic targeting strategy, but this approach has not been evaluated for glioblastoma multiforme (GBM), the most aggressive and difficult-to-treat malignancy of the brain. Elevated GLO1 expression in GBM was established in patient tumors and cell lines using bioinformatics tools and biochemical approaches. GLO1 inhibition in GBM cell lines and in an orthotopic xenograft GBM mouse model was examined using both small-molecule and short hairpin RNA (shRNA) approaches. Inhibition of GLO1 with S-(p-bromobenzyl)glutathione dicyclopentyl ester (p-BrBzGSH(Cp)₂) increased levels of the DNA-AGE N²-1-(carboxyethyl)-2'-deoxyguanosine (CEdG), a surrogate biomarker for nuclear MG exposure; substantially elevated expression of the immunoglobulin-like receptor for AGEs (RAGE); and induced apoptosis in GBM cell lines. Targeting GLO1 with shRNA similarly increased CEdG levels and RAGE expression, and was cytotoxic to glioma cells. Mice bearing orthotopic GBM xenografts treated systemically with p-BrBzGSH(Cp)₂ exhibited tumor regression without significant off-target effects, suggesting that GLO1 inhibition may have value in the therapeutic management of these drug-resistant tumors.

  5. Inhibition of GLO1 in Glioblastoma Multiforme Increases DNA-AGEs, Stimulates RAGE Expression, and Inhibits Brain Tumor Growth in Orthotopic Mouse Models

    PubMed Central

    Jandial, Rahul; Neman, Josh; Tamae, Daniel; Kowolik, Claudia M.; Wuenschell, Gerald E.; Ciminera, Alexandra K.; De Jesus, Luis R.; Ouyang, Ching; Chen, Mike Y.

    2018-01-01

    Cancers that exhibit the Warburg effect may elevate expression of glyoxalase 1 (GLO1) to detoxify the toxic glycolytic byproduct methylglyoxal (MG) and inhibit the formation of pro-apoptotic advanced glycation endproducts (AGEs). Inhibition of GLO1 in cancers that up-regulate glycolysis has been proposed as a therapeutic targeting strategy, but this approach has not been evaluated for glioblastoma multiforme (GBM), the most aggressive and difficult-to-treat malignancy of the brain. Elevated GLO1 expression in GBM was established in patient tumors and cell lines using bioinformatics tools and biochemical approaches. GLO1 inhibition in GBM cell lines and in an orthotopic xenograft GBM mouse model was examined using both small-molecule and short hairpin RNA (shRNA) approaches. Inhibition of GLO1 with S-(p-bromobenzyl)glutathione dicyclopentyl ester (p-BrBzGSH(Cp)2) increased levels of the DNA-AGE N2-1-(carboxyethyl)-2′-deoxyguanosine (CEdG), a surrogate biomarker for nuclear MG exposure; substantially elevated expression of the immunoglobulin-like receptor for AGEs (RAGE); and induced apoptosis in GBM cell lines. Targeting GLO1 with shRNA similarly increased CEdG levels and RAGE expression, and was cytotoxic to glioma cells. Mice bearing orthotopic GBM xenografts treated systemically with p-BrBzGSH(Cp)2 exhibited tumor regression without significant off-target effects, suggesting that GLO1 inhibition may have value in the therapeutic management of these drug-resistant tumors. PMID:29385725

  6. Evidence for the involvement of Globosa-like gene duplications and expression divergence in the evolution of floral morphology in the Zingiberales.

    PubMed

    Bartlett, Madelaine E; Specht, Chelsea D

    2010-07-01

    The MADS box transcription factor family has long been identified as an important contributor to the control of floral development. It is often hypothesized that the evolution of floral development across angiosperms and within specific lineages may occur as a result of duplication, functional diversification, and changes in regulation of MADS box genes. Here we examine the role of Globosa (GLO)-like genes, members of the B-class MADS box gene lineage, in the evolution of floral development within the monocot order Zingiberales. We assessed changes in perianth and stamen whorl morphology in a phylogenetic framework. We identified GLO homologs (ZinGLO1-4) from 50 Zingiberales species and investigated the evolution of this gene lineage. Expression of two GLO homologs was assessed in Costus spicatus and Musa basjoo. Based on the phylogenetic data and expression results, we propose several family-specific losses and gains of GLO homologs that appear to be associated with key morphological changes. The GLO-like gene lineage has diversified concomitant with the evolution of the dimorphic perianth and the staminodial labellum. Duplications and expression divergence within the GLO-like gene lineage may have played a role in floral diversification in the Zingiberales.

  7. Expression of antimicrobial peptide genes in Bombyx mori gut modulated by oral bacterial infection and development.

    PubMed

    Wu, Shan; Zhang, Xiaofeng; He, Yongqiang; Shuai, Jiangbing; Chen, Xiaomei; Ling, Erjun

    2010-11-01

    Although Bombyx mori systemic immunity is extensively studied, little is known about the silkworm's intestine-specific responses to bacterial infection. Antimicrobial peptide (AMP) gene expression analysis of B. mori intestinal tissue after oral infection with Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli) bacteria revealed specificity in the interaction between host immune responses and parasite types. Neither Att1 nor Leb could be stimulated by S. aureus or E. coli, whereas CecA1, Glo1, Glo2, Glo3, Glo4 and Lys could only be triggered by S. aureus. In contrast, E. coli stimulation decreased the expression of CecA1, Glo3 and Glo4 at some time points. Interestingly, there is regional specificity in the silkworm's local gut immunity: during the immune response, increases in Def, Hem and LLP3 were detected only in the foregut and midgut, while for CecB1, CecD, LLP2 and Mor, up-regulation after oral administration of E. coli was limited to the midgut and hindgut. CecE was the only AMP that responded positively to both bacteria under all test conditions. Expression levels of the AMPs also changed dramatically with development: at the spinning and prepupal stages, a large increase in the expression of CecA1, CecB1, CecD, CecE, Glo1, Glo2, Glo3, Glo4, Leb, Def, Hem, Mor and Lys was detected in the gut. Unexpectedly, in addition to the IMD pathway genes, the Toll and JAK/STAT pathway genes in the silkworm gut can also be activated by oral microbial infection; during development, however, only the Toll pathway genes in the gut show an increasing trend matching the rise in AMP expression at the spinning and prepupal stages. Our results imply that immune responses in the silkworm gut are synergistically regulated by the Toll, JAK/STAT and IMD pathways, and that as pupation approaches the Toll pathway may play a role in AMP expression. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Glyoxalase 1 Modulation in Obesity and Diabetes.

    PubMed

    Rabbani, Naila; Thornalley, Paul J

    2018-01-02

    Obesity and type 2 diabetes mellitus are increasing globally, as are associated complications such as non-alcoholic fatty liver disease (NAFLD) and the vascular complications of diabetes. There is currently no licensed treatment for NAFLD and there have been no recent new treatments for diabetic complications. New approaches are required, particularly those addressing mechanism-based risk factors for health decline and disease progression. Recent Advances: Dicarbonyl stress is the abnormal accumulation of reactive dicarbonyl metabolites such as methylglyoxal (MG), leading to cell and tissue dysfunction. It is a potential driver of obesity, diabetes, and related complications that is not addressed by current treatments. Increased formation of MG is linked to increased glyceroneogenesis and hyperglycemia in obesity and diabetes, and also to down-regulation of glyoxalase 1 (Glo1), which provides the main enzymatic detoxification of MG. Glo1 functional genomics studies suggest that increasing Glo1 expression and activity alleviates dicarbonyl stress; slows development of obesity and related insulin resistance; and prevents development of diabetic nephropathy and other microvascular complications of diabetes. A new therapeutic approach comprises small-molecule inducers of Glo1 expression, or Glo1 inducers, exploiting a regulatory antioxidant response element in the GLO1 gene. A prototype Glo1 inducer, a trans-resveratrol (tRES)-hesperetin (HESP) combination, corrected insulin resistance and improved glycemic control and vascular inflammation in healthy overweight and obese subjects in a clinical trial. tRES and HESP synergize pharmacologically, and HESP likely overcomes the low bioavailability of tRES by inhibiting intestinal glucuronosyltransferases. Glo1 inducers may now be evaluated in Phase 2 clinical trials for treatment of NAFLD and vascular complications of diabetes.

  9. Inhibition of Th17 Cell Differentiation as a Treatment for Multiple Sclerosis

    DTIC Science & Technology

    2012-10-01

    sequence) using Lipofectamine. After 48 hours Dual Glo substrate was added to the cells and luciferase activity and Renilla luciferase activity were ... pmirGLO326 and pMR04 (encoding mir-326) using Lipofectamine. After 48 hours Dual Glo substrate was added to the cells and Firefly and Renilla

  10. Automated cell counts on CSF samples: A multicenter performance evaluation of the GloCyte system.

    PubMed

    Hod, E A; Brugnara, C; Pilichowska, M; Sandhaus, L M; Luu, H S; Forest, S K; Netterwald, J C; Reynafarje, G M; Kratz, A

    2018-02-01

    Automated cell counters have replaced manual enumeration of cells in blood and most body fluids. However, due to the unreliability of automated methods at very low cell counts, most laboratories continue to perform labor-intensive manual counts on many or all cerebrospinal fluid (CSF) samples. This multicenter clinical trial investigated whether the GloCyte System (Advanced Instruments, Norwood, MA), a recently FDA-approved automated cell counter, which concentrates and enumerates red blood cells (RBCs) and total nucleated cells (TNCs), is sufficiently accurate and precise at very low cell counts to replace all manual CSF counts. The GloCyte System concentrates CSF and stains RBCs with fluorochrome-labeled antibodies and TNCs with nucleic acid dyes. RBCs and TNCs are then counted by digital image analysis. Residual adult and pediatric CSF samples obtained for clinical analysis at five different medical centers were used for the study. The limits of blank, detection, and quantitation, as well as the precision and accuracy of the GloCyte, were determined. The GloCyte detected as few as 1 TNC/μL and 1 RBC/μL, and reliably counted as low as 3 TNCs/μL and 2 RBCs/μL. The total coefficient of variation was less than 20%. Comparison with cell counts obtained with a hemocytometer showed good correlation (>97%) between the GloCyte and the hemocytometer, including at very low cell counts. The GloCyte instrument is a precise, accurate, and stable system to obtain red cell and nucleated cell counts in CSF samples. It allows for the automated enumeration of even very low cell numbers, which is crucial for CSF analysis. These results suggest that GloCyte is an acceptable alternative to the manual method for all CSF samples, including those with normal cell counts. © 2017 John Wiley & Sons Ltd.
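
    The verification statistics named above (limits of blank and detection, coefficient of variation) have standard parametric definitions; a sketch with invented replicate counts, assuming the common CLSI EP17-style formulas LoB = mean_blank + 1.645·SD_blank and LoD = LoB + 1.645·SD_low (not the study's actual protocol or data):

```python
import statistics

# Hypothetical replicate counts (cells/uL): blank samples and a
# low-level sample near the expected detection limit.
blank_counts = [0.0, 0.2, 0.1, 0.0, 0.3, 0.1, 0.2, 0.0]
low_counts = [2.1, 1.8, 2.4, 2.0, 2.3, 1.9, 2.2, 2.1]

# Limit of blank: highest count plausibly seen when no cells are present.
lob = statistics.mean(blank_counts) + 1.645 * statistics.stdev(blank_counts)

# Limit of detection: lowest count reliably distinguishable from blank.
lod = lob + 1.645 * statistics.stdev(low_counts)

# Coefficient of variation for the low-level sample, in percent.
cv_percent = 100.0 * statistics.stdev(low_counts) / statistics.mean(low_counts)
```

The 1.645 multiplier corresponds to a one-sided 95% bound under a normality assumption; real verification studies use many more replicates across sites.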

  11. The yeast Arf-GAP Glo3p is required for the endocytic recycling of cell surface proteins.

    PubMed

    Kawada, Daiki; Kobayashi, Hiromu; Tomita, Tsuyoshi; Nakata, Eisuke; Nagano, Makoto; Siekhaus, Daria Elisabeth; Toshima, Junko Y; Toshima, Jiro

    2015-01-01

    Small GTP-binding proteins of the Ras superfamily play diverse roles in intracellular trafficking. Among them, the Rab, Arf, and Rho families function in successive steps of vesicle transport, in forming vesicles from donor membranes, directing vesicle trafficking toward target membranes and docking vesicles onto target membranes. These proteins act as molecular switches that are controlled by a cycle of GTP binding and hydrolysis regulated by guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (GAPs). In this study we explored the role of GAPs in the regulation of the endocytic pathway using fluorescently labeled yeast mating pheromone α-factor. Among 25 non-essential GAP mutants, we found that deletion of the GLO3 gene, encoding Arf-GAP protein, caused defective internalization of fluorescently labeled α-factor. Quantitative analysis revealed that glo3Δ cells show defective α-factor binding to the cell surface. Interestingly, Ste2p, the α-factor receptor, was mis-localized from the plasma membrane to the vacuole in glo3Δ cells. Domain deletion mutants of Glo3p revealed that a GAP-independent function, as well as the GAP activity, of Glo3p is important for both α-factor binding and Ste2p localization at the cell surface. Additionally, we found that deletion of the GLO3 gene affects the size and number of Arf1p-residing Golgi compartments and causes a defect in transport from the TGN to the plasma membrane. Furthermore, we demonstrated that glo3Δ cells were defective in the late endosome-to-TGN transport pathway, but not in the early endosome-to-TGN transport pathway. These findings suggest novel roles for Arf-GAP Glo3p in endocytic recycling of cell surface proteins. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Candesartan Attenuates Diabetic Retinal Vascular Pathology by Restoring Glyoxalase-I Function

    PubMed Central

    Miller, Antonia G.; Tan, Genevieve; Binger, Katrina J.; Pickering, Raelene J.; Thomas, Merlin C.; Nagaraj, Ram H.; Cooper, Mark E.; Wilkinson-Berka, Jennifer L.

    2010-01-01

    OBJECTIVE Advanced glycation end products (AGEs) and the renin-angiotensin system (RAS) are both implicated in the development of diabetic retinopathy. How these pathways interact to promote retinal vasculopathy is not fully understood. Glyoxalase-I (GLO-I) is an enzyme critical for the detoxification of AGEs and retinal vascular cell survival. We hypothesized that, in retina, angiotensin II (Ang II) downregulates GLO-I, which leads to an increase in methylglyoxal-AGE formation. The angiotensin type 1 receptor blocker, candesartan, rectifies this imbalance and protects against retinal vasculopathy. RESEARCH DESIGN AND METHODS Cultured bovine retinal endothelial cells (BREC) and bovine retinal pericytes (BRP) were incubated with Ang II (100 nmol/l) or Ang II+candesartan (1 μmol/l). Transgenic Ren-2 rats that overexpress the RAS were randomized to be nondiabetic, diabetic, or diabetic+candesartan (5 mg/kg/day) and studied over 20 weeks. Comparisons were made with diabetic Sprague-Dawley rats. RESULTS In BREC and BRP, Ang II induced apoptosis and reduced GLO-I activity and mRNA, with a concomitant increase in nitric oxide (NO•), the latter being a known negative regulator of GLO-I in BRP. In BREC and BRP, candesartan restored GLO-I and reduced NO•. Similar events occurred in vivo, with the elevated RAS of the diabetic Ren-2 rat, but not the diabetic Sprague-Dawley rat, reducing retinal GLO-I. In diabetic Ren-2 rats, candesartan reduced retinal acellular capillaries, inflammation, and inducible nitric oxide synthase and NO•, and restored GLO-I. CONCLUSIONS We have identified a novel mechanism by which candesartan improves diabetic retinopathy through the restoration of GLO-I. PMID:20852029

  13. Learning the scientific method using GloFish.

    PubMed

    Vick, Brianna M; Pollak, Adrianna; Welsh, Cynthia; Liang, Jennifer O

    2012-12-01

    Here we describe projects that used GloFish, brightly colored, fluorescent, transgenic zebrafish, in experiments that enabled students to carry out all steps in the scientific method. In the first project, students in an undergraduate genetics laboratory course successfully tested hypotheses about the relationships between GloFish phenotypes and genotypes using PCR, fluorescence microscopy, and test crosses. In the second and third projects, students doing independent research carried out hypothesis-driven experiments that also developed new GloFish projects for future genetics laboratory students. Brianna Vick, an undergraduate student, identified causes of the different shades of color found in orange GloFish. Adrianna Pollak, as part of a high school science fair project, characterized the fluorescence emission patterns of all of the commercially available colors of GloFish (red, orange, yellow, green, blue, and purple). The genetics laboratory students carrying out the first project found that learning new techniques and applying their knowledge of genetics were valuable. However, assessments of their learning suggest that this project was not challenging to many of the students. Thus, the independent projects will be valuable as bases to widen the scope and range of difficulty of experiments available to future genetics laboratory students.
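
    The test crosses used in these projects follow single-locus Mendelian expectations; a small sketch, assuming one hypothetical dominant fluorescence transgene at a single locus (real GloFish lines may carry multiple insertions):

```python
from fractions import Fraction
from itertools import product

def cross(parent1, parent2):
    """Expected offspring genotype fractions for a single-locus cross.
    Each parent is a 2-tuple of alleles, e.g. ('T', '+') for a fish
    hemizygous for a fluorescence transgene T."""
    outcomes = {}
    for a, b in product(parent1, parent2):
        genotype = tuple(sorted((a, b)))
        outcomes[genotype] = outcomes.get(genotype, Fraction(0)) + Fraction(1, 4)
    return outcomes

# Test cross: hemizygous transgenic carrier x non-transgenic fish.
offspring = cross(('T', '+'), ('+', '+'))

# Fraction of offspring expected to fluoresce (carry at least one T).
fluorescent = sum(frac for g, frac in offspring.items() if 'T' in g)
```

Counting how observed fluorescent/non-fluorescent ratios match these expected fractions is the kind of genotype-phenotype hypothesis the students could test.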

  14. Ir gene controlled carrier effects in the induction and elicitation of hapten-specific delayed-type hypersensitivity responses.

    PubMed

    Weinberger, J Z; Benacerraf, B; Dorf, M E

    1979-11-01

    The genetic requirements of carrier recognition were examined in the priming and elicitation of hapten-specific, T-cell-mediated, delayed-type hypersensitivity (DTH) responses. It was shown that nitrophenyl acetyl-poly-(L-glu56-L-lys35-L-phe9) (NP-GLO) could prime for NP responses only in strains of mice which are Ir gene responders to GLO. In contrast to this requirement, NP-GLO could elicit an NP-specific response in NP-bovine gamma globulin primed mice, even in GLO nonresponder strains. Furthermore, the nonimmunogenic molecule, NP-GL, could elicit an NP-specific DTH response in animals primed with NP on an immunogenic carrier.

  15. To B or Not to B a flower: the role of DEFICIENS and GLOBOSA orthologs in the evolution of the angiosperms.

    PubMed

    Zahn, L M; Leebens-Mack, J; DePamphilis, C W; Ma, H; Theissen, G

    2005-01-01

    DEFICIENS (DEF) and GLOBOSA (GLO) function in petal and stamen organ identity in Antirrhinum and are orthologs of APETALA3 and PISTILLATA in Arabidopsis. These genes are known as B-function genes for their role in the ABC genetic model of floral organ identity. Phylogenetic analyses show that DEF and GLO are closely related paralogs, having originated from a gene duplication event after the separation of the lineages leading to the extant gymnosperms and the extant angiosperms. Several additional gene duplications followed, providing multiple potential opportunities for functional divergence. In most angiosperms studied to date, genes in the DEF/GLO MADS-box subfamily are expressed in the petals and stamens during flower development. However, in some angiosperms, expression of DEF and GLO orthologs is occasionally observed in the first and fourth whorls of flowers or in nonfloral organs, where their function is unknown. In this article we review what is known about function, phylogeny, and expression in the DEF/GLO subfamily to examine their evolution in the angiosperms. Our analyses demonstrate that although the primary role of the DEF/GLO subfamily appears to be in specifying the stamens and inner perianth, several examples of potential sub- and neofunctionalization are observed.

  16. GLYOXALASE I A111E, PARAOXONASE 1 Q192R AND L55M POLYMORPHISMS IN ITALIAN PATIENTS WITH SPORADIC CEREBRAL CAVERNOUS MALFORMATIONS: A PILOT STUDY.

    PubMed

    Rinaldi, C; Bramanti, P; Famà, A; Scimone, C; Donato, L; Antognelli, C; Alafaci, C; Tomasello, F; D'Angelo, R; Sidoti, A

    2015-01-01

    It is already known that conditions of increased oxidative stress are associated with greater susceptibility to vascular malformations, including cerebral cavernous malformations (CCMs). These are vascular lesions of the CNS characterized by abnormally enlarged capillary cavities that can occur sporadically or as a familial autosomal dominant condition with incomplete penetrance and variable clinical expression, attributable to mutations in three different genes: CCM1 (Krit1), CCM2 (MGC4607) and CCM3 (PDCD10). Polymorphisms in the genes encoding enzymes involved in antioxidant systems, such as glyoxalase I (GLO1) and paraoxonase I (PON1), could influence individual susceptibility to vascular malformations. A single nucleotide polymorphism identified in exon 4 of the GLO1 gene causes an Ala→Glu amino acid substitution (Ala111Glu). Two common polymorphisms have been described in the coding region of PON1, which lead to a glutamine → arginine substitution at position 192 (Q192R) and a leucine → methionine substitution at position 55 (L55M). The polymorphisms were characterized in 59 patients without mutations in the CCM genes versus 213 healthy controls by PCR/RFLP methods using DNA from lymphocytes. We found that the frequency of patients carrying the GLO1 A/E genotype in the case group (56%) was four-fold higher than among the controls (14.1%). In the cohort of CCM patients, an increase in the frequency of the PON192 Q/R genotype was observed (39% in the CCM group versus 3.7% in the healthy controls). Similarly, an increase was observed in the proportion of individuals with the R/R genotype in the disease group (5%) with respect to the normal healthy cohort (0.5%). Finally, the frequency of the PON55 heterozygous L/M genotype was 29% in patients with CCMs and 4% in the healthy controls. The same trend was observed for the PON55 homozygous M/M genotype frequency (CCMs 20% vs controls 10%). The present study aimed to investigate the possible association of GLO1 A111E, PON1 Q192R and L55M polymorphisms with the risk of CCMs. We found that individuals with the GLO1 A/E genotype, PON192 Q/R-R/R genotypes and PON55 L/M-M/M genotypes had a significantly higher risk of CCMs compared with the other genotypes. However, because CCM is a heterogeneous disease, other additional factors might be involved in the initiation and progression of CCM disease.

  17. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document describes the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two calibration methods are covered: EZM calibration, which matches kinetics parameters to three data points, and GLO calibration, which slightly adjusts kinetic parameters to match multiple points. Information is provided on the theoretical approach and application procedure for both calibration algorithms. It is recommended that the user begin with EZM calibration to obtain a good initial estimate and then fine-tune the parameters using GLO. Two examples are provided to guide the reader through a general calibration process.
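
    The coarse-then-fine calibration idea (a quick estimate from a few points, then adjustment against many points) can be sketched for the classic isothermal Prout-Tompkins rate law dα/dt = kα(1-α), whose solution is a logistic curve. This is a toy illustration of the workflow, not the EZM/GLO code itself; the synthetic data, the fixed t_half, and the golden-section refinement are all assumptions:

```python
import math

def pt_alpha(k, t, t_half):
    """Isothermal classic Prout-Tompkins conversion: the solution of
    d(alpha)/dt = k*alpha*(1-alpha) is a logistic curve centered at t_half."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_half)))

# Synthetic "experimental" conversion data generated with k = 0.8.
t_half = 10.0
times = [4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
data = [pt_alpha(0.8, t, t_half) for t in times]

def sse(k):
    """Sum of squared errors against all data points (the GLO-style objective)."""
    return sum((pt_alpha(k, t, t_half) - d) ** 2 for t, d in zip(times, data))

# Coarse bracket for k (standing in for an EZM-style initial estimate),
# then fine-tune against all points with a golden-section search.
lo, hi = 0.1, 2.0
phi = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(60):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if sse(a) < sse(b):
        hi = b
    else:
        lo = a
k_fit = 0.5 * (lo + hi)
```

The search recovers the generating rate constant because the objective is built from every data point, mirroring the recommendation to fine-tune a coarse estimate against multiple points.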

  18. An assessment of Indian monsoon seasonal forecasts and mechanisms underlying monsoon interannual variability in the Met Office GloSea5-GC2 system

    NASA Astrophysics Data System (ADS)

    Johnson, Stephanie J.; Turner, Andrew; Woolnough, Steven; Martin, Gill; MacLachlan, Craig

    2017-03-01

    We assess Indian summer monsoon seasonal forecasts in GloSea5-GC2, the Met Office fully coupled subseasonal to seasonal ensemble forecasting system. Using several metrics, GloSea5-GC2 shows similar skill to other state-of-the-art seasonal forecast systems. The prediction skill of the large-scale South Asian monsoon circulation is higher than that of Indian monsoon rainfall. Using multiple linear regression analysis we evaluate relationships between Indian monsoon rainfall and five possible drivers of monsoon interannual variability. Over the time period studied (1992-2011), the El Niño-Southern Oscillation (ENSO) and the Indian Ocean dipole (IOD) are the most important of these drivers in both observations and GloSea5-GC2. Our analysis indicates that ENSO and its teleconnection with Indian rainfall are well represented in GloSea5-GC2. However, the relationship between the IOD and Indian rainfall anomalies is too weak in GloSea5-GC2, which may be limiting the prediction skill of the local monsoon circulation and Indian rainfall. We show that this weak relationship likely results from a coupled mean state bias that limits the impact of anomalous wind forcing on SST variability, resulting in erroneous IOD SST anomalies. Known difficulties in representing convective precipitation over India may also play a role. Since Indian rainfall responds weakly to the IOD, it responds more consistently to ENSO than in observations. Our assessment identifies specific coupled biases that are likely limiting GloSea5-GC2 Indian summer monsoon seasonal prediction skill, providing targets for model improvement.
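
    The multiple linear regression of rainfall on candidate drivers described above can be sketched with ordinary least squares; the indices and coefficients below are synthetic placeholders, not GloSea5 output or observed data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized driver indices over 20 seasons (the names
# echo the drivers discussed; the values are synthetic).
n = 20
enso = rng.standard_normal(n)
iod = rng.standard_normal(n)

# Synthetic rainfall anomaly: ENSO dominates, IOD contributes weakly.
rain = -0.8 * enso + 0.3 * iod + 0.1 * rng.standard_normal(n)

# Multiple linear regression via least squares: rain ~ b0 + b1*enso + b2*iod.
X = np.column_stack([np.ones(n), enso, iod])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
b0, b_enso, b_iod = coef
```

Comparing such regression coefficients between model hindcasts and observations is one way to quantify whether a forecast system captures each driver-rainfall relationship.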

  19. Mangiferin Upregulates Glyoxalase 1 Through Activation of Nrf2/ARE Signaling in Central Neurons Cultured with High Glucose.

    PubMed

    Liu, Yao-Wu; Cheng, Ya-Qin; Liu, Xiao-Li; Hao, Yun-Chao; Li, Yu; Zhu, Xia; Zhang, Fan; Yin, Xiao-Xing

    2017-08-01

    Mangiferin, a natural C-glucoside xanthone, has anti-inflammatory, anti-oxidative, neuroprotective actions. Our previous study showed that mangiferin could attenuate diabetes-associated cognitive impairment of rats by enhancing the function of glyoxalase 1 (Glo-1) in brain. The aim of this study was to investigate whether Glo-1 upregulation by mangiferin in central neurons exposed to chronic high glucose may be related to activation of Nrf2/ARE pathway. Compared with normal glucose (25 mmol/L) culture, Glo-1 protein, mRNA, and activity levels were markedly decreased in primary hippocampal and cerebral cortical neurons cultured with high glucose (50 mmol/L) for 72 h, accompanied by the declined Nrf2 nuclear translocation and protein expression of Nrf2 in cell nucleus, as well as protein expression and mRNA level of γ-glutamylcysteine synthetase (γ-GCS) and superoxide dismutase activity, target genes of Nrf2/ARE signaling. Nonetheless, high glucose cotreating with mangiferin or sulforaphane, a typical inducer of Nrf2 activation, attenuated the above changes in both central neurons. In addition, mangiferin and sulforaphane significantly prevented the formation of advanced glycation end-products (AGEs) reflecting Glo-1 activity, while elevated the level of glutathione, a cofactor of Glo-1 activity and production of γ-GCS, in high glucose cultured central neurons. These findings demonstrated that Glo-1 was greatly downregulated in central neurons exposed to chronic high glucose, which is expected to lead the formation of AGEs and oxidative stress damages. We also proved that mangiferin enhanced the function of Glo-1 under high glucose condition by inducing activation of Nrf2/ARE signaling pathway.

  20. Assessing the diversity of AM fungi in arid gypsophilous plant communities.

    PubMed

    Alguacil, M M; Roldán, A; Torres, M P

    2009-10-01

    In the present study, we used PCR-Single-Stranded Conformation Polymorphism (SSCP) techniques to analyse arbuscular mycorrhizal fungi (AMF) communities in four sites within a 10 km(2) gypsum area in Southern Spain. Four common plant species from these ecosystems were selected. The AM fungal small-subunit (SSU) rRNA genes were subjected to PCR, cloning, SSCP analysis, sequencing and phylogenetic analyses. A total of 1443 SSU rRNA sequences were analysed, yielding 21 AM fungal types: 19 belonged to the genus Glomus, 1 to the genus Diversispora and 1 to the genus Scutellospora. Four sequence groups were identified that showed high similarity to sequences of known glomalean species or isolates: Glo G18 to Glomus constrictum, Glo G1 to Glomus intraradices, Glo G16 to Glomus clarum, Scut to Scutellospora dipurpurescens and Div to one new genus in the family Diversisporaceae recently identified as Otospora bareai. Three sequence groups received strong support in the phylogenetic analysis but did not appear to be related to any sequences of AM fungi in culture or previously found in the database; thus, they could be novel taxa within the genus Glomus: Glo G4, Glo G2 and Glo G14. We detected the presence of both generalist and potential specialist AMF in gypsum ecosystems. The AMF communities differed among the plant species studied, suggesting some degree of preference in the interactions between these symbionts.

  1. A novel dual-functional biosensor for fluorometric detection of inorganic pyrophosphate and pyrophosphatase activity based on globulin stabilized gold nanoclusters.

    PubMed

    Xu, Shenghao; Feng, Xiuying; Gao, Teng; Wang, Ruizhi; Mao, Yaning; Lin, Jiehua; Yu, Xijuan; Luo, Xiliang

    2017-03-15

    A novel dual-functional biosensor for highly sensitive detection of inorganic pyrophosphate (PPi) and pyrophosphatase (PPase) activity was developed based on the fluorescence variation of globulin-protected gold nanoclusters (Glo@Au NCs) with the assistance of Cu(2+). Glo@Au NCs and PPi were used as the fluorescent indicator and the substrate for PPase activity evaluation, respectively. In the presence of Cu(2+), the fluorescence of the Glo@Au NCs is quenched owing to the formation of a Cu(2+)-Glo@Au NCs complex, while PPi can restore the fluorescence of the Cu(2+)-Glo@Au NCs complex because of its higher binding affinity for Cu(2+). As PPase catalyzes the hydrolysis of PPi, it leads to the release of Cu(2+) and re-quenches the fluorescence of the Glo@Au NCs. Based on this mechanism, quantitative evaluation of PPi and PPase activity can be achieved over ranges of 0.05-218.125 μM for PPi and 0.1-8 mU for PPase, with detection limits of 0.02 μM and 0.04 mU, respectively, which are much lower than those of other PPi and PPase assays. More importantly, this ultrasensitive dual-functional biosensor can also be successfully applied to evaluate PPase activity in human serum, showing great promise for practical diagnostic applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Neuroergonomics Deep Dive Literature Review, Volume 2: Neuroergonomics and Performance: Prediction, Assessment, and Facilitation

    DTIC Science & Technology

    2010-11-01

    Glo1 expression and anxiety-like behavior. PLoS One, 4 (3), e4649. Glyoxalase 1 (Glo1) has been implicated in anxiety-like behavior in animal models...Glo1 expression and anxiety-like behavior in both inbred strain panels and outbred CD-1 mice. 12. Cirulli, E.T., Kasperavičiūtė, D., Attix, D.K...a new method for heart-rate variability (HRV) called CS-index. This index is the ratio of average cardio-intervals and standard cardio-intervals

  3. Photo-chemical transport modelling of tropospheric ozone: A review

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Sharma, Prateek; Khare, Mukesh

    2017-06-01

    Ground-level ozone (GLO), a secondary pollutant with adverse impacts on human health, ecology, and agricultural productivity, as well as a major contributor to global warming, has been the subject of several studies. Identifying appropriate strategies to control GLO levels requires accurate assessment and prediction, which in turn requires elaborate simulation and modelling. Several studies have been undertaken in the past to simulate GLO levels at different scales and for various applications, and it is important to evaluate these studies, which are widely dispersed across the literature. This paper critically reviews studies undertaken, especially in the past 15 years (2000-15), to model GLO. The studies reviewed span spatial scales from urban to regional and continental to global. The review also covers performance evaluation and sensitivity analysis of photo-chemical transport models, in order to assess the extent of application of these models and their predictive capability.
The review indicates the following major findings: (a) models tend to over-estimate night-time GLO concentrations due to limited titration of GLO with NO within the model; (b) contributions from far-off regional sources dominate average ozone concentrations in urban regions, while local sources contribute more during high-ozone episodes, requiring strategies for controlling precursor emissions at both regional and local scales; (c) NOx has a greater influence than VOC on the export of ozone from urban regions, because urban plumes shift from a VOC-sensitive to a NOx-sensitive regime as they move from city centres to neighbouring rural regions; (d) models with finer-resolution inputs perform better to a certain extent, although refining resolution further (beyond 10 km) did not always improve performance; (e) future projections show an increase in GLO concentrations, mainly due to rising temperatures and biogenic VOC emissions.
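A rough box-model sketch of finding (a), night-time titration of ozone by NO: when a model under-supplies NO, too little ozone is removed overnight and the simulated night-time GLO stays high. The rate constant, time units, and mixing ratios below are invented for illustration and do not come from any reviewed model.

```python
# Minimal box model of the bimolecular loss O3 + NO -> NO2,
# integrated with a simple forward-Euler scheme.

def titrate(o3, no, k=0.02, dt=1.0, steps=600):
    """Euler-integrate d[O3]/dt = -k*[O3]*[NO] (NO consumed one-for-one)."""
    for _ in range(steps):
        loss = k * o3 * no * dt
        loss = min(loss, o3, no)   # cannot remove more than is present
        o3 -= loss
        no -= loss
    return o3, no

# With ample NO, ozone is largely titrated away overnight...
o3_titrated, _ = titrate(o3=40.0, no=30.0)
# ...but with under-supplied NO, night-time ozone stays high.
o3_no_limited, _ = titrate(o3=40.0, no=5.0)
print(round(o3_titrated, 2), round(o3_no_limited, 2))
```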

  4. Crystallization and preliminary X-ray analysis of Leishmania major glyoxalase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ariza, Antonio; Vickers, Tim J.; Greig, Neil

    2005-08-01

    The detoxification enzyme glyoxalase I from L. major has been crystallized. Preliminary molecular-replacement calculations indicate the presence of three glyoxalase I dimers in the asymmetric unit. Glyoxalase I (GLO1) is a putative drug target for trypanosomatids, which are pathogenic protozoa that include the causative agents of leishmaniasis. Significant sequence and functional differences between Leishmania major and human GLO1 suggest that it may make a suitable template for rational inhibitor design. L. major GLO1 was crystallized in two forms: the first is extremely disordered and does not diffract, while the second, an orthorhombic form, produces diffraction to 2.0 Å. Molecular-replacement calculations indicate that there are three GLO1 dimers in the asymmetric unit, which take up a helical arrangement with their molecular dyads arranged approximately perpendicular to the c axis. Further analysis of these data is under way.

  5. GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration

    PubMed Central

    Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng

    2015-01-01

    The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
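    The composability at the heart of the GLO model can be illustrated with two toy operations: each graph-level operation rewrites node display properties, and chaining a few operations yields a familiar technique. The operation names and the tiny graph below are invented for illustration, not the actual GLO-STIX vocabulary.

```python
# Two sketch "graph-level operations" that mutate node display properties.

def position_by_attribute(nodes, attr, axis):
    """GLO sketch: lay nodes out along one axis in order of a data attribute."""
    for i, n in enumerate(sorted(nodes, key=lambda n: n[attr])):
        n[axis] = float(i)
    return nodes

def size_by_degree(nodes, edges):
    """GLO sketch: set each node's size to 1 + its degree."""
    for n in nodes:
        n["size"] = 1 + sum(n["id"] in e for e in edges)
    return nodes

nodes = [{"id": "a", "weight": 3}, {"id": "b", "weight": 1}, {"id": "c", "weight": 2}]
edges = [("a", "b"), ("a", "c")]

# Chaining the two operations re-creates a simple attribute-ordered node view.
position_by_attribute(nodes, "weight", "x")
size_by_degree(nodes, edges)
print(sorted((n["id"], n["x"], n["size"]) for n in nodes))
```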

  6. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space; the second is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
    Program summary
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, this new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods such as the multi-dimensional projections and minimizations used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experiment definitions already in use by the community can also be used with MonteCUBES, while lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References:
    [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
    [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
    [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
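    The Markov Chain Monte Carlo sampling that MonteCUBES performs can be illustrated with a minimal Metropolis sampler. The 1-D Gaussian log-likelihood below (mean 0.5, sigma 0.1) is an invented stand-in for a real experiment's likelihood surface; real oscillation fits are high-dimensional, which is exactly where MCMC beats grid projections.

```python
import math, random

def log_like(theta, mu=0.5, sigma=0.1):
    """Toy log-likelihood (Gaussian); a stand-in for an experiment's chi-square."""
    return -0.5 * ((theta - mu) / sigma) ** 2

def metropolis(n_samples, theta0=0.0, step=0.05, seed=42):
    rng = random.Random(seed)
    theta, ll = theta0, log_like(theta0)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)      # symmetric random-walk proposal
        ll_prop = log_like(prop)
        # Accept with probability min(1, L(prop)/L(theta)).
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

chain = metropolis(20000)
burned = chain[5000:]                            # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))
```

    The post-burn-in sample mean recovers the likelihood peak, and the cost of adding parameters grows far more slowly than for an equivalent grid scan.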

  7. Miniaturized GPCR signaling studies in 1536-well format.

    PubMed

    Shultz, S; Worzella, T; Gallagher, A; Shieh, J; Goueli, S; Hsiao, K; Vidugiriene, J

    2008-09-01

    G protein-coupled receptors (GPCRs) are involved in various physiological processes, such as behavior changes, mood alteration, and regulation of immune-system activity. Thus, GPCRs are popular targets in drug screening, and a well-designed assay can speed up the discovery of novel drug candidates. The Promega cAMP-Glo Assay is a homogenous bioluminescent assay to monitor changes in intracellular cyclic adenosine monophosphate (cAMP) concentrations in response to the effect of an agonist, antagonist, or test compound on GPCRs. Together with the Labcyte Echo 555 acoustic liquid handler and the Deerac Fluidics Equator HTS reagent dispenser, this setup can screen compounds in 96-, 384-, and 1536-well formats for their effects on GPCRs. Here, we describe our optimization of the cAMP-Glo assay in 1536-well format, validate the pharmacology, and assess the assay robustness for HTS. We have successfully demonstrated the use of the assay in primary screening applications of known agonist and antagonist compounds, and confirmed the primary hits via secondary screening. Implementing a high-throughput miniaturized GPCR assay as demonstrated here allows effective screening for potential drug candidates.

  8. Miniaturized GPCR Signaling Studies in 1536-Well Format

    PubMed Central

    Shultz, S.; Worzella, T.; Gallagher, A.; Shieh, J.; Goueli, S.; Hsiao, K.; Vidugiriene, J.

    2008-01-01

    G protein-coupled receptors (GPCRs) are involved in various physiological processes, such as behavior changes, mood alteration, and regulation of immune-system activity. Thus, GPCRs are popular targets in drug screening, and a well-designed assay can speed up the discovery of novel drug candidates. The Promega cAMP-Glo Assay is a homogenous bioluminescent assay to monitor changes in intracellular cyclic adenosine monophosphate (cAMP) concentrations in response to the effect of an agonist, antagonist, or test compound on GPCRs. Together with the Labcyte Echo 555 acoustic liquid handler and the Deerac Fluidics Equator HTS reagent dispenser, this setup can screen compounds in 96-, 384-, and 1536-well formats for their effects on GPCRs. Here, we describe our optimization of the cAMP-Glo assay in 1536-well format, validate the pharmacology, and assess the assay robustness for HTS. We have successfully demonstrated the use of the assay in primary screening applications of known agonist and antagonist compounds, and confirmed the primary hits via secondary screening. Implementing a high-throughput miniaturized GPCR assay as demonstrated here allows effective screening for potential drug candidates. PMID:19137117

  9. GloVis

    USGS Publications Warehouse

    Houska, Treva R.; Johnson, A.P.

    2012-01-01

    The Global Visualization Viewer (GloVis) trifold provides basic information for online access to a subset of satellite and aerial photography collections from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The GloVis (http://glovis.usgs.gov/) browser-based utility allows users to search and download National Aerial Photography Program (NAPP), National High Altitude Photography (NHAP), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Earth Observing-1 (EO-1), Global Land Survey, Moderate Resolution Imaging Spectroradiometer (MODIS), and TerraLook data. Minimum computer system requirements and customer service contact information also are included in the brochure.

  10. Cardiovascular and respiratory mortality attributed to ground-level ozone in Ahvaz, Iran.

    PubMed

    Goudarzi, Gholamreza; Geravandi, Sahar; Foruozandeh, Hossein; Babaei, Ali Akbar; Alavi, Nadali; Niri, Mehdi Vosoughi; Khodayar, Mohammad Javad; Salmanzadeh, Shokrollah; Mohammadi, Mohammad Javad

    2015-08-01

    Ahvaz, the capital city of Khuzestan Province, which produces most of Iran's oil, is notorious for its air pollution, and it has also suffered from dust storms during the past two decades. Emissions from transportation, steel, oil, black carbon, and other industries as anthropogenic sources, and dust storms as a newer phenomenon, are the two major air pollution concerns in Ahvaz; without doubt, they can cause serious problems for the environment and for the people of this megacity. The main objective of the present study was to estimate the impact of ground-level ozone (GLO), a secondary pollutant, on human health. GLO data from four monitoring stations were collected, processed, and then input to a health-effects model. Findings showed that the cumulative cardiovascular and respiratory deaths attributed to GLO were 43 and 173 persons, respectively, with corresponding relative risks of 1.008 (95% CI) and 1.004 (95% CI). Although we did not distinguish between winter and summer for these GLO-attributed mortalities, ozone concentrations were higher in winter than in summer, owing to greater fuel consumption and sub-adiabatic conditions in the troposphere.
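    The attribution step in such health-effect models commonly uses the attributable proportion AP = (RR - 1)/RR applied to a baseline number of deaths; whether this exact form was used in the study is an assumption, and the baseline death counts below are invented for illustration (only the relative risks echo the values reported above).

```python
# Attributable cases from a relative risk, the standard AirQ-style formula:
# AP = (RR - 1) / RR, excess cases = AP * baseline cases in the exposed group.

def attributable_cases(rr, baseline_deaths):
    ap = (rr - 1.0) / rr          # fraction of cases attributable to exposure
    return ap * baseline_deaths

# Hypothetical annual baseline deaths for a large city (illustrative numbers).
cv = attributable_cases(rr=1.008, baseline_deaths=5000)
resp = attributable_cases(rr=1.004, baseline_deaths=2000)
print(round(cv, 1), round(resp, 1))
```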

  11. Bathythermal habitat use by strains of Great Lakes- and Finger Lakes-origin lake trout in Lake Huron after a change in prey fish abundance and composition

    USGS Publications Warehouse

    Bergstedt, Roger A.; Argyle, Ray L.; Krueger, Charles C.; Taylor, William W.

    2012-01-01

    A study conducted in Lake Huron during October 1998–June 2001 found that strains of Great Lakes-origin (GLO) lake trout Salvelinus namaycush occupied significantly higher temperatures than did Finger Lakes-origin (FLO; New York) lake trout based on data from archival (or data storage) telemetry tags that recorded only temperature. During 2002 and 2003, we implanted archival tags that recorded depth as well as temperature in GLO and FLO lake trout in Lake Huron. Data subsequently recorded by those tags spanned 2002–2005. Based on those data, we examined whether temperatures and depths occupied by GLO and FLO lake trout differed during 2002–2005. Temperatures occupied during those years were also compared with occupied temperatures reported for 1998–2001, before a substantial decline in prey fish biomass. Temperatures occupied by GLO lake trout were again significantly higher than those occupied by FLO lake trout. This result supports the conclusion of the previous study. The GLO lake trout also occupied significantly shallower depths than FLO lake trout. In 2002–2005, both GLO and FLO lake trout occupied significantly lower temperatures than they did in 1998–2001. Aside from the sharp decline in prey fish biomass between study periods, the formerly abundant pelagic alewife Alosa pseudoharengus virtually disappeared and the demersal round goby Neogobius melanostomus invaded the lake and became locally abundant. The lower temperatures occupied by lake trout in Lake Huron during 2002–2005 may be attributable to changes in the composition of the prey fish community, food scarcity (i.e., a retreat to cooler water could increase conversion efficiency), or both.

  12. Methylglyoxal-derived advanced glycation end products contribute to negative cardiac remodeling and dysfunction post-myocardial infarction.

    PubMed

    Blackburn, Nick J R; Vulesevic, Branka; McNeill, Brian; Cimenci, Cagla Eren; Ahmadi, Ali; Gonzalez-Gomez, Mayte; Ostojic, Aleksandra; Zhong, Zhiyuan; Brownlee, Michael; Beisswenger, Paul J; Milne, Ross W; Suuronen, Erik J

    2017-09-01

    Advanced glycation end-products (AGEs) have been associated with poorer outcomes after myocardial infarction (MI), and linked with heart failure. Methylglyoxal (MG) is considered the most important AGE precursor, but its role in MI is unknown. In this study, we investigated the involvement of MG-derived AGEs (MG-AGEs) in MI using transgenic mice that over-express the MG-metabolizing enzyme glyoxalase-1 (GLO1). MI was induced in GLO1 mice and wild-type (WT) littermates. At 6 h post-MI, mass spectrometry revealed that MG-H1 (a principal MG-AGE) was increased in the hearts of WT mice, and immunohistochemistry demonstrated that this persisted for 4 weeks. GLO1 over-expression reduced MG-AGE levels at 6 h and 4 weeks, and GLO1 mice exhibited superior cardiac function at 4 weeks post-MI compared to WT mice. Immunohistochemistry revealed greater vascular density and reduced cardiomyocyte apoptosis in GLO1 vs. WT mice. The recruitment of c-kit+ cells and their incorporation into the vasculature (c-kit+CD31+ cells) was higher in the infarcted myocardium of GLO1 mice. MG-AGEs appeared to accumulate in type I collagen surrounding arterioles, prompting investigation in vitro. In culture, the interaction of angiogenic bone marrow cells with MG-modified collagen resulted in reduced cell adhesion, increased susceptibility to apoptosis, fewer progenitor cells, and reduced angiogenic potential. This study reveals that MG-AGEs are produced post-MI and identifies a causative role for their accumulation in the cellular changes, adverse remodeling and functional loss of the heart after MI. MG may represent a novel target for preventing damage and improving function of the infarcted heart.

  13. Global Visualization (GloVis) Viewer

    USGS Publications Warehouse

    ,

    2005-01-01

    GloVis (http://glovis.usgs.gov) is a browse image-based search and order tool that can be used to quickly review the land remote sensing data inventories held at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS). GloVis was funded by the AmericaView project to reduce the difficulty of identifying and acquiring data for user-defined study areas. Updated daily with the most recent satellite acquisitions, GloVis displays data in a mosaic, allowing users to select any area of interest worldwide and immediately view all available browse images for the following Landsat data sets: Multispectral Scanner (MSS), Multi-Resolution Land Characteristics (MRLC), Orthorectified, Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and ETM+ Scan Line Corrector-off (SLC-off). Other data sets include Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS), Aqua MODIS, and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion data.

  14. GloFAS-Seasonal: Operational Seasonal Ensemble River Flow Forecasts at the Global Scale

    NASA Astrophysics Data System (ADS)

    Emerton, Rebecca; Zsoter, Ervin; Smith, Paul; Salamon, Peter

    2017-04-01

    Seasonal hydrological forecasting has potential benefits for many sectors, including agriculture, water resources management and humanitarian aid. At present, no global scale seasonal hydrological forecasting system exists operationally; although smaller scale systems have begun to emerge around the globe over the past decade, a system providing consistent global scale seasonal forecasts would be of great benefit in regions where no other forecasting system exists, and to organisations operating at the global scale, such as disaster relief. We present here a new operational global ensemble seasonal hydrological forecast, currently under development at ECMWF as part of the Global Flood Awareness System (GloFAS). The proposed system, which builds upon the current version of GloFAS, takes the long-range forecasts from the ECMWF System4 ensemble seasonal forecast system (which incorporates the HTESSEL land surface scheme) and uses this runoff as input to the Lisflood routing model, producing a seasonal river flow forecast out to 4 months lead time, for the global river network. The seasonal forecasts will be evaluated using the global river discharge reanalysis, and observations where available, to determine the potential value of the forecasts across the globe. The seasonal forecasts will be presented as a new layer in the GloFAS interface, which will provide a global map of river catchments, indicating whether the catchment-averaged discharge forecast is showing abnormally high or low flows during the 4-month lead time. Each catchment will display the corresponding forecast as an ensemble hydrograph of the weekly-averaged discharge forecast out to 4 months, with percentile thresholds shown for comparison with the discharge climatology. 
The forecast visualisation is based on a combination of the current medium-range GloFAS forecasts and the operational EFAS (European Flood Awareness System) seasonal outlook, and aims to effectively communicate the nature of a seasonal outlook while providing useful information to users and partners. We demonstrate the first version of an operational GloFAS seasonal outlook, outlining the model set-up and presenting a first look at the seasonal forecasts that will be displayed in the GloFAS interface, and discuss the initial results of the forecast evaluation.
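    The high/low-flow flagging described above can be sketched as a percentile comparison against a discharge climatology. All discharge numbers below are invented, and the 20th/80th percentile thresholds and nearest-rank percentile method are illustrative assumptions, not GloFAS settings.

```python
# Flag a weekly-averaged ensemble discharge as abnormally high or low
# relative to percentiles of a discharge climatology.

def percentile(values, p):
    """Nearest-rank percentile of a list (simple, assumption-level choice)."""
    s = sorted(values)
    idx = max(0, min(len(s) - 1, round(p * len(s) / 100) - 1))
    return s[idx]

def flag_week(ensemble_members, climatology, lo=20, hi=80):
    mean = sum(ensemble_members) / len(ensemble_members)
    if mean > percentile(climatology, hi):
        return "high"
    if mean < percentile(climatology, lo):
        return "low"
    return "normal"

climatology = [80, 90, 95, 100, 105, 110, 115, 120, 130, 150]  # m^3/s, one catchment
print(flag_week([160, 170, 150], climatology))
print(flag_week([85, 95, 100], climatology))
print(flag_week([60, 70, 65], climatology))
```

    Repeating this check for each of the ~16 forecast weeks per catchment yields the kind of map layer and hydrograph annotation described above.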

  15. Interdependence of GLO I and PKM2 in the Metabolic shift to escape apoptosis in GLO I-dependent cancer cells.

    PubMed

    Shimada, Nami; Takasawa, Ryoko; Tanuma, Sei-Ichi

    2018-01-15

    Many cancer cells undergo metabolic reprogramming known as the Warburg effect, which is characterized by a greater dependence on glycolysis for ATP generation, even under normoxic conditions. Glyoxalase I (GLO I) is a rate-limiting enzyme involved in the detoxification of cytotoxic methylglyoxal formed during glycolysis, and it is known to be highly expressed in many cancer cells. Thus, specific inhibitors of GLO I are expected to be effective anticancer drugs. We previously discovered a novel GLO I inhibitor named TLSC702. Although TLSC702 showed strong inhibitory activity in the in vitro enzyme assay, higher concentrations were required to induce apoptosis at the cellular level. One proposed reason for this difference is that cancer cells alter their energy metabolism to become more dependent on mitochondrial respiration than on glycolysis (a metabolic shift), thereby avoiding apoptosis induction. We thus assumed that combining TLSC702 with shikonin, a specific inhibitor of pyruvate kinase M2 (PKM2), an enzyme that drives the TCA cycle by supplying pyruvate and is known to be specifically expressed in cancer cells, would have anticancer effects. We herein show the anticancer effects of combination treatment with TLSC702 and shikonin, and a possible anticancer mechanism. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Nanoparticle Control of Void Formation and Expansion in Polymeric and Composite Systems

    DTIC Science & Technology

    2007-02-01

    facilities of GloCal Network Corporation, a Delaware legal entity with facilities in Seattle, Washington. The team succeeded at performing work in the State of...Delaware and Washington concurrently. After December 1, 2006, Professor Seferis and his team will continue the research, exclusively through GloCal

  17. The GLO1 C332 (Ala111) allele confers autism vulnerability: family-based genetic association and functional correlates.

    PubMed

    Gabriele, Stefano; Lombardi, Federica; Sacco, Roberto; Napolioni, Valerio; Altieri, Laura; Tirindelli, Maria Cristina; Gregorj, Chiara; Bravaccio, Carmela; Rousseau, Francis; Persico, Antonio M

    2014-12-01

    Glyoxalase I (GLO1) is a homodimeric Zn(2+)-dependent isomerase involved in the detoxification of methylglyoxal and in limiting the formation of advanced glycation end-products (AGE). We previously found the rs4746 A332 (Glu111) allele of the GLO1 gene, which encodes for glyoxalase I, associated with "unaffected sibling" status in families with one or more children affected by Autism Spectrum Disorder (ASD). To identify and characterize this protective allele, we sequenced GLO1 exons and exon-intron junctions, detecting two additional SNPs (rs1049346, rs1130534) in linkage disequilibrium with rs4746. A family-based association study involving 385 simplex and 20 multiplex Italian families yielded a significant association with autism driven only by the rs4746 C332 (Ala111) allele itself (P < 0.05 and P < 0.001 under additive and dominant/recessive models, respectively). Glyoxalase enzymatic activity was significantly reduced both in leukocytes and in post-mortem temporocortical tissue (N = 38 and 13, respectively) of typically developing C332 allele carriers (P < 0.05 and <0.01), with no difference in Glo1 protein levels. Conversely, AGE amounts were significantly higher in the same C332 post-mortem brains (P = 0.001), with a strong negative correlation between glyoxalase activity and AGE levels (τ = -0.588, P < 0.01). Instead, 19 autistic brains show a dysregulation of the glyoxalase-AGE axis (τ = -0.209, P = 0.260), with significant blunting of glyoxalase activity and AGE amounts compared to controls (P < 0.05), and loss of rs4746 genotype effects. In summary, the GLO1 C332 (Ala111) allele confers autism vulnerability by reducing brain glyoxalase activity and enhancing AGE formation, but years after an autism diagnosis the glyoxalase-AGE axis appears profoundly disrupted, with loss of C332 allelic effects. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Practical applications of the geographic coordinate data base in Arkansas

    Treesearch

    Mickie Warwick; Don C. Bragg

    2005-01-01

    Though not intended for these applications, the General Land Office (GLO) survey notes are a primary source of historical, ecological, and cultural information, making it imperative that their spatial coordinates be as reliable as possible. The Geographic Coordinate Data Base (GCDB) is a statistically-based coordinate fitting program that uses the GLO notes and other...

  19. More than Just Fun and Games: BSG and Glo-Bus as Strategic Education Instruments

    ERIC Educational Resources Information Center

    Karriker, Joy H.; Aaron, Joshua R.

    2014-01-01

    Simulations like the BSG and Glo-Bus allow students the opportunity to practice their integrated, strategic management skills in a relatively risk-free environment or "live case." We review these games and address their strengths, along with the challenges associated with their classroom application. Because of their sound designs and…

  20. Shuttle-based measurements: GLO ultraviolet earthlimb view

    NASA Astrophysics Data System (ADS)

    Gardner, James A.; Murad, Edmond; Viereck, Rodney A.; Knecht, David J.; Pike, Charles P.; Broadfoot, A. Lyle

    1996-11-01

    The GLO experiment is an ongoing shuttle-based spectrograph/imager project that has returned ultraviolet (100 - 400 nm) limb views. High spectral (0.35 nm FWHM) and temporal (4 s) resolution spectra include simultaneous altitude profiles (in the range of 80 - 400 km tangent height with 10 km resolution) of dayglow and nightglow features. Measured emissions include the NO gamma, N2 Vegard-Kaplan and second positive, N2+ first negative, and O2 Herzberg I band systems, and both atomic and cation lines of N, O, and Mg. These data represent a low solar activity benchmark for future observations. We report on the status of the GLO project, which included three space flights in 1995, and present spectral data on important ultraviolet band systems.

  1. Tracking cross-contamination transfer dynamics at a mock retail deli market using GloGerm.

    PubMed

    Maitland, Jessica; Boyer, Renee; Gallagher, Dan; Duncan, Susan; Bauer, Nate; Kause, Janell; Eifert, Joseph

    2013-02-01

    Ready-to-eat (RTE) deli meats are considered a food at high risk for causing foodborne illness. Deli meats are listed as the highest risk RTE food vehicle for Listeria monocytogenes. Cross-contamination in the retail deli market may contribute to spread of pathogens to deli meats. Understanding potential cross-contamination pathways is essential for reducing the risk of contaminating various products. The objective of this study was to track cross-contamination pathways through a mock retail deli market using an abiotic surrogate, GloGerm, to visually represent how pathogens may spread through the deli environment via direct contact with food surfaces. Six contamination origination sites (slicer blade, meat chub, floor drain, preparation table, employee's glove, and employee's hands) were evaluated separately. Each site was inoculated with 20 ml of GloGerm, and a series of standard deli operations were completed (approximately 10 min of work). Photographs were then taken under UV illumination to visualize spread of GloGerm throughout the deli. A sensory panel evaluated the levels of contamination on the resulting contaminated surfaces. Five of the six contamination origination sites were associated with transfer of GloGerm to the deli case door handle, slicer blade, meat chub, preparation table, and the employee's gloves. Additional locations became contaminated (i.e., deli case shelf, prep table sink, and glove box), but this contamination was not consistent across all trials. Contamination did not spread from the floor drain to any food contact surfaces. The findings of this study reinforce the need for consistent equipment cleaning and food safety practices among deli workers to minimize cross-contamination.

  2. Retrograde transport from the yeast Golgi is mediated by two ARF GAP proteins with overlapping function.

    PubMed Central

    Poon, P P; Cassel, D; Spang, A; Rotman, M; Pick, E; Singer, R A; Johnston, G C

    1999-01-01

    ARF proteins, which mediate vesicular transport, have little or no intrinsic GTPase activity. They rely on the actions of GTPase-activating proteins (GAPs) for their function. The in vitro GTPase activity of the Saccharomyces cerevisiae ARF proteins Arf1 and Arf2 is stimulated by the yeast Gcs1 protein, and in vivo genetic interactions between arf and gcs1 mutations implicate Gcs1 in vesicular transport. However, the Gcs1 protein is dispensable, indicating that additional ARF GAP proteins exist. We show that the structurally related protein Glo3, which is also dispensable, also exhibits ARF GAP activity. Genetic and in vitro approaches reveal that Glo3 and Gcs1 have an overlapping essential function at the endoplasmic reticulum (ER)-Golgi stage of vesicular transport. Mutant cells deficient for both ARF GAPs cannot proliferate, undergo a dramatic accumulation of ER and are defective for protein transport between ER and Golgi. The glo3Δ and gcs1Δ single mutations each interact with a sec21 mutation that affects a component of COPI, which mediates vesicular transport within the ER-Golgi shuttle, while increased dosage of the BET1, BOS1 and SEC22 genes, encoding members of a v-SNARE family that functions within the ER-Golgi shuttle, alleviates the effects of a glo3Δ mutation. An in vitro assay indicates that efficient retrieval from the Golgi to the ER requires these two proteins. These findings suggest that the Glo3 and Gcs1 ARF GAPs mediate retrograde vesicular transport from the Golgi to the ER. PMID:9927415

  3. Mapping pre-European settlement vegetation at fine resolutions using a hierarchical Bayesian model and GIS

    Treesearch

    Hong S. He; Daniel C. Dey; Xiuli Fan; Mevin B. Hooten; John M. Kabrick; Christopher K. Wikle; Zhaofei. Fan

    2007-01-01

    In the Midwestern United States, the General Land Office (GLO) survey records provide the only reasonably accurate data source of forest composition and tree species distribution at the time of pre-European settlement (circa late 1800 to early 1850). However, GLO data have two fundamental limitations: coarse spatial resolutions (the square mile section and half mile...

  4. Generation of the first structure-based pharmacophore model containing a selective "zinc binding group" feature to identify potential glyoxalase-1 inhibitors.

    PubMed

    Al-Balas, Qosay; Hassan, Mohammad; Al-Oudat, Buthina; Alzoubi, Hassan; Mhaidat, Nizar; Almaaytah, Ammar

    2012-11-22

    Within this study, a unique 3D structure-based pharmacophore model of the enzyme glyoxalase-1 (Glo-1) has been revealed. Glo-1 is considered a zinc metalloenzyme in which binding of the inhibitor to the zinc atom at the active site is crucial. To our knowledge, this is the first pharmacophore model that has a selective feature for a "zinc binding group", which has been customized within the structure-based pharmacophore model of Glo-1 to extract, solely from database screening, ligands that possess functional groups able to bind the zinc atom. In addition, an extensive 2D similarity search using three diverse similarity techniques (Tanimoto, Dice, Cosine) has been performed over the commercially available "Zinc Clean Drug-Like Database" of around 10 million compounds to help find suitable inhibitors for this enzyme based on known inhibitors from the literature. The resultant hits were mapped over the structure-based pharmacophore, and the successful hits were further docked using three docking programs with different pose-fitting and scoring techniques (GOLD, LibDock, CDOCKER). Nine candidates with the highest consensus scoring from docking were suggested as novel Glo-1 inhibitors containing the "zinc binding group".
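    The three 2D similarity techniques named above are simple overlap formulas on molecular fingerprints. As a minimal illustration (our own sketch, not code from the study), treating each fingerprint as the set of its "on" bit positions:

```python
from math import sqrt

def similarities(fp_a: set, fp_b: set) -> dict:
    """Tanimoto, Dice and Cosine similarity for two bit-set fingerprints."""
    a, b = len(fp_a), len(fp_b)        # bits set in each fingerprint
    c = len(fp_a & fp_b)               # bits set in both
    return {
        "tanimoto": c / (a + b - c),   # |A ∩ B| / |A ∪ B|
        "dice":     2 * c / (a + b),
        "cosine":   c / sqrt(a * b),
    }

# Toy fingerprints: bit positions that are "on" (illustrative only)
s = similarities({1, 2, 3, 4}, {3, 4, 5})
print(s)  # tanimoto = 0.4, dice ≈ 0.571, cosine ≈ 0.577
```

    In practice such fingerprints would come from a cheminformatics toolkit; the set representation here just makes the three formulas explicit.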

  5. Pneumocandin biosynthesis: involvement of a trans-selective proline hydroxylase.

    PubMed

    Houwaart, Stefanie; Youssar, Loubna; Hüttel, Wolfgang

    2014-11-03

    Echinocandins are cyclic nonribosomal hexapeptides based mostly on nonproteinogenic amino acids and displaying strong antifungal activity. Despite previous studies on their biosynthesis by fungi, the origin of three amino acids, trans-4- and trans-3-hydroxyproline, as well as trans-3-hydroxy-4-methylproline, is still unknown. Here we describe the identification, overexpression, and characterization of GloF, the first eukaryotic α-ketoglutarate/Fe(II)-dependent proline hydroxylase, from the pneumocandin biosynthesis cluster of the fungus Glarea lozoyensis ATCC 74030. In in vitro transformations with L-proline, GloF generates trans-4- and trans-3-hydroxyproline simultaneously in a ratio of 8:1; the latter reaction was previously unknown for proline hydroxylase catalysis. trans-4-Methyl-L-proline is converted into the corresponding trans-3-hydroxyproline. All three hydroxyprolines required for the biosynthesis of the echinocandins pneumocandins A0 and B0 in G. lozoyensis are thus provided by GloF. Sequence analyses revealed that GloF is not related to bacterial proline hydroxylases, and none of the putative proteins with high sequence similarity in the databases has been characterized so far. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Mutation in Torenia fournieri Lind. UFO homolog confers loss of TfLFY interaction and results in a petal to sepal transformation.

    PubMed

    Sasaki, Katsutomo; Yamaguchi, Hiroyasu; Aida, Ryutaro; Shikata, Masahito; Abe, Tomoko; Ohtsubo, Norihiro

    2012-09-01

    We identified a Torenia fournieri Lind. mutant (no. 252) that exhibited a sepaloid phenotype in which the second whorls were changed to sepal-like organs. This mutant had no stamens, and the floral organs consisted of sepals and carpels. Although the expression of a torenia class B MADS-box gene, GLOBOSA (TfGLO), was abolished in the 252 mutant, no mutation of TfGLO was found. Among torenia homologs such as APETALA1 (AP1), LEAFY (LFY), and UNUSUAL FLORAL ORGANS (UFO), which regulate expression of class B genes in Arabidopsis, only accumulation of the TfUFO transcript was diminished in the 252 mutant. Furthermore, a missense mutation was found in the coding region of the mutant TfUFO. Intact TfUFO complemented the mutant phenotype whereas mutated TfUFO did not; in addition, the transgenic phenotype of TfUFO-knockdown torenias coincided with the mutant phenotype. Yeast two-hybrid analysis revealed that the mutated TfUFO lost its ability to interact with TfLFY protein. In situ hybridization analysis indicated that the transcripts of TfUFO and TfLFY were partially accumulated in the same region. These results clearly demonstrate that the defect in TfUFO caused the sepaloid phenotype in the 252 mutant due to the loss of interaction with TfLFY. © 2012 The Authors. The Plant Journal © 2012 Blackwell Publishing Ltd.

  7. NASA Flight Operations of Ikhana and Global Hawk

    NASA Technical Reports Server (NTRS)

    Posada, Herman D.

    2009-01-01

    This viewgraph presentation reviews the flight operations of Ikhana and Global Hawk. The Ikhana fire mission modifications, ground systems, flight operations, range safety zones, and primary and secondary emergency landing sites are described, along with the Ikhana western states fire missions of 2007, the Global Hawk specifications, the Global Hawk Pacific Science Campaign (GloPac '09), and the GloPac payloads.

  8. Identification of three wheat globulin genes by screening a Triticum aestivum BAC genomic library with cDNA from a diabetes-associated globulin

    PubMed Central

    Loit, Evelin; Melnyk, Charles W; MacFarlane, Amanda J; Scott, Fraser W; Altosaar, Illimar

    2009-01-01

    Background Exposure to dietary wheat proteins in genetically susceptible individuals has been associated with increased risk for the development of Type 1 diabetes (T1D). Recently, a wheat protein encoded by cDNA WP5212 has been shown to be antigenic in mice, rats and humans with autoimmune T1D. To investigate the genomic origin of the identified wheat protein cDNA, a hexaploid wheat genomic library from the Glenlea cultivar was screened. Results Three unique wheat globulin genes, Glo-3A, Glo-3B and Glo-3C, were identified. We describe the genomic structure of these genes and their expression pattern in wheat seeds. The Glo-3A gene shared 99% identity with the cDNA of WP5212 at the nucleotide and deduced amino acid levels, indicating that we have identified the gene(s) encoding wheat protein WP5212. Southern analysis revealed the presence of multiple copies of Glo-3-like sequences in all wheat samples, including seed of hexaploid, tetraploid and diploid species. Aleurone and embryo tissue specificity of WP5212 gene expression, suggested by promoter region analysis, which demonstrated an absence of endosperm-specific cis elements, was confirmed by immunofluorescence microscopy using anti-WP5212 antibodies. Conclusion Taken together, the results indicate that a diverse group of globulins exists in wheat, some of which could be associated with the pathogenesis of T1D in some susceptible individuals. These data expand our knowledge of specific wheat globulins and will enable further elucidation of their role in wheat biology and human health. PMID:19615078

  9. Introducing Euro-Glo, a rare earth metal chelate with numerous applications for the fluorescent localization of myelin and amyloid plaques in brain tissue sections.

    PubMed

    Schmued, Larry; Raymick, James

    2017-03-01

    The vast majority of fluorochromes are organic in nature, and none of the few existing chelates have been applied as histological tracers for localizing brain anatomy and pathology. In this study, we have developed and characterized a europium chelate, Euro-Glo (E-G), with the ability to fluorescently label normal and pathological myelin in control and toxicant-exposed rats, as well as the amyloid plaques in aged AD/Tg mice. This study demonstrates how Euro-Glo can be used for the detailed labeling of both normal myelination in the control rat and myelin pathology in the kainic acid-exposed rat. In addition, this study demonstrates that E-G labels the shell of amyloid plaques in an AD/Tg mouse model of Alzheimer's disease red, while the plaque core appears blue. The observed E-G staining pattern is compared with that of well-characterized tracers specific for the localization of myelin (Black-Gold II), degenerating neurons (Fluoro-Jade C), A-beta aggregates (Amylo-Glo) and glycolipids (PAS). This study represents the first time a rare earth metal (REM) chelate has been used as a histochemical tracer in the brain. This novel tracer exhibits numerous advantages over conventional organic fluorophores, including high-intensity emission, high resistance to fading, compatibility with multiple labeling protocols, a large Stokes shift, and an absence of signal bleed-through into other filters. Euro-Glo represents the first fluorescent metal chelate to be used as a histochemical tracer, specifically to localize normal and pathological myelin as well as amyloid plaques. Copyright © 2016. Published by Elsevier B.V.

  10. Up-regulation of glyoxalase 1 by mangiferin prevents diabetic nephropathy progression in streptozotocin-induced diabetic rats.

    PubMed

    Liu, Yao-Wu; Zhu, Xia; Zhang, Liang; Lu, Qian; Wang, Jian-Yun; Zhang, Fan; Guo, Hao; Yin, Jia-Le; Yin, Xiao-Xing

    2013-12-05

    Advanced glycation endproducts (AGEs) and their precursor methylglyoxal are associated with diabetic nephropathy (DN). Mangiferin has many beneficial biological activities, including anti-inflammatory, anti-oxidative and anti-diabetic effects. We investigated the effect of mangiferin on DN and its potential mechanism associated with glyoxalase 1 (Glo-1), a detoxifying enzyme of methylglyoxal, in a streptozotocin-induced rat model of DN. Diabetic rats were treated orally with mangiferin (15, 30, and 60 mg/kg) or distilled water for 9 weeks. Kidney tissues were collected for morphologic observation and the determination of associated biochemical parameters. Cultured mesangial cells were used to measure the activity of Glo-1 in vitro. Chronic treatment with mangiferin significantly ameliorated renal dysfunction in diabetic rats, as evidenced by decreases in albuminuria, blood urea nitrogen, kidney weight index, periodic acid-Schiff-positive mesangial matrix area, glomerular extracellular matrix expansion and accumulation, and glomerular basement membrane thickness. Meanwhile, mangiferin treatment caused substantial increases in the enzymatic activity of Glo-1 in vivo and in vitro and in the protein and mRNA expression of Glo-1, and reduced the levels of AGEs and the protein and mRNA expression of their receptor (RAGE) in the renal cortex of diabetic rats. Moreover, mangiferin significantly attenuated oxidative stress damage, as reflected by lowered malondialdehyde and increased glutathione levels in the kidneys of diabetic rats. However, mangiferin did not affect the blood glucose or body weight of diabetic rats. Therefore, mangiferin can remarkably ameliorate DN in rats by inhibiting the AGEs/RAGE axis and oxidative stress damage, and Glo-1 may be a target for mangiferin action. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Dicarbonyl stress and glyoxalase enzyme system regulation in human skeletal muscle.

    PubMed

    Mey, Jacob T; Blackburn, Brian K; Miranda, Edwin R; Chaves, Alec B; Briller, Joan; Bonini, Marcelo G; Haus, Jacob M

    2018-02-01

    Skeletal muscle insulin resistance is a hallmark of Type 2 diabetes (T2DM) and may be exacerbated by protein modifications by methylglyoxal (MG), known as dicarbonyl stress. The glyoxalase enzyme system composed of glyoxalase 1/2 (GLO1/GLO2) is the natural defense against dicarbonyl stress, yet its protein expression, activity, and regulation remain largely unexplored in skeletal muscle. Therefore, this study investigated dicarbonyl stress and the glyoxalase enzyme system in the skeletal muscle of subjects with T2DM (age: 56 ± 5 yr; BMI: 32 ± 2 kg/m²) compared with lean healthy control subjects (LHC; age: 27 ± 1 yr; BMI: 22 ± 1 kg/m²). Skeletal muscle biopsies obtained from the vastus lateralis at basal and insulin-stimulated states of the hyperinsulinemic (40 mU·m⁻²·min⁻¹)-euglycemic (5 mM) clamp were analyzed for proteins related to dicarbonyl stress and glyoxalase biology. At baseline, T2DM had increased carbonyl stress and lower GLO1 protein expression (-78.8%), which inversely correlated with BMI, percent body fat, and HOMA-IR, while positively correlating with clamp-derived glucose disposal rates. T2DM also had lower NRF2 protein expression (-31.6%), which is a positive regulator of GLO1, while Keap1 protein expression, a negative regulator of GLO1, was elevated (207%). Additionally, insulin stimulation during the clamp had a differential effect on NRF2, Keap1, and MG-modified protein expression. These data suggest that dicarbonyl stress and the glyoxalase enzyme system are dysregulated in T2DM skeletal muscle and may underlie skeletal muscle insulin resistance. Whether these phenotypic differences contribute to the development of T2DM warrants further investigation.
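    The HOMA-IR index used above is a standard piece of arithmetic; as a reference sketch (the input values below are illustrative, not data from the study):

```python
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance.

    HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5
    (divide by 405 instead when glucose is given in mg/dL).
    """
    return glucose_mmol_l * insulin_uu_ml / 22.5

# Illustrative fasting values: glucose 5.0 mmol/L, insulin 10 uU/mL
print(round(homa_ir(5.0, 10.0), 2))  # 2.22
```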

  12. Switch of Sensitivity Dynamics Revealed with DyGloSA Toolbox for Dynamical Global Sensitivity Analysis as an Early Warning for System's Critical Transition

    PubMed Central

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt, irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as climate, economies and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in the dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with existing SA techniques. DyGloSA is based on SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling significant reduction of the computational time (up to 12 times). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (version R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA runs on 32- and 64-bit Windows and Linux systems. PMID:24367574
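    DyGloSA itself is MATLAB code; purely as an illustration of what a variance-based GPSA method computes, here is a Saltelli-style Monte Carlo estimate of first-order Sobol indices on the standard Ishigami test function (the function, sample size and seed are our own choices and are not part of the toolbox):

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard GSA test function; inputs uniform on [-pi, pi]."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order_sobol(f, d, n=50_000, seed=0):
    """Saltelli pick-freeze estimator of first-order Sobol indices S_i."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

S = first_order_sobol(ishigami, d=3)
print(S.round(2))  # analytic values are ~[0.31, 0.44, 0.00]
```

    Tracking how such indices evolve over a model's time course, rather than at a single point, is the "dynamical" aspect the toolbox adds.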

  14. Fractionation, amino acid profiles, antimicrobial and free radical scavenging activities of Citrullus lanatus seed protein.

    PubMed

    Dash, Priyanka; Ghosh, Goutam

    2017-12-01

    In the present study, a modified Osborne fractionation method was followed to isolate albumin (Calb), globulin (Cglo), prolamin (Cpro) and glutelin (Cglu) successively from seeds of Citrullus lanatus (watermelon). This work investigated the antimicrobial and antioxidant activities of the isolated protein fractions of C. lanatus seed. Amino acid composition and molecular weight distribution were determined to establish their relationship with antimicrobial and antioxidant activity. Among all the fractions, Cpro was found to be most effective against A. baumannii, followed by Calb and Cglo. The results showed that the growth inhibition by these protein fractions differed significantly from each other (p ≤ 0.05). In terms of antioxidant potential, Cglo exhibited the strongest antioxidant capacity while Cglu showed the weakest.

  15. Nano Particle Control of Void Formation and Expansion in Polymeric and Composite Systems

    DTIC Science & Technology

    2009-05-01

    Performing organization: Glocal Network Corporation, 3131 Western Avenue Ste M-526, Seattle, WA 98121 ...Scientific Research, Arlington, VA 22203-1954. Principal Investigator: Dr. James C. Seferis, Polymeric Composites Laboratory, GloCal Network... F.R.E.E.D.O.M., with the flexibility of a profit research and development organization, GloCal Network Corporation, with both entities doing business as the

  16. The Earth Charter Goes Interactive and Live with e-GLO: Using New Media to Train Youth Leaders in Sustainability on Both Sides of the Digital Divide

    ERIC Educational Resources Information Center

    Sheehan, Mike; Laitinen, Jaana

    2010-01-01

    For ten years now the Earth Charter has been inspiring global citizens to engage in conversations and actions that benefit everybody. This article describes e-GLO, the Earth Charter Global Learning Opportunity, the Earth Charter International's semester-long, online leadership course inspired by the Earth Charter. It is developed and implemented…

  17. Concomitant Loss of the Glyoxalase System and Glycolysis Makes the Uncultured Pathogen “Candidatus Liberibacter asiaticus” an Energy Scavenger

    PubMed Central

    Jain, Mukesh; Munoz-Bodnar, Alejandra

    2017-01-01

    ABSTRACT Methylglyoxal (MG) is a cytotoxic, nonenzymatic by-product of glycolysis that readily glycates proteins and DNA, resulting in carbonyl stress. Glyoxalase I and II (GloA and GloB) sequentially convert MG into d-lactic acid using glutathione (GSH) as a cofactor. The glyoxalase system is essential for the mitigation of MG-induced carbonyl stress, preventing subsequent cell death, and recycling GSH for maintenance of cellular redox poise. All pathogenic liberibacters identified to date are uncultured, including “Candidatus Liberibacter asiaticus,” a psyllid endosymbiont and causal agent of the severely damaging citrus disease “huanglongbing.” In silico analysis revealed the absence of gloA in “Ca. Liberibacter asiaticus” and all other pathogenic liberibacters. Both gloA and gloB are present in Liberibacter crescens, the only liberibacter that has been cultured. L. crescens GloA was functional in a heterologous host. Marker interruption of gloA in L. crescens appeared to be lethal. Key glycolytic enzymes were either missing or significantly downregulated in “Ca. Liberibacter asiaticus” compared to (cultured) L. crescens. Marker interruption of sut, a sucrose transporter gene in L. crescens, decreased its ability to take up exogenously supplied sucrose in culture. “Ca. Liberibacter asiaticus” lacks a homologous sugar transporter but has a functional ATP/ADP translocase, enabling it to thrive both in psyllids and in the sugar-rich citrus phloem by (i) avoiding sucrose uptake, (ii) avoiding MG generation via glycolysis, and (iii) directly importing ATP from the host cell. MG detoxification enzymes appear to be predictive of “Candidatus” status for many uncultured pathogenic and environmental bacteria. IMPORTANCE Discovered more than 100 years ago, the glyoxalase system is thought to be present across all domains of life and fundamental to cellular growth and viability. 
The glyoxalase system protects against carbonyl stress caused by methylglyoxal (MG), a highly reactive, mutagenic and cytotoxic compound that is nonenzymatically formed as a by-product of glycolysis. The uncultured alphaproteobacterium “Ca. Liberibacter asiaticus” is a well-adapted endosymbiont of the Asian citrus psyllid, which transmits the severely damaging citrus disease “huanglongbing.” “Ca. Liberibacter asiaticus” lacks a functional glyoxalase pathway. We report here that the bacterium is able to thrive both in psyllids and in the sugar-rich citrus phloem by (i) avoiding sucrose uptake, (ii) avoiding (significant) MG generation via glycolysis, and (iii) directly importing ATP from the host cell. We hypothesize that failure to culture “Ca. Liberibacter asiaticus” is at least partly due to its dependence on host cells for both ATP and MG detoxification. PMID:28939611

  18. Evaluating the Predictability of South-East Asian Floods Using ECMWF and GloFAS Forecasts

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.

    2017-12-01

    Between July and September 2017, the monsoon season caused widespread heavy rainfall and severe floods across countries in South-East Asia, notably in India, Nepal and Bangladesh, with deadly consequences. According to the U.N., in Bangladesh 140 people lost their lives and 700,000 homes were destroyed; in Nepal at least 143 people died, and more than 460,000 people were forced to leave their homes; in India there were 726 victims of flooding and landslides, 3 million people were affected by the monsoon floods and 2000 relief camps were established. The monsoon season happens regularly every year in South Asia, but local authorities reported the last monsoon season as the worst in several years. What made the last monsoon season particularly severe in certain regions? Are these causes clear from the forecasts? Regarding the meteorological characterization of the event, an analysis of forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) for different lead times (from seasonal to short range) will be shown to evaluate how far in advance this event was predicted, and to open discussion of the factors that led to such a severe event. To illustrate the hydrological aspects, forecasts from the Global Flood Awareness System (GloFAS) will be shown. GloFAS is developed at ECMWF in co-operation with the European Commission's Joint Research Centre (JRC) and with the support of national authorities and research institutions such as the University of Reading. It will become operational at the end of 2017 as part of the Copernicus Emergency Management Service. GloFAS couples state-of-the-art weather forecasts with a hydrological model to provide a cross-border system with early flood guidance information to help humanitarian agencies and national hydro-meteorological services strengthen and improve forecasting capacity, preparedness and mitigation of natural hazards.
In this case GloFAS has shown good potential to become a useful tool for better and earlier preparedness. For instance, first tests showed that by 28th July GloFAS was able to forecast that a relatively large flood peak would probably occur between 13th and 22nd August. An actual flood peak was recorded around 16th August according to the Bangladeshi Flood Forecasting Centre.

  19. Airglow studies using observations made with the GLO instrument on the Space Shuttle

    NASA Astrophysics Data System (ADS)

    Alfaro Suzan, Ana Luisa

    2009-12-01

    Our understanding of Earth's upper atmosphere has advanced tremendously over the last few decades due to our enhanced capacity for making remote observations from space. Space-based observations of Earth's daytime and nighttime airglow emissions are very good examples of such enhancements to our knowledge. The terrestrial nighttime airglow, or nightglow, is barely discernible to the naked eye as viewed from Earth's surface. However, it is clearly visible from space, as most astronauts have been amazed to report. The nightglow consists of emissions of ultraviolet, visible and near-infrared radiation from electronically excited oxygen molecules and atoms and vibrationally excited OH molecules. It mostly emanates from a 10 km thick layer located about 100 km above Earth's surface. Various photochemical models have been proposed to explain the production of the emitting species. In this study, some unique observations of Earth's nightglow made with the GLO instrument on NASA's Space Shuttle are analyzed to assess the proposed excitation models. Previous analyses of these observations by Broadfoot and Gardner (2001), performed using a 1-D inversion technique, have indicated significant spatial structures and have raised serious questions about the proposed nightglow excitation models. However, the observation of such strong spatial structures calls into serious question the appropriateness of the adopted 1-D inversion technique and, therefore, the validity of the conclusions. In this study, a more rigorous 2-D tomographic inversion technique is developed and applied to the available GLO data to determine whether some of the apparent discrepancies can be explained by the limitations of the previously applied 1-D inversion approach. The results of this study still reveal some potentially serious inadequacies in the proposed photochemical models. However, alternative explanations for the discrepancies between the GLO observations and the model expectations are suggested.
These include upper atmospheric tidal effects and possible errors in the pointing of the GLO instrument.
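
    The abstract contrasts a 1-D inversion with a 2-D tomographic inversion of limb-emission data. As a rough illustration of the general idea only (not the authors' actual method), the sketch below implements the classic Kaczmarz/ART scheme, which recovers a discretized emission field x from line-of-sight integrals b = A·x by cycling over the measurement rows; the geometry matrix and data here are purely hypothetical.

```python
def art_invert(A, b, n_iters=100, relax=1.0):
    """Kaczmarz/ART solver for a consistent linear system b = A.x.

    Each step projects the current estimate onto the hyperplane defined by
    one line-of-sight measurement; cycling over all rows converges to an
    emission field that reproduces the observed column brightnesses.
    """
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iters):
        for a_row, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_row)
            if norm2 == 0.0:
                continue  # empty ray: intersects no grid cells
            resid = (b_i - sum(a * xj for a, xj in zip(a_row, x))) / norm2
            x = [xj + relax * resid * a for xj, a in zip(x, a_row)]
    return x
```

    In this picture, a 1-D inversion corresponds to a geometry matrix built from a single vertical profile; the 2-D case adds columns for the horizontal dimension, which is what allows genuine spatial structure to be separated from inversion artifacts.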

  20. Teaching an Old Client New Tricks - the GloVIS Global Visualization Viewer after 14 Years

    NASA Astrophysics Data System (ADS)

    Meyer, D. J.; Steinwand, D.; Lemig, K.; Davis, B.; Werpy, J.; Quenzer, R.

    2014-12-01

    The US Geological Survey's Global Visualization Viewer (GloVIS) is a web-based, visual search and discovery tool used to access imagery from aircraft and space-based imaging systems. GloVIS was introduced shortly after the launch of Landsat 7 to provide a visual client to select images acquired by the Enhanced Thematic Mapper Plus. Since then, it has been expanded to search other Landsat imagery (Multi-spectral Scanner, Thematic Mapper, Operational Land Imager), imagery from a variety of NASA instruments (Moderate Resolution Imaging Spectroradiometer, Advanced Spaceborne Thermal Emissions and Reflection Radiometer, Advanced Land Imager, Hyperion), along with images from high-resolution airborne photography and special collections representing decades-long observations. GloVIS incorporated a number of features considered novel at its original release, such as rapid visual browse, and the ability to use one type of satellite observation (e.g., vegetation seasonality curves derived from the Advanced Very High Resolution Radiometer) to assist in the selection of another (e.g., Landsat). After 14 years, the GloVIS client has gained a large following, having served millions of images to hundreds of thousands of users, but is due for a major re-design. Described here are a set of guiding principles driving the re-design, the methodology used to understand how users discover and retrieve imagery, and candidate technologies to be leveraged in the re-design. The guiding principles include (1) visual co-discovery - the ability to browse and select imagery from diverse sources simultaneously; (2) user-centric design - understanding user needs prior to design and involving users throughout the design process; (3) adaptability - the use of flexible design to permit rapid incorporation of new capabilities; and (4) interoperability - the use of services, conventions and protocols to permit interaction with external sources of Earth science imagery.

  1. Neuronal damage and shortening of lifespan in C. elegans by peritoneal dialysis fluid: Protection by glyoxalase-1

    PubMed Central

    Schlotterer, Andrea; Pfisterer, Friederike; Kukudov, Georgi; Heckmann, Britta; Henriquez, Daniel; Morath, Christian; Krämer, Bernhard K.; Hammes, Hans-Peter; Schwenger, Vedat; Morcos, Michael

    2018-01-01

    Glucose and glucose degradation products (GDPs), contained in peritoneal dialysis (PD) fluids, contribute to the formation of advanced glycation end-products (AGEs). Local damaging effects, resulting in functional impairment of the peritoneal membrane, are well studied. It is also supposed that detoxification of AGE precursors by glyoxalase-1 (GLO1) has beneficial effects on GDP-mediated toxicity. The aim of the current study was to analyze systemic detrimental effects of PD fluids and their prevention by glyoxalase-1. Wild-type and GLO1-overexpressing Caenorhabditis elegans (C. elegans) were cultivated in the presence of low- and high-GDP PD fluids containing 1.5 or 4% glucose. Lifespan, neuronal integrity and neuronal functions were subsequently studied. The higher concentrations of glucose and GDP content resulted in a decrease of maximum lifespan by 2 (P<0.01) and 9 days (P<0.001), respectively. Exposure to low- and high-GDP fluids caused reduction of neuronal integrity by 34 (P<0.05) and 41% (P<0.05). Cultivation of animals in the presence of low-GDP fluid containing 4% glucose caused significant impairment of neuronal function, reducing relative and absolute head motility by 58.5 (P<0.01) and 56.7% (P<0.01), respectively; and relative and absolute tail motility by 55.1 (P<0.05) and 55.0% (P<0.05), respectively. Taken together, GLO1 overexpression protected from glucose-induced lifespan reduction, neurostructural damage and neurofunctional damage under low-GDP conditions. In conclusion, both glucose and GDP content in PD fluids have systemic impact on the lifespan and neuronal integrity of C. elegans. Detoxification of reactive metabolites by GLO1 overexpression was sufficient to protect lifespan, neuronal integrity and neuronal function in a low-GDP environment. These data emphasize the relevance of the GLO1 detoxifying pathway as a potential therapeutic target in the treatment of reactive metabolite-mediated pathologies.

  2. Delivery of Unmanned Aerial Vehicle Data

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sullivan, Donald V.

    2011-01-01

    To support much of NASA's Upper Atmosphere Research Program science, NASA has acquired two Global Hawk Unmanned Aerial Vehicles (UAVs). Two major missions are currently planned using the Global Hawk: the Global Hawk Pacific (GloPac) and the Genesis and Rapid Intensification Processes (GRIP) missions. This paper briefly describes GloPac and GRIP, the concept of operations and the resulting requirements and communication architectures. Also discussed are requirements for future missions that may use satellite systems and networks owned and operated by third parties.

  3. GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems

    PubMed Central

    Rellán-Álvarez, Rubén; Lobet, Guillaume; Lindner, Heike; Pradier, Pierre-Luc; Sebastian, Jose; Yee, Muh-Ching; Geng, Yu; Trontin, Charlotte; LaRue, Therese; Schrager-Lavelle, Amanda; Haney, Cara H; Nieu, Rita; Maloof, Julin; Vogel, John P; Dinneny, José R

    2015-01-01

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes. DOI: http://dx.doi.org/10.7554/eLife.07597.001 PMID:26287479

  4. Transparent ceramic scintillators for gamma spectroscopy and MeV imaging

    NASA Astrophysics Data System (ADS)

    Cherepy, N. J.; Seeley, Z. M.; Payne, S. A.; Swanberg, E. L.; Beck, P. R.; Schneberk, D. J.; Stone, G.; Perry, R.; Wihl, B.; Fisher, S. E.; Hunter, S. L.; Thelin, P. A.; Thompson, R. R.; Harvey, N. M.; Stefanik, T.; Kindem, J.

    2015-09-01

    We report on the development of two new mechanically rugged, high light yield transparent ceramic scintillators: (1) Ce-doped Gd-garnet for gamma spectroscopy, and (2) Eu-doped Gd-Lu-bixbyite for radiography. GYGAG(Ce) garnet transparent ceramics offer ρ = 5.8 g/cm3, Zeff = 48, principal decay of <100 ns, and light yield of 50,000 Ph/MeV. Gd-garnet ceramic scintillators offer the best energy resolution of any oxide scintillator, as good as R(662 keV) = 3% (Si-PD readout) for small sizes and typically R(662 keV) < 5% for cubic inch sizes. For radiography, the bixbyite transparent ceramic scintillator, (Gd,Lu,Eu)2O3, or "GLO," offers excellent x-ray stopping, with ρ = 9.1 g/cm3 and Zeff = 68. Several 10" diameter by 0.1" thickness GLO scintillators have been fabricated. GLO outperforms scintillator glass for high energy radiography, due to higher light yield (55,000 Ph/MeV) and better stopping, while providing spatial resolution of >8 lp/mm.

  5. GLO-Roots: An imaging platform enabling multidimensional characterization of soil-grown root systems

    DOE PAGES

    Rellan-Alvarez, Ruben; Lobet, Guillaume; Lindner, Heike; ...

    2015-08-19

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes.

  6. NASA Dryden Flight Research Center: Unmanned Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Pestana, Mark

    2010-01-01

    This slide presentation reviews several topics related to operating unmanned aircraft, in particular sharing aspects of unmanned aircraft from the perspective of a pilot. There is a section on the Global Hawk project which contains information about the first Global Hawk science mission, Global Hawk Pacific (GloPac), including GloPac science highlights and a listing of the GloPac instruments. The second Global Hawk science mission was Genesis and Rapid Intensification Processes (GRIP), for the NASA Hurricane Science Research Team; this information includes the instrumentation and the flights undertaken during the program. A section on Ikhana is next, including views of the Ground Control Station (GCS) and discussion of how piloting a UAS differs from piloting a manned aircraft, of aircraft displays and controls, and of what makes a pilot. The last section relates the use of Ikhana in the western states fire mission.

  7. Molecular basis of floral petaloidy: insights from androecia of Canna indica

    PubMed Central

    Fu, Qian; Liu, Huanfang; Almeida, Ana M. R.; Kuang, Yanfeng; Zou, Pu; Liao, Jingping

    2014-01-01

    Floral organs that take on the characteristics of petals can occur in all whorls of the monocot order Zingiberales. In Canna indica, the most ornamental or ‘petaloid’ parts of the flowers are of androecial origin and are considered staminodes. However, the precise nature of these petaloid organs is yet to be determined. In order to gain a better understanding of the genetic basis of androecial identity, a molecular investigation of B- and C-class genes was carried out. Two MADS-box genes, GLOBOSA (GLO) and AGAMOUS (AG), were isolated from young inflorescences of C. indica by 3′ rapid amplification of cDNA ends polymerase chain reaction (3′-RACE PCR). Sequence characterization and phylogenetic analyses show that CiGLO and CiAG belong to the B- and C-class MADS-box gene family, respectively. CiAG is expressed in petaloid staminodes, the labellum, the fertile stamen and carpels. CiGLO is expressed in petals, petaloid staminodes, the labellum, the fertile stamen and carpels. Expression patterns in mature tissues of CiGLO and CiAG suggest that petaloid staminodes and the labellum are of androecial identity, in agreement with their position within the flower and with described Arabidopsis thaliana expression patterns. Although B- and C-class genes are important components of androecial determination, their expression patterns are not sufficient to explain the distinct morphology observed in staminodes and the fertile stamen in C. indica. PMID:24876297

  8. Comparison of bioluminescent kinase assays using substrate depletion and product formation.

    PubMed

    Tanega, Cordelle; Shen, Min; Mott, Bryan T; Thomas, Craig J; MacArthur, Ryan; Inglese, James; Auld, Douglas S

    2009-12-01

    Assays for ATPases have been enabled for high-throughput screening (HTS) by employing firefly luciferase to detect the remaining ATP in the assay. However, for any enzyme assay, measurement of product formation is a more sensitive assay design. Recently, technologies that allow detection of the ADP product from ATPase reactions have been described using fluorescent methods of detection. We describe here the characterization of a bioluminescent assay that employs firefly luciferase in a coupled-enzyme assay format to enable detection of ADP levels from ATPase assays (ADP-Glo, Promega Corp.). We determined the performance of the ADP-Glo assay in 1,536-well microtiter plates using the protein kinase Clk4 and a 1,352 member kinase focused combinatorial library. The ADP-Glo assay was compared to the Clk4 assay performed using a bioluminescence ATP-depletion format (Kinase-Glo, Promega Corp). We performed this analysis using quantitative HTS (qHTS) where we determined potency values for all library members and identified approximately 300 compounds with potencies ranging from as low as 50 nM to >10 microM, yielding a robust dataset for the comparison. Both assay formats showed high performance (Z'-factors approximately 0.9) and showed a similar potency distribution for the actives. We conclude that the bioluminescence ADP detection assay system is a viable generic alternative to the widely used ATP-depletion assay for ATPases and discuss the advantages and disadvantages of both approaches.
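
    The Z'-factor quoted above (approximately 0.9) is the standard plate-based screening quality statistic, Z' = 1 - 3(σ_pos + σ_neg)/|μ_pos - μ_neg|, computed from positive- and negative-control wells. A minimal sketch, using made-up control-well readings rather than the study's data:

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor assay-quality metric from positive/negative control wells:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Hypothetical luminescence readings (arbitrary units):
pos_wells = [100.0, 101.0, 99.0, 100.0]
neg_wells = [10.0, 11.0, 9.0, 10.0]
print(z_prime(pos_wells, neg_wells))
```

    Values above roughly 0.5 are conventionally taken to indicate an assay suitable for HTS, so the ≈0.9 reported for both the ADP-Glo and Kinase-Glo formats signals very wide separation between controls.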

  9. Boreal summer sub-seasonal variability of the South Asian monsoon in the Met Office GloSea5 initialized coupled model

    NASA Astrophysics Data System (ADS)

    Jayakumar, A.; Turner, A. G.; Johnson, S. J.; Rajagopal, E. N.; Mohandas, Saji; Mitra, A. K.

    2017-09-01

    Boreal summer sub-seasonal variability in the Asian monsoon, otherwise known as the monsoon intra-seasonal oscillation (MISO), is one of the dominant modes of intraseasonal variability in the tropics, with large impacts on total monsoon rainfall and India's agricultural production. However, our understanding of the mechanisms involved in MISO is incomplete and its simulation in various numerical models is often flawed. In this study, we focus on the objective evaluation of the fidelity of MISO simulation in the Met Office Global Seasonal forecast system version 5 (GloSea5), an initialized coupled model. We analyze a series of nine-member hindcasts from GloSea5 over 1996-2009 during the peak monsoon period (July-August) over the South-Asian monsoon domain focusing on aspects of the time-mean background state and air-sea interaction processes pertinent to MISO. Dominant modes during this period are evident in power spectrum analysis, but propagation and evolution characteristics of the MISO are not realistic. We find that simulated air-sea interactions in the central Indian Ocean are not supportive of MISO initiation in that region, likely a result of the low surface wind variance there. As a consequence, the expected near-quadrature phase relationship between SST and convection is not represented properly over the central equatorial Indian Ocean, and northward propagation from the equator is poorly simulated. This may reinforce the equatorial rainfall mean state bias in GloSea5.

  10. Case-control and family-based association studies of candidate genes in autistic disorder and its endophenotypes: TPH2 and GLO1.

    PubMed

    Sacco, Roberto; Papaleo, Veruska; Hager, Jorg; Rousseau, Francis; Moessner, Rainald; Militerni, Roberto; Bravaccio, Carmela; Trillo, Simona; Schneider, Cindy; Melmed, Raun; Elia, Maurizio; Curatolo, Paolo; Manzi, Barbara; Pascucci, Tiziana; Puglisi-Allegra, Stefano; Reichelt, Karl-Ludvig; Persico, Antonio M

    2007-03-08

    The TPH2 gene encodes the enzyme responsible for serotonin (5-HT) synthesis in the Central Nervous System (CNS). Stereotypic and repetitive behaviors are influenced by 5-HT, and initial studies report an association of TPH2 alleles with childhood-onset obsessive-compulsive disorder (OCD) and with autism. GLO1 encodes glyoxalase I, the enzyme which detoxifies alpha-oxoaldehydes such as methylglyoxal in all living cells. The A111E GLO1 protein variant, encoded by SNP C419A, was identified in autopsied autistic brains and proposed to act as an autism susceptibility factor. Hyperserotoninemia, macrocephaly, and peptiduria represent some of the best-characterized endophenotypes in autism research. Family-based and case-control association studies were performed on clinical samples drawn from 312 simplex and 29 multiplex families including 371 non-syndromic autistic patients and 156 unaffected siblings, as well as on 171 controls. TPH2 SNPs rs4570625 and rs4565946 were genotyped using the TaqMan assay; GLO1 SNP C419A was genotyped by PCR and allele-specific restriction digest. Family-based association analyses were performed by TDT and FBAT, case-control by chi2, endophenotypic analyses for 5-HT blood levels, cranial circumference and urinary peptide excretion rates by ANOVA and FBAT. TPH2 alleles and haplotypes are not significantly associated in our sample with autism (rs4570625: TDT P = 0.27, and FBAT P = 0.35; rs4565946: TDT P = 0.45, and FBAT P = 0.55; haplotype P = 0.84), with any endophenotype, or with the presence/absence of prominent repetitive and stereotyped behaviors (motor stereotypies: P = 0.81 and 0.84, verbal stereotypies: P = 0.38 and 0.73 for rs4570625 and rs4565946, respectively). 
Also GLO1 alleles display no association with autism (191 patients vs 171 controls, P = 0.36; TDT P = 0.79, and FBAT P = 0.37), but unaffected siblings seemingly carry a protective gene variant marked by the A419 allele (TDT P < 0.05; patients vs unaffected siblings TDT and FBAT P < 0.00001). TPH2 gene variants are unlikely to contribute to autism or to the presence/absence of prominent repetitive behaviors in our sample, although an influence on the intensity of these behaviors in autism cannot be excluded. GLO1 gene variants do not confer autism vulnerability in this sample, but allele A419 apparently carries a protective effect, spurring interest into functional correlates of the C419A SNP.
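
    The case-control comparisons above rest on a Pearson chi-square test of allele counts. A minimal sketch for a 2x2 table (1 degree of freedom), with invented counts rather than the study's data; the p-value uses the 1-df identity P(X > x) = erfc(sqrt(x/2)):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]] (1 degree of
    freedom), plus its p-value from the 1-df chi-square survival function."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = n * (a * d - b * c) ** 2 / denom
    p = erfc(sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical allele counts: rows = cases/controls, cols = allele A/C
chi2, p = chi2_2x2(10, 20, 20, 10)
```

    For real family-based analyses such as TDT and FBAT, within-family transmission counts replace the simple case-control table, but the chi-square machinery is the same.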

  11. Case-control and family-based association studies of candidate genes in autistic disorder and its endophenotypes: TPH2 and GLO1

    PubMed Central

    Sacco, Roberto; Papaleo, Veruska; Hager, Jorg; Rousseau, Francis; Moessner, Rainald; Militerni, Roberto; Bravaccio, Carmela; Trillo, Simona; Schneider, Cindy; Melmed, Raun; Elia, Maurizio; Curatolo, Paolo; Manzi, Barbara; Pascucci, Tiziana; Puglisi-Allegra, Stefano; Reichelt, Karl-Ludvig; Persico, Antonio M

    2007-01-01

    Background The TPH2 gene encodes the enzyme responsible for serotonin (5-HT) synthesis in the Central Nervous System (CNS). Stereotypic and repetitive behaviors are influenced by 5-HT, and initial studies report an association of TPH2 alleles with childhood-onset obsessive-compulsive disorder (OCD) and with autism. GLO1 encodes glyoxalase I, the enzyme which detoxifies α-oxoaldehydes such as methylglyoxal in all living cells. The A111E GLO1 protein variant, encoded by SNP C419A, was identified in autopsied autistic brains and proposed to act as an autism susceptibility factor. Hyperserotoninemia, macrocephaly, and peptiduria represent some of the best-characterized endophenotypes in autism research. Methods Family-based and case-control association studies were performed on clinical samples drawn from 312 simplex and 29 multiplex families including 371 non-syndromic autistic patients and 156 unaffected siblings, as well as on 171 controls. TPH2 SNPs rs4570625 and rs4565946 were genotyped using the TaqMan assay; GLO1 SNP C419A was genotyped by PCR and allele-specific restriction digest. Family-based association analyses were performed by TDT and FBAT, case-control by χ2, endophenotypic analyses for 5-HT blood levels, cranial circumference and urinary peptide excretion rates by ANOVA and FBAT. Results TPH2 alleles and haplotypes are not significantly associated in our sample with autism (rs4570625: TDT P = 0.27, and FBAT P = 0.35; rs4565946: TDT P = 0.45, and FBAT P = 0.55; haplotype P = 0.84), with any endophenotype, or with the presence/absence of prominent repetitive and stereotyped behaviors (motor stereotypies: P = 0.81 and 0.84, verbal stereotypies: P = 0.38 and 0.73 for rs4570625 and rs4565946, respectively).
Also GLO1 alleles display no association with autism (191 patients vs 171 controls, P = 0.36; TDT P = 0.79, and FBAT P = 0.37), but unaffected siblings seemingly carry a protective gene variant marked by the A419 allele (TDT P < 0.05; patients vs unaffected siblings TDT and FBAT P < 0.00001). Conclusion TPH2 gene variants are unlikely to contribute to autism or to the presence/absence of prominent repetitive behaviors in our sample, although an influence on the intensity of these behaviors in autism cannot be excluded. GLO1 gene variants do not confer autism vulnerability in this sample, but allele A419 apparently carries a protective effect, spurring interest into functional correlates of the C419A SNP. PMID:17346350

  12. A global space-based stratospheric aerosol climatology: 1979-2016

    NASA Astrophysics Data System (ADS)

    Thomason, Larry W.; Ernest, Nicholas; Millán, Luis; Rieger, Landon; Bourassa, Adam; Vernier, Jean-Paul; Manney, Gloria; Luo, Beiping; Arfeuille, Florian; Peter, Thomas

    2018-03-01

    We describe the construction of a continuous 38-year record of stratospheric aerosol optical properties. The Global Space-based Stratospheric Aerosol Climatology, or GloSSAC, provided the input data to the construction of the Climate Model Intercomparison Project stratospheric aerosol forcing data set (1979-2014) and we have extended it through 2016 following an identical process. GloSSAC focuses on the Stratospheric Aerosol and Gas Experiment (SAGE) series of instruments through mid-2005, and on the Optical Spectrograph and InfraRed Imager System (OSIRIS) and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data thereafter. We also use data from other space instruments and from ground-based, air, and balloon borne instruments to fill in key gaps in the data set. The end result is a global and gap-free data set focused on aerosol extinction coefficient at 525 and 1020 nm and other parameters on an "as available" basis. For the primary data sets, we developed a new method for filling the post-Pinatubo eruption data gap for 1991-1993 based on data from the Cryogenic Limb Array Etalon Spectrometer. In addition, we developed a new method for populating wintertime high latitudes during the SAGE period employing a latitude-equivalent latitude conversion process that greatly improves the depiction of aerosol at high latitudes compared to earlier similar efforts. We report data in the troposphere only when and where it is available. This is primarily during the SAGE II period except for the most enhanced part of the Pinatubo period. It is likely that the upper troposphere during Pinatubo was greatly enhanced over non-volcanic periods and that domain remains substantially under-characterized. We note that aerosol levels during the OSIRIS/CALIPSO period in the lower stratosphere at mid- and high latitudes are routinely higher than what we observed during the SAGE II period. 
While this period had nearly continuous low-level volcanic activity, it is possible that the enhancement in part reflects deficiencies in the data set. We also expended substantial effort to quality assess the data set and the product is by far the best we have produced. GloSSAC version 1.0 is available in netCDF format at the NASA Atmospheric Data Center at https://eosweb.larc.nasa.gov/. GloSSAC users should cite this paper and the data set DOI (https://doi.org/10.5067/GloSSAC-L3-V1.0).
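
    GloSSAC's primary product is the aerosol extinction coefficient at 525 and 1020 nm on an altitude grid. A quantity commonly derived from such profiles is the stratospheric aerosol optical depth, the vertical integral of extinction. A minimal sketch with an idealized profile (this does not assume the actual variable layout of the GloSSAC netCDF files):

```python
def optical_depth(z_km, ext_per_km):
    """Column optical depth from an extinction-coefficient profile by
    trapezoidal integration: tau = integral of k(z) dz, with k in 1/km
    and z in km, so tau is dimensionless."""
    tau = 0.0
    for (z0, k0), (z1, k1) in zip(zip(z_km, ext_per_km),
                                  zip(z_km[1:], ext_per_km[1:])):
        tau += 0.5 * (k0 + k1) * (z1 - z0)
    return tau

# Hypothetical profile: constant 1e-3 /km extinction from 15 to 35 km
z = [15.0, 20.0, 25.0, 30.0, 35.0]
k = [1e-3] * 5
tau = optical_depth(z, k)  # 1e-3 /km over a 20 km column -> tau = 0.02
```

    Applied per latitude band and month, this kind of integral is how extinction climatologies such as GloSSAC feed into the aerosol forcing data sets mentioned in the abstract.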

  13. Relationships among Ergot Alkaloids, Cytochrome P450 Activity, and Beef Steer Growth

    NASA Astrophysics Data System (ADS)

    Rosenkrans, Charles; Ezell, Nicholas

    2015-03-01

    Determining a grazing animal’s susceptibility to ergot alkaloids has been a research topic for decades. Our objective was to determine if the Promega™ P450-Glo assay could be used to indirectly detect ergot alkaloids or their metabolites in urine of steers. The first experiment validated the effects of ergot alkaloids [0, 20, and 40 μM of ergotamine (ET), dihydroergotamine (DHET), and ergonovine (EN)] on human CYP3A4 using the P450-Glo assay (Promega™ V9800). With this assay, luminescence is directly proportional to CYP450 activity. Relative inhibition of in vitro cytochrome P450 activity was affected (P < 0.001) by an interaction between alkaloids and concentration. That interaction resulted in no concentration effect of EN, but within ET and DHET the 20 and 40 µM concentrations inhibited CYP450 activity when compared with controls. In experiment 2, urine was collected from Angus-sired crossbred steers (n = 39; 216 ± 2.6 d of age; 203 ± 1.7 kg) after grazing tall fescue pastures for 105 d. Non-diluted urine was added to the Promega™ P450-Glo assay, and inhibition was observed (3.7% ± 2.7 of control). Urine content of total ergot alkaloids (331.1 ng/mg of creatinine ± 325.7) was determined using enzyme-linked immunosorbent assay. Urine inhibition of CYP450 activity and total alkaloids were correlated (r = -0.31; P < 0.05). Steers were genotyped at the CYP450 single nucleotide polymorphism C994G. Steer genotype affected (P < 0.03) inhibition of CYP450 activity by urine; heterozygous steers had the least amount of CYP450 inhibition, suggesting that genotyping cattle may be a method of identifying animals that are susceptible to ergot alkaloids. Although additional research is needed, we demonstrate that the Promega™ P450-Glo assay is sensitive to ergot alkaloids in urine from steers grazing tall fescue. With some refinement the P450-Glo assay has potential as a tool for screening cattle for their exposure to fescue toxins.
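
    The reported r = -0.31 between urinary CYP450 inhibition and total alkaloid content is a Pearson correlation coefficient. A minimal sketch with hypothetical paired measurements (not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs of (alkaloid content, CYP450 inhibition):
r = pearson_r([100.0, 200.0, 300.0, 400.0], [8.0, 6.0, 5.0, 2.0])
```

    A negative r, as in the study, means higher alkaloid content went with lower measured CYP450 activity in the assay.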

  14. The naked and the dead: the ABCs of gymnosperm reproduction and the origin of the angiosperm flower.

    PubMed

    Melzer, Rainer; Wang, Yong-Qiang; Theissen, Günter

    2010-02-01

    Twenty years after the establishment of the ABC model, many of the molecular mechanisms underlying development of the angiosperm flower are relatively well understood. Central players in the gene regulatory network controlling flower development are SQUA-like, DEF/GLO-like, AG-like and AGL6/SEP1-like MIKC-type MADS-domain transcription factors. These provide class A, class B, class C and the more recently defined class E floral homeotic functions, respectively. There is evidence that the floral homeotic proteins recognize the DNA of target genes in an organ-specific way as multimeric protein complexes, thus constituting 'floral quartets'. In contrast to the detailed insights into flower development, how the flower originated during evolution has remained enigmatic. However, while orthologues of all classes of floral homeotic genes appear to be absent from all non-seed plants, DEF/GLO-like, AG-like, and AGL6-like genes have been found in diverse extant gymnosperms, the closest relatives of the angiosperms. While SQUA-like and SEP1-like MADS-box genes appear to be absent from extant gymnosperms, reconstruction of MADS-box gene phylogeny surprisingly suggests that the most recent common ancestor of gymnosperms and angiosperms possessed representatives of both genes, but that these have been lost in the lineage that led to extant gymnosperms. Expression studies and genetic complementation experiments indicate that both angiosperm and gymnosperm AG-like and DEF/GLO-like genes have conserved functions in the specification of reproductive organs and in distinguishing male from female organs, respectively. Based on these findings novel models about the molecular basis of flower origin, involving changes in the expression patterns of DEF/GLO-like or AGL6/SEP1/SQUA-like genes in reproductive structures, were developed. 
While in angiosperms SEP1-like proteins play an important role in floral quartet formation, preliminary evidence suggests that gymnosperm DEF/GLO-like and AG-like proteins alone can already form floral quartet-like complexes, further corroborating the view that the formation of floral quartet-like complexes predated flower origin during evolution.

  15. First report on rapid screening of nanomaterial-based antimicrobial agents against β-lactamase resistance using pGLO plasmid transformed Escherichia coli HB 101 K-12

    NASA Astrophysics Data System (ADS)

    Raj, M. Alpha; Muralidhar, Y.; Sravanthi, M.; Prasad, T. N. V. K. V.; Nissipriya, M.; Reddy, P. Sirisha; Neelima, T. Shoba; Reddy, G. Dilip; Adilaxmamma, K.; Kumar, P. Anand; Krishna, T. Giridhara

    2016-08-01

    Combating antibiotic resistance requires discovery of novel antimicrobials effective against resistant bacteria. Herein, we present for the first time pGLO plasmid transformed Escherichia coli HB 101 K-12 as a novel model for screening of nanomaterial-based antimicrobial agents against β-lactamase resistance. E. coli HB 101 was transformed by the pGLO plasmid in the presence of calcium chloride (50 mM; pH 6.1) aided by heat shock (0-42-0 °C). The transformed bacteria were grown on Luria-Bertani agar containing ampicillin (amp) and arabinose (ara). The transformed culture was able to grow in the presence of ampicillin and also exhibited fluorescence under UV light. Both untransformed and transformed bacteria were used for screening citrate-mediated nanosilver (CNS), aloin-mediated nanosilver (ANS), 11-α-keto-boswellic acid (AKBA)-mediated nanosilver (BNS), nanozinc oxide (NZO), nanomanganese oxide (NMO) and phytochemicals such as aloin and AKBA. Minimum inhibitory concentrations (MIC) were obtained by the microplate method using a p-iodonitrotetrazolium indicator. All the compounds were effective against transformed bacteria except NMO and AKBA. Transformed bacteria exhibited reverse cross resistance against aloin. ANS showed the highest antibacterial activity with a MIC of 0.32 ppm followed by BNS (10.32 ppm), CNS (20.64 ppm) and NZO (34.83 ppm). Thus, the pGLO plasmid can be used to induce resistance against β-lactam antibiotics and the model can be used for rapid screening of new antibacterial agents effective against resistant bacteria.

  16. [miR-25 promotes cell proliferation by targeting RECK in human cervical carcinoma HeLa cells].

    PubMed

    Qiu, Gang; Fang, Baoshuan; Xin, Guohong; Wei, Qiang; Yuan, Xiaoye; Wu, Dayong

    2015-01-01

    To investigate the effect of miR-25 on the proliferation of human cervical carcinoma HeLa cells and its association with reversion-inducing cysteine-rich protein with Kazal motifs (RECK). The recombinant plasmids of pcDNA™6.2-GW-pre-miR-25, pmirGLO-RECK-WT, pmirGLO-RECK-MT and anti-miR-25 were constructed, and their transfection efficiencies into HeLa cells were identified by real-time quantitative PCR (qRT-PCR). The potential proliferation-stimulating function of miR-25 was analyzed by MTT assay in HeLa cells. Furthermore, the target effect of miR-25 on RECK was determined by dual-luciferase reporter assay system, qRT-PCR and Western blotting. Sequence analysis demonstrated that the recombinant plasmids of pcDNA™6.2-GW-pre-miR-25, pmirGLO-RECK-WT and pmirGLO-RECK-MT were successfully constructed, and qRT-PCR revealed that the transfection efficiencies of pre-miR-25 and anti-miR-25 were desirable in HeLa cells. MTT assay showed that miR-25 over-expression promoted the proliferation of HeLa cells. In addition, the luciferase activity was significantly reduced in HeLa cells cotransfected with pre-miR-25 and RECK-WT. The qRT-PCR and Western blotting indicated that the expression level of RECK was up-regulated in HeLa cells transfected with anti-miR-25 at the transcriptional and posttranscriptional levels. miR-25 could promote cell proliferation by targeting RECK in HeLa cells.

  17. Novel detection method for chemiluminescence derived from the Kinase-Glo luminescent kinase assay platform: Advantages over traditional microplate luminometers.

    PubMed

    Bell, Ryan A V; Storey, Kenneth B

    2014-01-01

    The efficacy of cellular signal transduction is of paramount importance for the proper functioning of a cell and an organism as a whole. Protein kinases are responsible for much of this transmission and thus have been the focal point of extensive research. While there are numerous commercially available protein kinase assays, the Kinase-Glo luminescent kinase assay (Promega) provides an easy-to-use and high-throughput platform for determining protein kinase activity. This assay is said to require the use of a microplate luminometer capable of detecting a luminescent signal. This study shows that:
    • The ChemiGenius Bioimaging system (Syngene), typically used for visualizing chemiluminescence from Western blots, provides an alternative detection system for Kinase-Glo luminescence.
    • The novel detection system confers an advantage over traditional luminometers, in that it allows visualization of the luminescent wells, which allows for the real-time analysis and correction of experimental errors (i.e., bubble formation).
    • Determining kinase kinetics using this detection system produced comparable results to previous studies on the same enzyme (i.e., glycogen synthase kinase 3).

  18. The effectiveness of ground level post-flight 100 percent oxygen breathing as therapy for pain-only altitude Decompression Sickness (DCS)

    NASA Technical Reports Server (NTRS)

    Demboski, John T.; Pilmanis, Andrew A.

    1994-01-01

    In both the aviation and space environments, decompression sickness (DCS) is an operational limitation. Hyperbaric recompression is the most efficacious treatment for altitude DCS. However, the inherent recompression of descent to ground level while breathing oxygen is in itself therapy for altitude DCS. If pain-only DCS occurs during a hypobaric exposure, and the symptoms resolve during descent, ground level post-flight breathing of 100% O2 for 2 hours (GLO2) is considered sufficient treatment by USAF Regulation 161-21. The effectiveness of the GLO2 treatment protocol is defined.

  19. Effects of surface motion and electron-hole pair excitations in CO2 dissociation and scattering on Ni(100)

    NASA Astrophysics Data System (ADS)

    Luo, Xuan; Zhou, Xueyao; Jiang, Bin

    2018-05-01

    The energy transfer between different channels is an important aspect in chemical reactions at surfaces. We investigate here in detail the energy transfer dynamics in a prototypical system, i.e., reactive and nonreactive scattering of CO2 on Ni(100), which is related to heterogeneous catalytic processes with Ni-based catalysts for CO2 reduction. On the basis of our earlier nine-dimensional potential energy surface for CO2/Ni(100), dynamical calculations have been done using the generalized Langevin oscillator (GLO) model combined with local density friction approximation (LDFA), in which the former accounts for the surface motion and the latter accounts for the low-energy electron-hole pair (EHP) excitation. In spite of its simplicity, it is found that the GLO model yields quite satisfactory results, including the significant energy loss and product energy disposal, trapping, and steering dynamics, all of which agree well with the ab initio molecular dynamics ones where many surface atoms are explicitly involved with high computational cost. However, the GLO model fails to describe the reactivity enhancement due to the lattice motion because it does not intrinsically incorporate the dependence of the barrier height on surface atom displacement. On the other hand, in LDFA, the energy transferred to EHPs is found to play a minor role and barely alter the dynamics, except for slightly reducing the dissociation probabilities. In addition, vibrational state-selected dissociative sticking probabilities are calculated and the previously observed strong mode specificity is confirmed. Our work suggests that further improvement of the GLO model is needed to consider the lattice-induced barrier lowering.

  20. The glycolytic metabolite methylglyoxal activates Pap1 and Sty1 stress responses in Schizosaccharomyces pombe.

    PubMed

    Zuin, Alice; Vivancos, Ana P; Sansó, Miriam; Takatsume, Yoshifumi; Ayté, José; Inoue, Yoshiharu; Hidalgo, Elena

    2005-11-04

    Methylglyoxal, a toxic metabolite synthesized in vivo during glycolysis, inhibits cell growth. One of the mechanisms protecting eukaryotic cells against its toxicity is the glyoxalase system, composed of glyoxalase I and II (glo1 and glo2), which converts methylglyoxal into d-lactic acid in the presence of glutathione. Here we have shown that the two principal oxidative stress response pathways of Schizosaccharomyces pombe, Sty1 and Pap1, are involved in the response to methylglyoxal toxicity. The mitogen-activated protein kinase Sty1 is phosphorylated and accumulates in the nucleus following methylglyoxal treatment. Moreover, glo2 expression is induced by methylglyoxal and environmental stresses in a Sty1-dependent manner. The transcription factor Pap1 also accumulates in the nucleus, activating the expression of its target genes following methylglyoxal treatment. Our studies showed that the C-terminal cysteine-rich domain of Pap1 is sufficient for methylglyoxal sensing. Furthermore, the redox status of Pap1 is not changed by methylglyoxal. We propose that methylglyoxal treatment triggers Pap1 and Sty1 nuclear accumulation, and we describe the molecular basis of such activation mechanisms. In addition, we discuss the potential physiological significance of these responses to a natural toxic metabolite.

  1. Methylglyoxal, the foe and friend of glyoxalase and Trx/TrxR systems in HT22 nerve cells.

    PubMed

    Dafre, A L; Goldberg, J; Wang, T; Spiegel, D A; Maher, P

    2015-12-01

    Methylglyoxal (MGO) is a major glycating agent that reacts with basic residues of proteins and promotes the formation of advanced glycation end products (AGEs) which are believed to play key roles in a number of pathologies, such as diabetes, Alzheimer's disease, and inflammation. Here, we examined the effects of MGO on immortalized mouse hippocampal HT22 nerve cells. The endpoints analyzed were MGO and thiol status, the glyoxalase system, comprising glyoxalase 1 and 2 (GLO1/2), and the cytosolic and mitochondrial Trx/TrxR systems, as well as nuclear Nrf2 and its target genes. We found that nuclear Nrf2 is induced by MGO treatment in HT22 cells, as corroborated by induction of the Nrf2-controlled target genes and proteins glutamate cysteine ligase and heme oxygenase 1. Nrf2 knockdown prevented MGO-dependent induction of glutamate cysteine ligase and heme oxygenase 1. The cystine/glutamate antiporter, system xc(-), which is also controlled by Nrf2, was also induced. The increased cystine import (system xc(-)) activity and GCL expression promoted GSH synthesis, leading to increased levels of GSH. The data indicate that MGO can act as both a foe and a friend of the glyoxalase and the Trx/TrxR systems. At low concentrations of MGO (0.3 mM), GLO2 is strongly induced, but at high MGO (0.75 mM) concentrations, GLO1 is inhibited and GLO2 is downregulated. The cytosolic Trx/TrxR system is impaired by MGO, where Trx is downregulated yet TrxR is induced, but strong MGO-dependent glycation may explain the loss in TrxR activity. We propose that Nrf2 can be the unifying element to explain the observed upregulation of GSH, GCL, HO1, TrxR1, Trx2, TrxR2, and system xc(-) activity. Copyright © 2015. Published by Elsevier Inc.

  3. Skillful seasonal predictions of winter precipitation over southern China

    NASA Astrophysics Data System (ADS)

    Lu, Bo; Scaife, Adam A.; Dunstone, Nick; Smith, Doug; Ren, Hong-Li; Liu, Ying; Eade, Rosie

    2017-07-01

    Southern China experiences large year-to-year variability in the amount of winter precipitation, which can result in severe social and economic impacts. In this study, we demonstrate prediction skill for southern China winter precipitation in three operational seasonal prediction models: the operational Global seasonal forecasting system version 5 (GloSea5), the NCEP Climate Forecast System (CFSv2) and the Beijing Climate Center Climate System Model (BCC-CSM1.1m). The correlation scores reach 0.76 and 0.67 in GloSea5 and CFSv2, respectively, and the amplitude of the ensemble-mean forecast signal is comparable to the observed variations. The skilful predictions in GloSea5 and CFSv2 mainly benefit from the successful representation of the observed ENSO teleconnection. El Niño weakens the Walker circulation and leads to the strengthening of the subtropical high over the northwestern Pacific. The anticyclone then induces anomalous northward flow over the South China Sea and brings water vapor to southern China, resulting in more precipitation. This teleconnection pattern is too weak in BCC-CSM1.1m, which explains its low skill (0.13). In addition, the most skilful forecast systems are able to simulate the influence of the Indian Ocean on southern China precipitation via changes in southwesterly winds over the Bay of Bengal. Finally, we examine the real-time forecast for the 2015/16 winter, when a strong El Niño event led to the highest rainfall over southern China in recent decades. We find that the GloSea5 system gave good advice, as it produced the third wettest southern China winter in the hindcast, but underestimated the observed amplitude. This is likely due to the underestimation of the Siberian High strength in the 2015/2016 winter, which drove strong convergence over southern China. We conclude that some current seasonal forecast systems can give useful warning of impending extremes; however, there is still a need for further model improvement to fully represent the complex dynamics of the region.
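    The correlation scores quoted above (0.76 for GloSea5, 0.67 for CFSv2) are interannual correlations between the ensemble-mean hindcast and observations. A minimal sketch of that computation, on synthetic data (the function name and arrays are ours, not from the paper):

```python
import numpy as np

def ensemble_mean_skill(hindcasts, observations):
    """Pearson correlation between the ensemble-mean forecast and
    observations.  hindcasts has shape (n_years, n_members); each row
    holds the members' seasonal precipitation totals for one winter."""
    ens_mean = hindcasts.mean(axis=1)
    return float(np.corrcoef(ens_mean, observations)[0, 1])
```

    Note that averaging over members before correlating is what lets a noisy ensemble still score highly when its mean tracks the observed year-to-year signal.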

  4. Inhibited-coupling HC-PCF based beam-delivery-system for high power green industrial lasers

    NASA Astrophysics Data System (ADS)

    Chafer, M.; Gorse, A.; Beaudou, B.; Lekiefs, Q.; Maurel, M.; Debord, B.; Gérôme, F.; Benabid, F.

    2018-02-01

    We report on an ultra-low-loss Hollow-Core Photonic Crystal Fiber (HC-PCF) beam delivery system (GLO-GreenBDS) for high-power ultra-short-pulse lasers operating in the green spectral range (including 515 nm and 532 nm). The GLO-GreenBDS combines ease of use, high laser-coupling efficiency, robustness and industry-compatible cabling. It comprises a pre-aligned laser-injection head, a sheath-cable-protected HC-PCF and a modular fiber-output head. It enables fiber-core gas loading and evacuation in a hermetic fashion. A 5 m long GLO-GreenBDS was demonstrated for a green short-pulse laser with a transmission coefficient larger than 80% and a laser output profile close to single-mode (M² < 1.3).

  5. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
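    The dual-version scheme described above can be caricatured in a few lines of Python, with callables standing in for the two compiled versions (the names are ours; a real implementation operates on machine code and checkpointed state, not Python functions):

```python
def run_with_rollback(aggressive, conservative, *args):
    """Run the aggressively optimized version first; if the unsafe
    optimization raises an exception the original code could not have
    raised, roll back and re-execute the conservative version."""
    try:
        return aggressive(*args)
    except Exception:
        return conservative(*args)

# Example: an "optimized" variant that hoisted a division out of a
# guard, versus the conservative variant that kept the guard.
unsafe_div = lambda x: 10 // x
safe_div = lambda x: 0 if x == 0 else 10 // x
```

    A predictive layer, as the abstract describes, would additionally track failure history and dispatch straight to the conservative version when failure is likely.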

  6. Prevention of airway inflammation with topical cream containing imiquimod and small interfering RNA for natriuretic peptide receptor

    PubMed Central

    Wang, Xiaoqin; Xu, Weidong; Mohapatra, Subhra; Kong, Xiaoyuan; Li, Xu; Lockey, Richard F; Mohapatra, Shyam S

    2008-01-01

    Background Asthma is a complex disease, characterized by reversible airway obstruction, hyperresponsiveness and chronic inflammation. Principal pharmacologic treatments for asthma include bronchodilating beta2-agonists and anti-inflammatory glucocorticosteroids; but these agents do not target the main cause of the disease, the generation of pathogenic Th2 cells. We previously reported reduction in allergic inflammation in mice deficient in the ANP receptor NPRA. Here we determined whether siRNA for natriuretic peptide receptor A (siNPRA) protected against asthma when administered transdermally. Methods Imiquimod cream mixed with chitosan nanoparticles containing either siRNA green indicator (siGLO) or siNPRA was applied to the skin of mice. Delivery of siGLO was confirmed by fluorescence microscopy. The anti-inflammatory activity of transdermal siNPRA was tested in OVA-sensitized mice by measuring airway hyperresponsiveness, eosinophilia, lung histopathology and pro-inflammatory cytokines. Results siGLO appearing in the lung proved the feasibility of transdermal delivery. In a mouse asthma model, BALB/c mice treated with imiquimod cream containing siNPRA chitosan nanoparticles showed significantly reduced airway hyperresponsiveness, eosinophilia, lung histopathology and pro-inflammatory cytokines IL-4 and IL-5 in lung homogenates compared to controls. Conclusion These results demonstrate that topical cream containing imiquimod and siNPRA nanoparticles exerts an anti-inflammatory effect and may provide a new and simple therapy for asthma. PMID:18279512

  7. Global Soil and Sediment transfer during the Anthropocene

    NASA Astrophysics Data System (ADS)

    Hoffmann, Thomas; Vanacker, Veerle; Stinchcombe, Gary; Penny, Dan; Xixi, Lu

    2016-04-01

    The vulnerability of soils to human-induced erosion and its downstream effects on fluvial and deltaic ecosystems is highly variable in space and time; dependent on climate, geology, the nature and duration of land use, and topography. Despite our knowledge of the mechanistic relationships between erosion, sediment storage, land-use and climate change, the global patterns of soil erosion, fluvial sediment flux and storage throughout the Holocene remain poorly understood. The newly launched PAGES working group GloSS aims to determine the sensitivity of soil resources and sediment routing systems to varying land use types during the period of agriculture, under contrasting climate regimes and socio-ecological settings. Successfully addressing these questions in relation to the sustainable use of soils, sediments and river systems requires an understanding of past human-landscape interactions. GloSS, therefore, aims to: (1) develop proxies for, or indices of, human impact on rates of soil erosion and fluvial sediment transfer that are applicable on a global scale and throughout the Holocene; (2) create a global database of long-term (10²-10⁴ years) human-accelerated soil erosion and sediment flux records; (3) identify hot spots of soil erosion and sediment deposition during the Anthropocene; and (4) locate data-poor regions where particular socio-ecological systems are not well understood, as strategic foci for future work. This paper will present the latest progress of the PAGES GloSS working group.

  8. Role of the Caenorhabditis elegans multidrug resistance gene, mrp-4, in gut granule differentiation.

    PubMed

    Currie, Erin; King, Brian; Lawrenson, Andrea L; Schroeder, Lena K; Kershner, Aaron M; Hermann, Greg J

    2007-11-01

    Caenorhabditis elegans gut granules are lysosome-related organelles with birefringent contents. mrp-4, which encodes an ATP-binding cassette (ABC) transporter homologous to mammalian multidrug resistance proteins, functions in the formation of gut granule birefringence. mrp-4(-) embryos show a delayed appearance of birefringent material in the gut granule but otherwise appear to form gut granules properly. mrp-4(+) activity is required for the extracellular mislocalization of birefringent material, body-length retraction, and NaCl sensitivity, phenotypes associated with defective gut granule biogenesis exhibited by embryos lacking the activity of GLO-1/Rab38, a putative GLO-1 guanine nucleotide exchange factor GLO-4, and the AP-3 complex. Multidrug resistance protein (MRP)-4 localizes to the gut granule membrane, consistent with it playing a direct role in the transport of molecules that compose and/or facilitate the formation of birefringent crystals within the gut granule. However, MRP-4 is also present in oocytes and early embryos, and our genetic analyses indicate that its site of action in the formation of birefringent material may not be limited to just the gut granule in embryos. In a search for genes that function similarly to mrp-4(+), we identified WHT-2, another ABC transporter that acts in parallel to MRP-4 for the formation of birefringent material in the gut granule.

  9. Characterization of the chromosome 4 genes that affect fluconazole-induced disomy formation in Cryptococcus neoformans.

    PubMed

    Ngamskulrungroj, Popchai; Chang, Yun; Hansen, Bryan; Bugge, Cliff; Fischer, Elizabeth; Kwon-Chung, Kyung J

    2012-01-01

    Heteroresistance in Cryptococcus neoformans is an intrinsic adaptive resistance to azoles and the heteroresistant phenotype is associated with disomic chromosomes. Two chromosome 1 (Chr1) genes, ERG11, the fluconazole target, and AFR1, a drug transporter, were reported as major factors in the emergence of Chr1 disomy. In the present study, we show Chr4 to be the second most frequently formed disomy at high concentrations of fluconazole (FLC) and characterize the importance of resident genes contributing to disomy formation. We deleted nine Chr4 genes presumed to have functions in ergosterol biosynthesis, membrane composition/integrity or drug transportation that could influence Chr4 disomy under FLC stress. Of these nine, disruption of three genes homologous to Sey1 (a GTPase), Glo3 and Gcs2 (the ADP-ribosylation factor GTPase activating proteins) significantly reduced the frequency of Chr4 disomy in heteroresistant clones. Furthermore, FLC resistant clones derived from sey1Δglo3Δ did not show disomy of either Chr4 or Chr1 but instead had increased the copy number of the genes proximal to the ERG11 locus on Chr1. Since the three genes are critical for the integrity of the endoplasmic reticulum (ER) in Saccharomyces cerevisiae, we used Sec61ß-GFP fusion as a marker to study the ER in the mutants. The cytoplasmic ER was found to be elongated in sey1Δ but without any discernable alteration in gcs2Δ and glo3Δ under fluorescence microscopy. The aberrant ER morphology of all three mutant strains, however, was discernable by transmission electron microscopy. A 3D reconstruction using Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) revealed considerably reduced reticulation in the ER of glo3Δ and gcs2Δ strains. In sey1Δ, ER reticulation was barely detectable and cisternae were expanded extensively compared to the wild-type strains. These data suggest that the genes required for maintenance of ER integrity are important for the formation of disomic chromosomes in C. neoformans under azole stress.

  10. Thermal-Flow Code for Modeling Gas Dynamics and Heat Transfer in Space Shuttle Solid Rocket Motor Joints

    NASA Technical Reports Server (NTRS)

    Wang, Qunzhen; Mathias, Edward C.; Heman, Joe R.; Smith, Cory W.

    2000-01-01

    A new thermal-flow simulation code, called SFLOW, has been developed to model the gas dynamics, heat transfer, and O-ring and flow-path erosion inside the space shuttle solid rocket motor joints by combining SINDA/G, a commercial thermal analyzer, and SHARP, a general-purpose CFD code developed at Thiokol Propulsion. SHARP was modified so that friction, heat transfer, mass addition, and minor losses in one-dimensional flow can be taken into account. The pressure, temperature and velocity of the combustion gas in the leak paths are calculated in SHARP by solving the time-dependent Navier-Stokes equations, while the heat conduction in the solid is modeled by SINDA/G. The two codes are coupled by the heat flux at the solid-gas interface. A few test cases are presented, and the results from SFLOW agree very well with the exact solutions or experimental data. These cases include Fanno flow, where friction is important; Rayleigh flow, where heat transfer between gas and solid is important; flow with mass addition due to the erosion of the solid wall; a transient volume venting process; and some transient one-dimensional flows with analytical solutions. In addition, SFLOW is applied to model the RSRM nozzle joint 4 subscale hot-flow tests, and the predicted pressures, temperatures (both gas and solid), and O-ring erosion agree well with the experimental data. It was also found that the heat transfer between gas and solid has a major effect on the pressures and temperatures of the fill bottles in the RSRM nozzle joint 4 configuration No. 8 test.
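    The coupling strategy (flow solver and thermal solver exchanging the interface heat flux each time step) can be sketched with a lumped-capacitance stand-in for the solid-side conduction solution; the constants and names below are illustrative only, not values from the tests:

```python
def couple_step(t_gas, t_wall, h, area, mass, cp, dt):
    """One explicit coupling iteration: the flow side supplies a
    convective heat rate at the gas-solid interface; the solid side
    advances the wall temperature by a lumped-capacitance update.
    Units: K, W/(m^2 K), m^2, kg, J/(kg K), s."""
    q = h * area * (t_gas - t_wall)          # heat rate into the wall, W
    t_wall_next = t_wall + q * dt / (mass * cp)
    return q, t_wall_next
```

    Iterating this step drives the wall temperature toward the gas temperature; in SFLOW the exchanged flux instead feeds a full conduction model on the solid side while the CFD code updates the gas state.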

  11. GloSensor assay for discovery of GPCR-selective ligands.

    PubMed

    Kumar, Boda Arun; Kumari, Poonam; Sona, Chandan; Yadav, Prem N

    2017-01-01

    G protein-coupled receptors (GPCRs) are modulators of almost every physiological process and are therefore among the most favored therapeutic targets for a wide spectrum of diseases. Ideally, high-throughput functional assays should be implemented that allow the screening of large compound libraries in a cost-effective manner to identify agonists, antagonists, and allosteric modulators in the same assay. Taking advantage of the increased understanding of GPCR structure and signaling, several commercially available functional assays based on fluorescence or chemiluminescence detection are being used in both academia and industry. In this chapter, we provide a step-by-step method and guidelines to perform cAMP measurement using the GloSensor assay. Finally, we also discuss the analysis and interpretation of results obtained using this assay by providing several examples of Gs- and Gi-coupled GPCRs. © 2017 Elsevier Inc. All rights reserved.

  12. Acquisition management of the Global Transportation Network

    DOT National Transportation Integrated Search

    2001-08-02

    This report discusses the acquisition management of the Global transportation Network by the U.S. Transportation Command. This report is one in a series of audit reports addressing DoD acquisition management of information technology systems. The Glo...

  13. Historical land cover changes in the Great Lakes Region

    USGS Publications Warehouse

    Cole, K.L.; Davis, M.B.; Stearns, F.; Guntenspergen, G.; Walker, K.; Sisk, Thomas D.

    1999-01-01

    Two different methods of reconstructing historical vegetation change, drawing on General Land Office (GLO) surveys and fossil pollen deposits, are demonstrated by using data from the Great Lakes region. Both types of data are incorporated into landscape-scale analyses and presented through geographic information systems. Results from the two methods reinforce each other and allow reconstructions of past landscapes at different time scales. Changes to forests of the Great Lakes region during the last 150 years were far greater than the changes recorded over the preceding 1,000 years. Over the last 150 years, the total amount of forested land in the Great Lakes region declined by over 40%, and much of the remaining forest was converted to early successional forest types as a result of extensive logging. These results demonstrate the utility of using GLO survey data in conjunction with other data sources to reconstruct a generalized 'presettlement' condition and assess changes in landcover.

  14. GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales

    USGS Publications Warehouse

    He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.

    2000-01-01

    To reconstruct the forest landscape of the pre-European settlement period, we developed a GIS interpolation approach to convert witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better describe continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million-ha landscape in northern Wisconsin, U.S.A. at different scales, and we discuss the implications of the processing results at each scale. Compared with traditional GLO mapping, which has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species, and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 x 1-section resolution preserved spatial information but derived the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 x 2-section resolution were on the order of three to four times for the least common species, two to three times for the medium to most common species, and one to two times for the most common or highly contagious species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on species importance values derived for all species. The results provide a unique basis for further study of land cover changes occurring after European settlement.
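    The vegetation indices mentioned above follow standard definitions; one common convention, assumed here purely for illustration, takes a species' importance value as the sum of its relative density and relative dominance:

```python
def importance_values(counts, basal_areas):
    """Per-species relative density (%), relative dominance (%), and
    importance value IV = rel. density + rel. dominance, computed from
    witness-tree counts and basal areas (one common convention; other
    studies also add relative frequency)."""
    n_total = sum(counts.values())
    ba_total = sum(basal_areas.values())
    return {
        sp: (100.0 * counts[sp] / n_total,
             100.0 * basal_areas[sp] / ba_total,
             100.0 * counts[sp] / n_total + 100.0 * basal_areas[sp] / ba_total)
        for sp in counts
    }
```

    With this convention the relative densities sum to 100% across species and the importance values to 200%, which makes cross-resolution comparisons straightforward.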

  15. Hydrological Predictability for the Peruvian Amazon

    NASA Astrophysics Data System (ADS)

    Towner, Jamie; Stephens, Elizabeth; Cloke, Hannah; Bazo, Juan; Coughlan, Erin; Zsoter, Ervin

    2017-04-01

    Population growth in the Peruvian Amazon has prompted the expansion of livelihoods further into the floodplain, increasing vulnerability to the annual rise and fall of the river. This growth has coincided with a period of increasing hydrological extremes with more frequent severe flood events. The anticipation and forecasting of these events are crucial for mitigating vulnerability. Forecast-based Financing (FbF), an initiative of the German Red Cross, implements risk-reducing actions based on threshold exceedance within hydrometeorological forecasts using the Global Flood Awareness System (GloFAS). However, the lead times required to complete certain actions can be long (e.g. several weeks to months ahead to purchase materials and reinforce houses) and are beyond the current capabilities of GloFAS. Therefore, further calibration of the model is required in addition to understanding the climatic drivers and associated hydrological response for specific flood events, such as those observed in 2009, 2012 and 2015. This review sets out to determine the current capabilities of the GloFAS model while exploring the limits of predictability for the Amazon basin. More specifically, it examines how the temporal patterns of flow within the main coinciding tributaries correspond to the overall Amazonian flood wave under various climatic and meteorological influences. Linking the source areas of flow to predictability within the seasonal forecasting system will develop the ability to expand the limit of predictability of the flood wave. This presentation will focus on the Iquitos region of Peru, while providing an overview of the new techniques and current challenges faced within seasonal flood prediction.
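    The FbF trigger logic described above (act once the forecast probability of exceeding a danger threshold reaches an agreed activation level) can be sketched as follows; the function names and thresholds are illustrative, not part of the GloFAS or FbF specifications:

```python
def exceedance_probability(ensemble_flows, threshold):
    """Fraction of ensemble members whose forecast flow exceeds the
    danger threshold."""
    return sum(q > threshold for q in ensemble_flows) / len(ensemble_flows)

def fbf_trigger(ensemble_flows, threshold, min_probability):
    """Trigger early action once the exceedance probability reaches
    the pre-agreed activation level."""
    return exceedance_probability(ensemble_flows, threshold) >= min_probability
```

    The activation level trades false alarms against missed events, which is why the long-lead actions mentioned above need both model calibration and an understanding of the climatic drivers behind specific floods.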

  16. Asian summer monsoon seasonal prediction skill in the Met Office GloSea5 model and its dependence on mean state biases

    NASA Astrophysics Data System (ADS)

    Bush, Stephanie; Turner, Andrew; Martin, Gill; Woolnough, Steve

    2015-04-01

    Predicting the circulation and precipitation features of the Asian monsoon on time scales of weeks to the season ahead remains a challenge for prediction centres. Current state-of-the-art models retain large biases, particularly dryness over India, which evolve rapidly from initialization and persist into centennial length climate integrations, illustrating the seamless nature of the monsoon problem. We present initial results from our Ministry of Earth Sciences Indian Monsoon Mission collaboration project to assess and improve weekly-to-seasonal forecasts in the Met Office Unified Model (MetUM) coupled initialized Global Seasonal Prediction System (GloSea5). Using a 14-year hindcast ensemble of integrations in which atmosphere, ocean and sea-ice components are initialized from May start dates, we assess the monsoon seasonal prediction skill and global mean state biases of GloSea5. Initial May and June biases include a lack of precipitation over the Indian peninsula, and a weakened monsoon flow, and these give way to a more robust pattern of excess precipitation in the western north Pacific, lack of precipitation over the Maritime Continent, excess westerlies across the Indian peninsula and Indochina, and cool SSTs in the eastern equatorial Indian Ocean and western north Pacific in July and August. Despite these mean state biases, the interannual correlation of predicted JJA all India rainfall from 1998 to 2009 with TRMM is fairly high at 0.68. Future work will focus on the prospects for further improving this skill with bias correction techniques.

  17. 75 FR 65383 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Telemanagement Forum

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ..., CA; Compunet Services, Inc., Stockbridge, GA; Cordys, Putten, THE NETHERLANDS; Cosmo Bulgaria Mobile... Inc. to Cloud.com , Cupertino, CA; Globul to Cosmo Bulgaria Mobile EAD(GloBul), Sofia, BULGARIA; CTBC...

  18. A national-scale seasonal hydrological forecast system: development and evaluation over Britain

    NASA Astrophysics Data System (ADS)

    Bell, Victoria A.; Davies, Helen N.; Kay, Alison L.; Brookshaw, Anca; Scaife, Adam A.

    2017-09-01

    Skilful winter seasonal predictions for the North Atlantic circulation and northern Europe have now been demonstrated and the potential for seasonal hydrological forecasting in the UK is now being explored. One of the techniques being used combines seasonal rainfall forecasts provided by operational weather forecast systems with hydrological modelling tools to provide estimates of seasonal mean river flows up to a few months ahead. The work presented here shows how spatial information contained in a distributed hydrological model typically requiring high-resolution (daily or better) rainfall data can be used to provide an initial condition for a much simpler forecast model tailored to use low-resolution monthly rainfall forecasts. Rainfall forecasts (hindcasts) from the GloSea5 model (1996 to 2009) are used to provide the first assessment of skill in these national-scale flow forecasts. The skill in the combined modelling system is assessed for different seasons and regions of Britain, and compared to what might be achieved using other approaches such as use of an ensemble of historical rainfall in a hydrological model, or a simple flow persistence forecast. The analysis indicates that only limited forecast skill is achievable for Spring and Summer seasonal hydrological forecasts; however, Autumn and Winter flows can be reasonably well forecast using (ensemble mean) rainfall forecasts based on either GloSea5 forecasts or historical rainfall (the preferred type of forecast depends on the region). Flow forecasts using ensemble mean GloSea5 rainfall perform most consistently well across Britain, and provide the most skilful forecasts overall at the 3-month lead time. 
Much of the skill (64 %) in the 1-month ahead seasonal flow forecasts can be attributed to the hydrological initial condition (particularly in regions with a significant groundwater contribution to flows), whereas for the 3-month ahead lead time, GloSea5 forecasts account for ˜ 70 % of the forecast skill (mostly in areas of high rainfall to the north and west) and only 30 % of the skill arises from hydrological memory (typically groundwater-dominated areas). Given the high spatial heterogeneity in typical patterns of UK rainfall and evaporation, future development of skilful spatially distributed seasonal forecasts could lead to substantial improvements in seasonal flow forecast capability, potentially benefitting practitioners interested in predicting hydrological extremes, not only in the UK but also across Europe.

  19. A global, space-based stratospheric aerosol climatology: 1979 to 2014

    NASA Astrophysics Data System (ADS)

    Thomason, L. W.; Vernier, J. P.; Bourassa, A. E.; Millan, L.; Manney, G. L.

    2016-12-01

    Herein, we report on a global space-based stratospheric aerosol climatology (GloSSAC) that has been developed to support Coupled Model Intercomparison Project Phase 6 (CMIP6) (REF to CMIP6 and ETH work). GloSSAC is most closely related to the ASAP[SPARC, 2006] and CCMI data sets and follows a similar approach used to produce those data sets. It is primarily built using space-based measurements by a number of instruments including the SAGE series, OSIRIS, CALIPSO, CLAES and HALOE. The data set is presented as monthly depictions for 80S to 80N and from at least the tropopause to 40 km. The data set consists primarily of measurements by the instruments at their native wavelength and measurement type (e.g., extinction coefficient). However, every bin in these monthly grids receives measured or indirectly inferred values for aerosol extinction coefficient at 525 and 1020 nm. Generally, bins where no data are available are filled via simple linear interpolation in time only. The exceptions are in the SAGE I/II gap from 1982 to 1984 where data from SAM II and ground-based and airborne lidar data sets are used to span the 3 years between the end of the SAGE I mission in November 1981 and the beginning of the SAGE II mission in October 1984. Ground-based lidar also supplements space-based data in the months following the Pinatubo eruption when much of the lower stratosphere is too optically opaque for occultation measurements. This data set includes total aerosol surface area density and volume estimates based on Thomason et al.[2008] though these should be interpreted as bounding values (low and high) rather than functional aerosol parameters that are generally produced from this and predecessor data sets by other parties. Unlike previous versions of this data set, GloSSAC has been permanently archived at NASA's Atmospheric Science Data Center and a digital object identifier (doi) for GloSSAC is available. 
SPARC (2006), Assessment of Stratospheric Aerosol Properties (ASAP), 348 pp., WCRP-124, WMO/TD No. 1295, SPARC Report No. 4.
Thomason, L. W., S. P. Burton, B. P. Luo, and T. Peter (2008), SAGE II measurements of stratospheric aerosol properties at non-volcanic levels, Atmospheric Chemistry and Physics, 8(4), 983-995, doi:10.5194/acp-8-983-2008.
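The time-only gap filling described in the abstract can be sketched as below. This is an illustrative reimplementation with a hypothetical (time, latitude, altitude) array layout, not the GloSSAC processing code:

```python
import numpy as np

def fill_time_gaps(grid):
    """Fill missing (NaN) bins by linear interpolation along the time axis only.

    grid: (n_time, n_lat, n_alt) array of extinction coefficients; NaN = no data.
    Bins are interpolated between the nearest valid months in the same
    latitude/altitude cell; no spatial filling is attempted.
    """
    filled = grid.copy()
    t = np.arange(grid.shape[0])
    for j in range(grid.shape[1]):
        for k in range(grid.shape[2]):
            col = filled[:, j, k]            # one time series per (lat, alt) bin
            bad = np.isnan(col)
            if bad.any() and not bad.all():
                col[bad] = np.interp(t[bad], t[~bad], col[~bad])
    return filled

# Tiny example: a 5-month series with months 1-3 missing
grid = np.full((5, 1, 1), np.nan)
grid[0, 0, 0], grid[4, 0, 0] = 1.0, 5.0
filled = fill_time_gaps(grid)
```

Longer gaps, such as the 1982-1984 SAGE I/II gap, are exactly where this simple scheme is weakest, which is why the climatology substitutes SAM II and lidar data there instead.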

  20. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between code length and the number of ones in the code, and by considering the effect of noise on code selection under an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time spent searching for the optimal code decreases with the presented method, and that the restored image has better subjective quality and superior objective evaluation values.
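A genetic search over binary shutter codes can be sketched as follows. The fitness used here is a common coded-exposure criterion (keep the code's DFT magnitude away from zero so deconvolution stays well-conditioned), standing in for the paper's improved, noise-aware criterion; the code length, number of ones, and GA parameters are all illustrative:

```python
import numpy as np

def fitness(code):
    """Stand-in criterion: favour codes whose DFT magnitude never gets small,
    so the deblurring inverse filter stays well-conditioned."""
    return np.abs(np.fft.fft(code)).min()

def ga_search(length=32, ones=16, pop=40, gens=100, seed=1):
    """Tiny genetic search over binary exposure codes with a fixed number of ones."""
    rng = np.random.default_rng(seed)

    def random_code():
        c = np.zeros(length, dtype=float)
        c[rng.choice(length, ones, replace=False)] = 1.0
        return c

    population = [random_code() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]        # keep the fitter half
        children = []
        for parent in survivors:
            child = parent.copy()
            # mutation that preserves the number of ones: swap a 1 and a 0
            i = rng.choice(np.flatnonzero(child == 1.0))
            j = rng.choice(np.flatnonzero(child == 0.0))
            child[i], child[j] = 0.0, 1.0
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = ga_search()
```

Constraining the number of ones during mutation mirrors the paper's observation that code length and the count of ones jointly determine light throughput and invertibility, so the search only explores codes with the chosen duty cycle.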

  1. Combination of 13 cis-retinoic acid and tolfenamic acid induces apoptosis and effectively inhibits high-risk neuroblastoma cell proliferation.

    PubMed

    Shelake, Sagar; Eslin, Don; Sutphin, Robert M; Sankpal, Umesh T; Wadwani, Anmol; Kenyon, Laura E; Tabor-Simecka, Leslie; Bowman, W Paul; Vishwanatha, Jamboor K; Basha, Riyaz

    2015-11-01

Chemotherapeutic regimens used for the treatment of Neuroblastoma (NB) cause long-term side effects in pediatric patients. NB arises in immature sympathetic nerve cells and primarily affects infants and children. A high rate of relapse in high-risk neuroblastoma (HRNB) necessitates the development of alternative strategies for effective treatment. This study investigated the efficacy of a small molecule, tolfenamic acid (TA), for enhancing the anti-proliferative effect of 13 cis-retinoic acid (RA) in HRNB cell lines. LA1-55n and SH-SY5Y cells were treated with TA (30 μM) or RA (20 μM) or both (optimized doses derived from dose curves) for 48 h, and the effects on cell viability, apoptosis and selected molecular markers (Sp1, survivin, AKT and ERK1/2) were tested. Cell viability and caspase activity were measured using the CellTiter-Glo and Caspase-Glo kits. The apoptotic cell population was determined by flow cytometry with Annexin-V staining. The expression of Sp1, survivin, AKT, ERK1/2 and c-PARP was evaluated by Western blots. The combination therapy of TA and RA resulted in significant inhibition of cell viability (p<0.0001) when compared to individual agents. The anti-proliferative effect is accompanied by a decrease in Sp1 and survivin expression and an increase in apoptotic markers, Annexin-V positive cells, caspase 3/7 activity and c-PARP levels. Notably, the TA+RA combination also caused downregulation of AKT and ERK1/2, suggesting a distinct impact on survival and proliferation pathways via signaling cascades. This study demonstrates that TA-mediated inhibition of Sp1 in combination with RA provides a novel therapeutic strategy for the effective treatment of HRNB in children. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Swainsonine biosynthesis genes in diverse symbiotic and pathogenic fungi

    USDA-ARS?s Scientific Manuscript database

    Swainsonine, a cytotoxic fungal alkaloid and a potential cancer therapy drug, is produced by the insect pathogen and plant symbiont, Metarhizium robertsii, the clover pathogen Slafractonia leguminicola, locoweed symbionts belonging to Alternaria sect. Undifilum, and a recently discovered morning glo...

  3. Utility of Adenosine Monophosphate Detection System for Monitoring the Activities of Diverse Enzyme Reactions.

    PubMed

    Mondal, Subhanjan; Hsiao, Kevin; Goueli, Said A

Adenosine monophosphate (AMP) is a key cellular metabolite regulating energy homeostasis and signal transduction. AMP is also a product of various enzymatic reactions, many of which are dysregulated during disease conditions. Thus, monitoring the activities of these enzymes is a primary goal for developing modulators for these enzymes. In this study, we demonstrate the versatility of an enzyme-coupled assay that quantifies the amount of AMP produced by any enzymatic reaction regardless of its substrates. We successfully applied it to enzyme reactions that use adenosine triphosphate (ATP) as a substrate (aminoacyl tRNA synthetase and DNA ligase) by an elaborate strategy of removing residual ATP and converting the AMP produced into ATP, so that it can be detected using luciferase/luciferin, generating light. We also tested this assay to measure the activities of AMP-generating enzymes that do not require ATP as substrate, including phosphodiesterases (cyclic adenosine monophosphate) and Escherichia coli DNA ligases (nicotinamide adenine dinucleotide [NAD + ]). In a further elaboration of the AMP-Glo platform, we coupled it to E. coli DNA ligase, enabling measurement of NAD + and enzymes that use NAD + , such as monoadenosine and polyadenosine diphosphate-ribosyltransferases. Sulfotransferases use 3'-phosphoadenosine-5'-phosphosulfate as the universal sulfo-group donor, and phosphoadenosine-5'-phosphate (PAP) is the universal product. PAP can be quantified by converting PAP to AMP with a Golgi-resident PAP-specific phosphatase, IMPAD1. By coupling IMPAD1 to the AMP-Glo system, we can measure the activities of sulfotransferases. Thus, by utilizing combinations of biochemical enzymatic conversions of various cellular metabolites to AMP, we were able to demonstrate the versatility of the AMP-Glo assay.

  4. [miR-497 suppresses proliferation of human cervical carcinoma HeLa cells by targeting cyclin E1].

    PubMed

    Han, Jiming; Huo, Manpeng; Mu, Mingtao; Liu, Junjun; Zhang, Jing

    2014-06-01

    To evaluate the effect of miR-497 on proliferation of human cervical carcinoma HeLa cells and target relationship between miR-497 and cyclin E1 (CCNE1). Pre-miR-497 sequences were synthesized and cloned into pcDNATM6.2-GW to construct recombinant plasmid pcDNATM6.2-GW-pre-miR-497 and identified by real-time quantitative PCR (qRT-PCR). In addition, sequences of the wild-type CCNE1 (WT-CCNE1) and mutant CCNE1 (MT-CCNE1) were respectively cloned into pmirGLO vectors. MTT assay was used to explore the impact of miR-497 on the proliferation of HeLa cells. Furthermore, the target effect of miR-497 on the CCNE1 was identified by dual-luciferase reporter assay system, qRT-PCR and Western blotting. The recombinant plasmids pcDNATM6.2-GW-pre-miR-497 and pmirGLO-WT-CCNE1, pmirGLO-MT-CCNE1 were successfully constructed, and the miR-497 expression level in HeLa cells transfected with pre-miR-497 was significantly higher than that in the neg-miR group (P<0.05). MTT assay showed that miR-497 could significantly inhibit the proliferation of HeLa cells (P<0.05). A remarkable reduction of luciferase activities of WT-CCNE1 reporter was observed in HeLa cells with pre-miR-497 transfection (P<0.01), and the mRNA and protein expression levels of CCNE1 were down-regulated in HeLa cells transfected with pre-miR-497 (P<0.05). Over-expressed miR-497 in HeLa cells could suppress cell proliferation by targeting CCNE1.

  5. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion of the nodes, and the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  6. Evaluation of reformulated thermal control coatings in a simulated space environment. Part 1: YB-71

    NASA Technical Reports Server (NTRS)

    Cerbus, Clifford A.; Carlin, Patrick S.

    1994-01-01

The Air Force Space and Missile Systems Center and Wright Laboratory Materials Directorate (WL/ML) have sponsored an effort to reformulate and qualify Illinois Institute of Technology Research Institute (IITRI) spacecraft thermal control coatings. S13G/LO-1, Z93, and YB-71 coatings were reformulated because the potassium silicate binder, Sylvania PS-7, used in the coatings is no longer manufactured. Coatings utilizing the binder's replacement candidate, Kasil 2130, manufactured by The Philadelphia Quartz (PQ) Corporation, Baltimore, Maryland, are undergoing testing at the Materials Directorate's Space Combined Effects Primary Test and Research Equipment (SCEPTRE) Facility operated by the University of Dayton Research Institute (UDRI). The simulated space environment consists of combined ultraviolet (UV) and electron exposure with in situ specimen reflectance measurements. A brief description of the effort at IITRI, results and discussion from testing the reformulated YB-71 coating in SCEPTRE, and plans for further testing of reformulated Z93 and S13G/LO-1 are presented.

  7. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with those of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than them. The results of the SDPSO code match well with those of the TGASU code for both underdetermined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
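The core of such an unfolding scheme is a PSO search for a non-negative spectrum x minimizing the residual ||Rx - m|| between the folded spectrum and the measured pulse-height distribution. A generic sketch follows (a textbook PSO, not the SDPSO code itself; the response matrix and spectrum are synthetic):

```python
import numpy as np

def pso_unfold(R, m, particles=30, iters=200, seed=0):
    """Basic PSO: find x >= 0 minimizing ||R x - m||.

    R: (n_channels, n_bins) detector response matrix
    m: (n_channels,) measured pulse-height distribution
    """
    rng = np.random.default_rng(seed)
    n = R.shape[1]
    scale = m.max() / max(R.max(), 1e-12)
    x = rng.uniform(0.0, scale, (particles, n))   # candidate spectra
    v = np.zeros_like(x)
    cost = lambda x_: np.linalg.norm(R @ x_ - m)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, None)             # keep spectra non-negative
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g

# Synthetic check: an 8-bin spectrum folded through a random response matrix
rng = np.random.default_rng(3)
R = rng.uniform(0.0, 1.0, (12, 8))
true_x = rng.uniform(0.0, 5.0, 8)
m = R @ true_x
x_hat = pso_unfold(R, m)
```

Clipping positions to zero is the simplest way to enforce the physical non-negativity of a neutron spectrum inside PSO; more careful implementations use penalty terms or reflecting boundaries.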

  8. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  9. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and the physical layers with automatic repeat request and rate compatible punctured convolutional code over additive white gaussian noise channel as well as channel times allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied for the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, end-to-end distortion of utility functions can be significantly reduced with the optimal slices of a picture especially at low signal-to-noise ratio.

  10. The Evolving Defense Industrial Base

    DTIC Science & Technology

    2007-05-16

Raytheon – Flight Simulation; Raytheon Co. Plant, Quincy; The Result Was Dramatic Consolidation... The Market Landscape has Changed in the U.S. ...through Entry into New Markets (Regions and Segments) – Teaming & Competing Globally is Part and Parcel to this: “Glo-opetition” Has Consolidation

  11. Biobased alternatives to guar gum as tackifiers for hydromulch

    USDA-ARS?s Scientific Manuscript database

    Guar gum, obtained from guar [Cyamopsis tetragonoloba (L.) Taub.] seeds, is currently the principal gum used as a tackifier (binder) for hydraulically-applied mulches (hydromulches) used in erosion control. The oil industry’s increased use of guar gum in hydraulic fracturing together with lower glo...

  12. Positive selection and ancient duplications in the evolution of class B floral homeotic genes of orchids and grasses

    PubMed Central

    Mondragón-Palomino, Mariana; Hiese, Luisa; Härter, Andrea; Koch, Marcus A; Theißen, Günter

    2009-01-01

Background: Positive selection is recognized as the prevalence of nonsynonymous over synonymous substitutions in a gene. Models of the functional evolution of duplicated genes consider neofunctionalization as key to the retention of paralogues. For instance, duplicate transcription factors are specifically retained in plant and animal genomes and both positive selection and transcriptional divergence appear to have played a role in their diversification. However, the relative impact of these two factors has not been systematically evaluated. Class B MADS-box genes, comprising DEF-like and GLO-like genes, encode developmental transcription factors essential for establishment of perianth and male organ identity in the flowers of angiosperms. Here, we contrast the role of positive selection and the known divergence in expression patterns of genes encoding class B-like MADS-box transcription factors from monocots, with emphasis on the family Orchidaceae and the order Poales. Although in the monocots these two groups are highly diverse and have a strongly canalized floral morphology, there is no information on the role of positive selection in the evolution of their distinctive flower morphologies. Published research shows that in Poales, class B-like genes are expressed in stamens and in lodicules, the perianth organs whose identity might also be specified by class B-like genes, like the identity of the inner tepals of their lily-like relatives. In orchids, however, the number and pattern of expression of class B-like genes have greatly diverged. Results: The DEF-like genes from Orchidaceae form four well-supported, ancient clades of orthologues. In contrast, orchid GLO-like genes form a single clade of ancient orthologues and recent paralogues. DEF-like genes from orchid clade 2 (OMADS3-like genes) are under less stringent purifying selection than the other orchid DEF-like and GLO-like genes. 
In comparison with orchids, purifying selection was less stringent in DEF-like and GLO-like genes from Poales. Most importantly, positive selection took place before the major organ reduction and losses in the floral axis that eventually yielded the zygomorphic grass floret. Conclusion: In DEF-like genes of Poales, positive selection on the region mediating interactions with other proteins or DNA could have triggered the evolution of the regulatory mechanisms behind the development of grass-specific reproductive structures. Orchidaceae show a different trend, where gene duplication and transcriptional divergence appear to have played a major role in the canalization and modularization of perianth development. PMID:19383167

  13. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
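Of the optimizations studied, loop fusion is the easiest to show in miniature: merging two passes over the same data into one avoids writing out and re-reading an intermediate array, improving locality. A language-agnostic sketch (not LULESH code):

```python
def unfused(a, b):
    """Two separate passes: the intermediate list s is materialized,
    written out, and then re-read by the second loop."""
    s = [x * x for x in a]
    return [si + y for si, y in zip(s, b)]

def fused(a, b):
    """One fused pass: each element is squared and accumulated while
    it is still hot in cache; no intermediate array exists."""
    return [x * x + y for x, y in zip(a, b)]

result = fused([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The two versions compute identical results; the difference is purely in memory traffic, which is why, as the paper measures, such transformations shift power and energy between the memory and compute subsystems rather than changing the answer.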

  14. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    NASA Astrophysics Data System (ADS)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
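The optimality criterion above, minimized algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian. A small sketch of that measurement on toy graphs (not the phenotypic graphs of the paper):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A,
    where D is the degree matrix and A the (symmetric) adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]

# Path graph on 3 nodes vs. a triangle: the denser graph is better connected,
# so its algebraic connectivity is larger.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

The value is zero exactly when the graph is disconnected and grows with connectivity, so a code whose phenotypic graph minimizes it (while staying connected) keeps mutational neighbours sparse, the error-tolerance property the paper attributes to the SGC.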

  15. Partial Life-Cycle and Acute Toxicity of Perfluoroalkyl Acids to Freshwater Mussels

    EPA Science Inventory

    Freshwater mussels are among the most sensitive aquatic organisms to many contaminants and have complex life-cycles that include several distinct life stages with unique contaminant exposure pathways. Standard acute (24–96 h) and chronic (28 d) toxicity tests with free larva (glo...

  16. 75 FR 51823 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... applications. Transforming Growth Factor Beta-1 (TGF-[beta]1) Transgenic Mouse Model Description of Technology... developed a transgenic mouse model, designated [beta]1\\glo\\, which permits conditional, gene-specific... gene by Cre recombinase allows expression of TGF-[beta]1. Thus, these mice may be cross-bred with a...

  17. Girls Leading Outward

    ERIC Educational Resources Information Center

    Hamed, Heather; Reyes, Jazmin; Moceri, Dominic C.; Morana, Laura; Elias, Maurice J.

    2011-01-01

    The authors describe a program implemented in Red Bank Middle School in New Jersey to help at-risk, minority middle school girls realize their leadership potential. The GLO (Girls Leading Outward) program was developed by the Developing Safe and Civil Schools Project at Rutgers University and is facilitated by university students. Selected middle…

  18. An optimization program based on the method of feasible directions: Theory and users guide

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.

    1994-01-01

The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960's, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed focusing on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.

  19. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  20. Global Conceptualization of the Professional Learning Community Process: Transitioning from Country Perspectives to International Commonalities

    ERIC Educational Resources Information Center

    Huffman, Jane B.; Olivier, Dianne F.; Wang, Ting; Chen, Peiying; Hairon, Salleh; Pang, Nicholas

    2016-01-01

    The authors seek to find common PLC structures and actions among global educational systems to enhance understanding and practice. Six international researchers formed the Global Professional Learning Community Network (GloPLCNet), conducted literature reviews of each country's involvement with PLC actions, and noted similarities and common…

  1. Teaching of Computer Science Topics Using Meta-Programming-Based GLOs and LEGO Robots

    ERIC Educational Resources Information Center

    Štuikys, Vytautas; Burbaite, Renata; Damaševicius, Robertas

    2013-01-01

    The paper's contribution is a methodology that integrates two educational technologies (GLO and LEGO robot) to teach Computer Science (CS) topics at the school level. We present the methodology as a framework of 5 components (pedagogical activities, technology driven processes, tools, knowledge transfer actors, and pedagogical outcomes) and…

  2. Characterization of unpaved road condition through the use of remote sensing project - phase II, deliverable 8-D: final report.

    DOT National Transportation Integrated Search

    2016-03-07

    Building on the success of developing a UAV based unpaved road assessment system in Phase I, the project team was awarded a Phase II project by the USDOT to focus on outreach and implementation. The project team added Valerie Lefler of Integrated Glo...

  3. Weeds of the Midwestern United States and Central Canada

    USDA-ARS?s Scientific Manuscript database

    The book, Weeds of the Central United States and Canada, includes 356 of the most common and/or troublesome weeds of agricultural and natural areas found within the central region of the United States and Canada. The books includes an introduction, a key to plant families contained in the book, glo...

  4. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  5. Effect of four over-the-counter tooth-whitening products on enamel microhardness.

    PubMed

    Majeed, A; Grobler, S R; Moola, M H; Oberholzer, T G

    2011-10-01

    This in vitro study evaluated the effect of four over-the-counter tooth-whitening products on enamel microhardness. Fifty enamel blocks were prepared from extracted human molar teeth. The enamel surfaces were polished up to 1200 grit fineness and the specimens were randomly divided into five groups. Enamel blocks were exposed to: Rapid White (n=10); Absolute White (n=10); Speed White (n=10); and White Glo (n=10) whitening products, according to the manufacturers' instructions. As a control, ten enamel blocks were kept in artificial saliva at 37 degrees C without any treatment. Microhardness values were obtained before exposure (baseline) and after 1, 7 and 14-day treatment periods using a digital hardness tester with a Vickers diamond indenter. Data were analysed using the Wilcoxon Signed Rank Sum Test, one-way ANOVA and the Tukey-Kramer Multiple Comparison Test (p<0.05). Both Rapid White and Absolute White reduced enamel microhardness. Speed White increased the microhardness of enamel, while White Glo and artificial saliva had no effect on hardness. Over-the-counter tooth-whitening products might decrease enamel microhardness depending on the type of product.

  6. Regional frequency analysis of extreme rainfalls using partial L moments method

    NASA Astrophysics Data System (ADS)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach based on regional frequency analysis using L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the method of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed in determining the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions were identified as suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments outperforms the L and LH moments methods for the estimation of large-return-period events.
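    The PL-moments method builds on ordinary sample L-moments by censoring the smallest observations. As a rough illustration of the underlying quantities only (plain sample L-moments, not the partial/censored variant the study actually uses), the first two sample L-moments can be computed from probability-weighted moments:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments (PWMs)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()                            # PWM beta_0
    b1 = np.sum((i - 1) / (n - 1) * x) / n   # PWM beta_1
    l1 = b0              # L-location (the mean)
    l2 = 2 * b1 - b0     # L-scale (half the Gini mean difference)
    return l1, l2
```

    Higher-order L-moment ratios (L-skewness, L-kurtosis) built the same way are what populate the moment-ratio diagram used for distribution selection.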

  7. Evidence of Intertube Excitons Observed in the Raman Resonance Excitation Profiles of (6,5)-Enriched SWCNT Bundles

    NASA Astrophysics Data System (ADS)

    Simpson, J. R.; Hight Walker, A. R.; Roslyak, O.; Haroz, E.; Telg, H.; Duque, J. G.; Crochet, J. J.; Piryatinkski, A.; Doorn, S. K.

    Understanding the photophysics of exciton behavior in single wall carbon nanotube (SWCNT) bundles remains important for opto-electronic device applications. We report resonance Raman spectroscopy (RRS) measurements on (6,5)-enriched SWCNTs, dispersed in aqueous solutions and separated using density gradient ultracentrifugation into fractions of increasing bundle size. Near-IR to UV absorption spectroscopy demonstrates a redshift and broadening of the main excitonic transitions with bundling. A continuously tunable dye laser coupled to a triple-grating spectrometer affords measurement of Raman resonance excitation profiles (REPs) over a range of wavelengths, (505 to 585) nm, covering the (6,5) E22S excitation. REPs of both the radial breathing mode (RBM) and GLO+ reveal a redshifting and broadening of the (6,5) E22S transition energy with increasing bundle size. Most interestingly, we observe an additional peak in both the RBM and GLO+ REPs of bundled SWCNTs, which is shifted lower in energy than the main E22S and is anomalously narrow. We attribute this additional peak to a transverse, intertube exciton.

  8. Fundamental Limits of Delay and Security in Device-to-Device Communication

    DTIC Science & Technology

    2013-01-01

    systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD...file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal...The erasure MD setup is then used to propose a

  9. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    The Luby Transform (LT) code is the first practical implementation of a digital fountain code. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions are suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with an LT code with the robust soliton distribution, the proposed algorithm noticeably improves the final quality of recovered images with the same overhead.
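    For context, the baseline the paper compares against, Luby's robust soliton distribution, can be tabulated as below. This is a generic sketch from the LT-code literature, not the paper's optimized distribution; the parameter names c and delta are the conventional ones and their defaults here are illustrative assumptions.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code over k input symbols.
    Returns p[0..k], where p[d] is the probability of choosing degree d (p[0] = 0)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component rho(d)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Spike component tau(d), concentrated around d = k/s
    tau = [0.0] * (k + 1)
    pivot = int(round(k / s))
    for d in range(1, min(pivot, k + 1)):
        tau[d] = s / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)            # normalization constant
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]
```

    An encoder would sample a degree d from this table for each codeword, then XOR d randomly chosen source symbols.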

  10. Fatty Acid Synthase Cooperates with Glyoxalase 1 to Protect against Sugar Toxicity

    PubMed Central

    Garrido, Damien; Rubin, Thomas; Poidevin, Mickael; Maroni, Brigitte; Le Rouzic, Arnaud; Parvy, Jean-Philippe; Montagne, Jacques

    2015-01-01

    Fatty acid (FA) metabolism is deregulated in several human diseases including metabolic syndrome, type 2 diabetes and cancers. Therefore, FA-metabolic enzymes are potential targets for drug therapy, although the consequences of these treatments must be precisely evaluated at the organismal and cellular levels. In a healthy organism, synthesis of triacylglycerols (TAGs)—composed of three FA units esterified to a glycerol backbone—is increased in response to dietary sugar. Saturation in the storage and synthesis capacity of TAGs is associated with type 2 diabetes progression. Sugar toxicity likely depends on advanced glycation end-products (AGEs) that form through covalent bonding between amine groups and the carbonyl groups of sugars or their α-oxoaldehyde derivatives. Methylglyoxal (MG) is a highly reactive α-oxoaldehyde that is derived from glycolysis through a non-enzymatic reaction. Glyoxalase 1 (Glo1) works to neutralize MG, reducing its deleterious effects. Here, we have used the power of Drosophila genetics to generate Fatty acid synthase (FASN) mutants, allowing us to investigate the consequences of this deficiency upon sugar-supplemented diets. We found that FASN mutants are lethal but can be rescued by an appropriate lipid diet. Rescued animals do not exhibit insulin resistance, are dramatically sensitive to dietary sugar and accumulate AGEs. We show that FASN and Glo1 cooperate at systemic and cell-autonomous levels to protect against sugar toxicity. We observed that the size of FASN mutant cells decreases as dietary sucrose increases. Genetic interactions at the cell-autonomous level, where glycolytic enzymes or Glo1 were manipulated in FASN mutant cells, revealed that this sugar-dependent size reduction is a direct consequence of MG-derived-AGE accumulation. In summary, our findings indicate that FASN is dispensable for cell growth if extracellular lipids are available. In contrast, FA synthesis appears to be required to limit a cell-autonomous accumulation of MG-derived AGEs, supporting the notion that MG is the most deleterious α-oxoaldehyde at the intracellular level. PMID:25692475

  11. Evaluation of medical treatments to increase survival of ebullism in guinea pigs

    NASA Technical Reports Server (NTRS)

    Stegmann, Barbara J.; Pilmanis, Andrew A.; Wolf, E. G.; Derion, Toniann; Fanton, J. W.; Davis, H.; Kemper, G. B.; Scoggins, Terrell E.

    1993-01-01

    Spaceflight carries a constant risk of exposure to vacuum. Above 63,000 ft (47 mmHg), the ambient pressure falls below the vapor pressure of water at 37 C, and tissue vaporization (ebullism) begins. Little is known about appropriate resuscitative protocols after such an ebullism exposure. This study identified injury patterns and mortality rates associated with ebullism while verifying the effectiveness of traditional pulmonary resuscitative techniques. Male Hartley guinea pigs were exposed to 87,000 ft for periods of 40 to 115 sec. After descent, those animals that did not breathe spontaneously were given artificial ventilation by bag and mask for up to 15 minutes. The surviving animals were randomly assigned to one of three treatment groups: hyperbaric oxygen (HBO), ground-level oxygen (GLO2), and ground-level air (GLAIR). The HBO group was treated on a standard treatment table 6A while the GLO2 animals received O2 for an equivalent length of time. The animals in the GLAIR group were observed only. All surviving animals were humanely sacrificed at 48 hours. Inflation of the animals' lungs after the exposure was found to be difficult and, at times, impossible. This may be due to surfactant disruption at the alveolar lining. Electron microscopy identified a disruption of the surfactant layer in animals that did not survive the initial exposure. Mortality was found to increase with exposure time: 40 sec--0 percent; 60 sec--6 percent; 70 sec--40 percent; 80 sec--13 percent; 100 sec--38 percent; 110 sec--40 percent; and 115 sec--100 percent. There was no difference in delayed mortality among the treatment groups (HBO--15 percent, GLO2--11 percent, GLAIR--11 percent). However, since resuscitation was ineffective, the effectiveness of any post-exposure treatment was severely limited. Preliminary results indicate that resuscitation of guinea pigs following ebullism exposure is difficult, and that current techniques (such as traditional CPR) may not be appropriate.

  12. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
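    A small concrete instance of an architecture-independent, source-level optimization of the kind the survey describes is constant folding: arithmetic on constant operands is evaluated once at compile time. This toy Python-AST version is purely illustrative and is not drawn from the paper.

```python
import ast
import operator

# Arithmetic operators the folder understands.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstantFolder(ast.NodeTransformer):
    """Fold arithmetic on constant operands, bottom-up."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold_constants(expr):
    """Return the expression string with constant subexpressions evaluated."""
    tree = ConstantFolder().visit(ast.parse(expr, mode="eval"))
    return ast.unparse(ast.fix_missing_locations(tree))
```

    Because the transformation inspects only the program's own structure, it is valid on any target machine, which is exactly what distinguishes the survey's third category.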

  13. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems that can be handled efficiently.

  14. Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity

    PubMed Central

    Lee, Hui Sun; Im, Wonpil

    2013-01-01

    Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA) because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement in prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286

  15. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
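    For contrast with the nonorthogonal transforms the paper designs, here is a minimal sketch of the baseline notion of energy compaction it generalizes: the orthogonal Karhunen-Loeve transform (KLT) applied to a synthetic correlated source. The source model, sample size, and correlation coefficient are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Correlated 2-D Gaussian source (correlation ~0.9 between the two components).
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal(n)
X = np.stack([x1, x2])                 # shape (2, n)

# KLT: eigenvectors of the sample covariance form the orthogonal transform
# that decorrelates the coefficients and compacts energy for this source.
C = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(C)
Y = eigvecs.T @ X                      # transform coefficients

energy = np.var(Y, axis=1)
compaction = energy.max() / energy.sum()   # fraction of energy in one coefficient
```

    With correlation 0.9, roughly 95% of the source energy lands in a single transform coefficient, which is why a scalar quantizer plus entropy coder spends most of its bits on that coefficient.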

  16. User's manual for the BNW-I optimization code for dry-cooled power plants. Volume III. [PLCIRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

    This appendix to User's Manual for the BNW-1 Optimization Code for Dry-Cooled Power Plants provides a listing of the BNW-I optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using a plastic tube cooling surface and circular tower arrangement of the tube bundles. (LCL)

  17. Numerical optimization of perturbative coils for tokamaks

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team

    2014-10-01

    Numerical optimization of coils which apply three-dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of the toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed-boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectra which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free-boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.

  18. pGLO Mutagenesis: A Laboratory Procedure in Molecular Biology for Biology Students

    ERIC Educational Resources Information Center

    Bassiri, Eby A.

    2011-01-01

    A five-session laboratory project was designed to familiarize or increase the laboratory proficiency of biology students and others with techniques and instruments commonly used in molecular biology research laboratories and industries. In this project, the EZ-Tn5 transposon is used to generate and screen a large number of cells transformed with…

  19. Natural Presettlement Features of the Ashley County, Arkansas Area

    Treesearch

    Don C. Bragg

    2003-01-01

    The General Land Office (GLO) survey records of the Ashley County, Arkansas, area were analyzed for natural attributes including forest composition and structure, prairie communities and aquatic and geomorphological features. Almost 13,000 witness trees from at least 23 families were extracted from the surveys. Most (68% of the total) witness trees were black oak (

  20. Patterns of Oak Dominance in the Eastern Ouachita Mountains Suggested by Early Records

    Treesearch

    Don C. Bragg

    2004-01-01

    Many years of human influence across the Interior Highlands have caused profound changes in forest composition, disturbance regimes, and understory dynamics. However, information on the historical condition of these forests is limited. General Land Office (GLO) records, old documents, and contemporary studies provided data on the township encompassing the Lake Winona...

  1. Historical and contemporary environmental context for the Saline-Fifteen site (3BR119)

    Treesearch

    Don C. Bragg; Hope A. Bragg

    2016-01-01

    This paper summarizes the historical environmental context of the Saline-Fifteen site (3BR119) in Bradley County, Arkansas, developed from the General Land Office (GLO) public land surveys, other old documents, and an examination of current forest inventories and modern research to approximate past environmental attributes for this locality. While an imperfect source...

  2. Rocketdyne/Westinghouse nuclear thermal rocket engine modeling

    NASA Technical Reports Server (NTRS)

    Glass, James F.

    1993-01-01

    The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; Rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.

  3. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.

  4. Sentiments Analysis of Reviews Based on ARCNN Model

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao

    2017-10-01

    The sentiment analysis of product reviews is designed to help customers understand the status of a product. The traditional method of sentiment analysis relies on the input of a fixed-length feature vector, which is the performance bottleneck of the basic encoder-decoder architecture. In this paper, we propose an attention mechanism with a BRNN-CNN model, referred to as the ARCNN model. To analyze the semantic relations between words and avoid the curse of dimensionality, we use the GloVe algorithm to train the vector representations for words. Then, the ARCNN model is proposed to deal with the problem of deep feature training. Specifically, the BRNN model is proposed to handle non-fixed-length vectors and preserve time-series information, while the CNN can learn deeper semantic connections. Moreover, the attention mechanism can automatically learn from the data and optimize the allocation of weights. Finally, a softmax classifier is designed to complete the sentiment classification of reviews. Experiments show that the proposed method can improve the accuracy of sentiment classification compared with benchmark methods.
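    The GloVe component the model relies on can be sketched as follows. The weighting function and the weighted least-squares term follow Pennington et al.'s published formulation; the default x_max and alpha values are the commonly cited ones and are assumptions here, not values taken from this abstract.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe co-occurrence weighting f(X_ij): discounts rare pairs,
    saturates at 1 for frequent ones."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_term(w_i, w_j, b_i, b_j, x_ij):
    """One term of the GloVe objective: weighted squared error between the
    dot product of the word vectors (plus biases) and log co-occurrence."""
    return float(glove_weight(x_ij) * (w_i @ w_j + b_i + b_j - np.log(x_ij)) ** 2)
```

    Training minimizes the sum of such terms over all nonzero co-occurrence counts, typically by AdaGrad over the vectors and biases.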

  5. Application of Hydrometeorological Information for Short-term and Long-term Water Resources Management over Ungauged Basin in Korea

    NASA Astrophysics Data System (ADS)

    Kim, Ji-in; Ryu, Kyongsik; Suh, Ae-sook

    2016-04-01

    In 2014, three major governmental organizations, the Korea Meteorological Administration (KMA), K-water, and the Korea Rural Community Corporation, established the Hydrometeorological Cooperation Center (HCC) to accomplish more effective water management for scarcely gauged river basins, where data are uncertain or inconsistent. To support optimal drought and flood control over ungauged rivers, the HCC aims to interconnect weather observations, forecasting information, and hydrological models over sparse regions with limited observation sites in the Korean peninsula. In this study, the long-term ensemble forecasting model Global Seasonal forecast system version 5 (GloSea5), a high-resolution seasonal forecast system provided by KMA, was used to produce a drought outlook. GloSea5 ensemble predictions provide drought information for 1 and 3 months ahead with drought indices including the Standardized Precipitation Index (SPI3) and the Palmer Drought Severity Index (PDSI). Also, Global Precipitation Measurement and Global Climate Observation Measurement - Water1 satellite data products are used to estimate rainfall and soil moisture contents over the ungauged region.

  6. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  7. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.

  8. Optimizing fusion PIC code performance at scale on Cori Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, T. S.; Deslippe, J.

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  9. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.

  10. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
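    One classic data reorganization of this kind, sketched here in NumPy rather than the paper's Fortran, is sorting particles by cell index so that the charge-deposit and field-gather loops touch grid memory contiguously. This is a generic illustration of the technique, not the authors' code.

```python
import numpy as np

def reorder_by_cell(pos, ncells):
    """Sort particle arrays by cell index so deposit/gather loops walk the
    grid arrays contiguously (better cache reuse). pos holds positions in
    [0, 1); a stable sort keeps the relative order within each cell."""
    cell = np.floor(pos * ncells).astype(int)
    order = np.argsort(cell, kind="stable")
    return pos[order], cell[order]
```

    After the sort, consecutive particles deposit into the same or adjacent grid cells, so the grid data stays resident in cache across iterations instead of being evicted between random accesses.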

  11. Optimal aggregation of binary classifiers for multiclass cancer diagnosis using gene expression profiles.

    PubMed

    Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin

    2009-01-01

    Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (1-1), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
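    The aggregation step can be sketched as a weighted vote over binary-classifier scores. The matrix shape, the toy scores, and the function name below are illustrative assumptions; the paper's actual procedure for tuning the weights from observed data is not reproduced here.

```python
import numpy as np

def aggregate(scores, weights):
    """Predict a class from weighted binary-classifier scores.

    scores[m][c]: classifier m's score (vote) for class c, 0 where classifier
                  m does not cover class c; weights[m]: its tuned weight.
    """
    combined = np.asarray(weights, dtype=float) @ np.asarray(scores, dtype=float)
    return int(np.argmax(combined))
```

    With all weights equal this reduces to plain voting; tuning the weights lets the ensemble interpolate between 1R, 1-1, and other codings instead of committing to one in advance.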

  12. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
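    On a single axis, maximizing the minimum separation between a fixed number of code levels over a range reduces to even spacing; a minimal sketch (the function name is hypothetical, and the paper distributes levels in three-dimensional CIE Lab space rather than on one axis):

    ```python
    def spread_levels(lo, hi, k):
        # Evenly spaced levels maximize the minimum pairwise distance on
        # one axis; robustness to device noise grows with that distance.
        step = (hi - lo) / (k - 1)
        return [lo + i * step for i in range(k)]
    ```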

  13. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of hybrid scalable/multiple-description coding for wireless transmission.
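    Because the source and channel rate sets are discrete, the allocation is a small search problem; a hypothetical sketch (the quality table and names are invented for illustration, not taken from the paper):

    ```python
    def best_allocation(quality, budget):
        """Pick the (source_rate, channel_rate) pair with the highest
        expected quality subject to a total bitrate budget.

        quality: dict mapping (source_rate, channel_rate) -> expected
        end-to-end quality (e.g. PSNR) for the current channel state.
        """
        feasible = {k: q for k, q in quality.items() if k[0] + k[1] <= budget}
        return max(feasible, key=feasible.get)
    ```

    In practice the expected-quality table would itself be computed from the channel statistics and the codec's rate-distortion behavior.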

  14. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during development, largely because of improper use and incomplete understanding of the cache-based memory system. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these optimization methods are applied. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
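    Much of such cache guidance reduces to matching traversal order to memory layout; a language-agnostic sketch in Python (illustrative only; on a DSP the same principle applies to C loop ordering):

    ```python
    def sum_rowmajor(a):
        # contiguous traversal for row-major storage: consecutive
        # accesses tend to hit the same cache line
        return sum(v for row in a for v in row)

    def sum_colmajor(a):
        # strided traversal: each access may touch a different cache line
        n, m = len(a), len(a[0])
        return sum(a[i][j] for j in range(m) for i in range(n))
    ```

    Both functions return the same sum; only the access order, and hence the cache behavior on real hardware, differs.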

  15. Air Force Office of Scientific Research 1991 Research Highlights

    DTIC Science & Technology

    1991-01-01

    This report highlights Air Force Office of Scientific Research (AFOSR) activities, including research programs totaling nearly $300 million annually across Air Force laboratories and research centers, the transitioning of research accomplishments for Air Force use, and the maintenance of a strong research infrastructure.

  16. Checklist of Major Plant Species in Ashley County, Arkansas Noted by General Land Office Surveyors

    Treesearch

    Don C. Bragg

    2002-01-01

    The original General Land Office (GLO) survey notes for the Ashley County, Arkansas, area were examined to determine the plant taxa mentioned during the 1818 to 1855 surveys. While some challenges in identifying species were encountered, at least 39 families and approximately 100 species were identified with reasonable certainty. Most references were for trees used to...

  17. General Land Office Surveys as a Source for Arkansas History: The Example of Ashley County

    Treesearch

    Don C. Bragg

    2004-01-01

    Deputy surveyor Caleb Langtree's rather bleak assessment of a landscape in southern Arkansas captures the struggle that was the General Land Office (GLO) survey. Charged with laying the foundation for settlement of territories ceded to the nation, the surveyors that traversed the public domain of the United States in the eighteenth and nineteenth centuries toiled...

  18. Evaluating a new method for reconstructing forest conditions from General Land Office survey records

    Treesearch

    Carrie R. Levine; Charles V. Cogbill; Brandon M. Collins; Andrew J. Larson; James A. Lutz; Malcolm P. North; Christina M. Restaino; Hugh D. Safford; Scott L. Stephens; John J. Battles

    2017-01-01

    Historical forest conditions are often used to inform contemporary management goals because historical forests are considered to be resilient to ecological disturbances. The General Land Office (GLO) surveys of the late 19th and early 20th centuries provide regionally quasi-contiguous data sets of historical forests across much of the Western United States....

  19. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically- generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
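    The forward-mode chain rule that ADIFOR applies can be illustrated with a minimal dual-number class (a sketch of the idea only, not the ADIFOR/ADJIFOR tooling). Reverse mode, by contrast, records the computation and replays the chain rule backwards, which is why the full gradient costs only a small multiple of one function evaluation regardless of the number of design variables.

    ```python
    class Dual:
        """Forward-mode AD value: carries f(x) and df/dx together."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def _lift(self, other):
            return other if isinstance(other, Dual) else Dual(other)

        def __add__(self, other):
            o = self._lift(other)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__

        def __mul__(self, other):
            o = self._lift(other)
            # product rule: (uv)' = u'v + uv'
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def f(x):
        return 3.0 * x + x * x   # f'(x) = 3 + 2x

    y = f(Dual(2.0, 1.0))        # seed dx/dx = 1
    ```

    Evaluating f on a seeded Dual yields both the value f(2) = 10 and the derivative f'(2) = 7 in a single pass.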

  20. The formation of argpyrimidine, a methylglyoxal-arginine adduct, in the nucleus of neural cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakadate, Yusuke; Uchida, Koji; Shikata, Keiji

    2009-01-09

    Methylglyoxal (MG) is an endogenous metabolite of glycolysis and forms stable adducts primarily with arginine residues of intracellular proteins. The biological role of this modification in cell function is not known. In the present study, we found that the MG-detoxification enzyme glyoxalase I (GLO1) is mainly expressed at embryonic day 16 in the ventricular zone (VZ), where neural stem and progenitor cells localize. Moreover, immunohistochemical analysis revealed that argpyrimidine, a major MG-arginine adduct, is predominantly produced in cortical plate neurons, not the VZ, during cerebral cortex development, and is exclusively located in the nucleus. Immunoblotting experiments showed that argpyrimidine forms on some nuclear proteins of cortical neurons. To our knowledge, this is the first report of argpyrimidine formation in the nucleus of neurons. These findings suggest that GLO1, which is dominantly expressed in the embryonic VZ, reduces the intracellular level of MG and suppresses the formation of argpyrimidine in neural stem and progenitor cells. Argpyrimidine may contribute to neural differentiation and/or the maintenance of the differentiated state via the modification of nuclear proteins.

  1. Regional analysis of annual maximum rainfall using TL-moments method

    NASA Astrophysics Data System (ADS)

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd

    2011-06-01

    Information related to the distribution of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the choice of probability distribution for modeling regional data. In this study, regional frequency analysis using L-moments is revisited, an alternative analysis using the TL-moments method is then employed, and the results from both methods are compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and the Z-test were employed to determine the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation; it showed that the TL-moments method was more efficient for lower-quantile estimation than the L-moments method.
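    For reference, the first two sample L-moments can be computed from probability-weighted moments; this sketch shows the untrimmed case that the TL-moments used in the paper generalize by trimming extreme order statistics:

    ```python
    def sample_lmoments(data):
        """First two sample L-moments via probability-weighted moments:
        l1 = b0 (the mean) and l2 = 2*b1 - b0."""
        x = sorted(data)
        n = len(x)
        b0 = sum(x) / n
        # b1 weights the i-th order statistic by (i-1)/(n-1), i = 1..n
        b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
        return b0, 2.0 * b1 - b0
    ```

    For the sample {1, 2, 3} this gives l1 = 2 and l2 = 2/3, matching the pairwise definition of the second L-moment.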

  2. Comparison of various staining methods for the detection of Cryptosporidium in cell-free culture.

    PubMed

    Boxell, Annika; Hijjawi, Nawal; Monis, Paul; Ryan, Una

    2008-09-01

    The complete development of Cryptosporidium in host cell-free medium, first described in 2004, represented a significant advance that can facilitate many aspects of Cryptosporidium research. A current limitation of host cell-free cultivation is the difficulty involved in visualising the life-cycle stages, as they are very small in size, morphologically difficult to identify and dispersed throughout the media. This is in contrast to conventional cell culture methods for Cryptosporidium, where it is possible to focus on the host cells and view the foci of infection on them. In the present study, we compared three specific and three non-specific techniques for visualising Cryptosporidium parvum life-cycle stages in cell-free culture: antibody staining using anti-sporozoite and anti-oocyst wall antibodies (Sporo-Glo and Crypto Cel), fluorescent in-situ hybridization (FISH) using a Cryptosporidium-specific rRNA oligonucleotide probe, and the non-specific dyes Texas Red, carboxyfluorescein diacetate succinimidyl ester (CFSE) and 4',6-diamidino-2-phenylindole dihydrochloride (DAPI). Results revealed that a combination of Sporo-Glo and Crypto Cel staining allowed easy and reliable identification of all life-cycle stages.

  3. Global Cryptosporidium Loads from Livestock Manure.

    PubMed

    Vermeulen, Lucie C; Benders, Jorien; Medema, Gertjan; Hofstra, Nynke

    2017-08-01

    Understanding the environmental pathways of Cryptosporidium is essential for effective management of human and animal cryptosporidiosis. In this paper we aim to quantify livestock Cryptosporidium spp. loads to land on a global scale using spatially explicit process-based modeling, and to explore the effect of manure storage and treatment on oocyst loads using scenario analysis. Our model GloWPa-Crypto L1 calculates a total global Cryptosporidium spp. load from livestock manure of 3.2 × 10^23 oocysts per year. Cattle, especially calves, are the largest contributors, followed by chickens and pigs. Spatial differences are linked to animal spatial distributions. North America, Europe, and Oceania together account for nearly a quarter of the total oocyst load, meaning that the developing world accounts for the largest share. GloWPa-Crypto L1 is most sensitive to oocyst excretion rates, due to the large variation reported in the literature. We compared the current situation to four alternative management scenarios. We find that although manure storage halves oocyst loads, manure treatment, especially of cattle manure and particularly at elevated temperatures, has a larger load reduction potential than manure storage (up to 4.6 log units). Regions with high reduction potential include India, Bangladesh, western Europe, China, several countries in Africa, and New Zealand.

  4. Thermal control paints on LDEF: Results of M0003 sub-experiment 18

    NASA Technical Reports Server (NTRS)

    Jaggers, C. H.; Meshishnek, M. J.; Coggi, J. M.

    1993-01-01

    Several thermal control paints were flown on the Long Duration Exposure Facility (LDEF), including the white paints Chemglaze A276, S13GLO, and YB-71, and the black paint D-111. The effects of low Earth orbit, including those induced by UV radiation and atomic oxygen, varied significantly with each paint and its location on LDEF. For example, samples of Chemglaze A276 located on the trailing edge of LDEF darkened significantly due to UV-induced degradation of the paint's binder, while leading edge samples remained white but exhibited severe atomic oxygen erosion of the binder. Although the response of S13GLO to low Earth orbit is much more complicated, it also exhibited greater darkening on trailing edge samples than on leading edge samples. In contrast, YB-71 and D-111 remained relatively stable and showed minimal degradation. The performance of these paints, as determined by changes in their optical and physical properties (including solar absorptance), surface chemistry, and surface morphology, is examined, and these changes are correlated with the physical phenomena that occurred in the materials during the LDEF mission.

  5. Multiple interactions amongst floral homeotic MADS box proteins.

    PubMed Central

    Davies, B; Egea-Cortines, M; de Andrade Silva, E; Saedler, H; Sommer, H

    1996-01-01

    Most known floral homeotic genes belong to the MADS box family and their products act in combination to specify floral organ identity by an unknown mechanism. We have used a yeast two-hybrid system to investigate the network of interactions between the Antirrhinum organ identity gene products. Selective heterodimerization is observed between MADS box factors. Exclusive interactions are detected between two factors, DEFICIENS (DEF) and GLOBOSA (GLO), previously known to heterodimerize and control development of petals and stamens. In contrast, a third factor, PLENA (PLE), which is required for reproductive organ development, can interact with the products of MADS box genes expressed at early, intermediate and late stages. We also demonstrate that heterodimerization of DEF and GLO requires the K box, a domain not found in non-plant MADS box factors, indicating that the plant MADS box factors may have different criteria for interaction. The association of PLENA and the temporally intermediate MADS box factors suggests that part of their function in mediating between the meristem and organ identity genes is accomplished through direct interaction. These data reveal an unexpectedly complex network of interactions between the factors controlling flower development and have implications for the determination of organ identity. PMID:8861961

  6. Multicolor bleach-rate imaging enlightens in vivo sterol transport

    PubMed Central

    Sage, Daniel

    2010-01-01

    Elucidation of in vivo cholesterol transport and its aberrations in cardiovascular diseases requires suitable model organisms and the development of appropriate monitoring technology. We recently presented a new approach to visualize transport of the intrinsically fluorescent sterol dehydroergosterol (DHE) in the genetically tractable model organism Caenorhabditis elegans (C. elegans). DHE is structurally very similar to cholesterol and ergosterol, two sterols used by the sterol-auxotroph nematode. We developed a new computational method measuring fluorophore bleaching kinetics at every pixel position, which can be used as a fingerprint to distinguish rapidly bleaching DHE from slowly bleaching autofluorescence in the animals. Here, we introduce multicolor bleach-rate sterol imaging. By this method, we demonstrate that some DHE is targeted to a population of basolateral recycling endosomes (RE) labelled with GFP-tagged RME-1 (GFP-RME-1) in the intestine of both wild-type nematodes and mutant animals lacking intestinal gut granules (glo-1 mutants). DHE-enriched intestinal organelles of glo-1 mutants were decorated with GFP-RME-8, a marker for early endosomes. No co-localization was found with a lysosomal marker, GFP-LMP-1. Our new methods hold great promise for further studies on endosomal sterol transport in C. elegans. PMID:20798830

  7. N2 triplet band systems and atomic oxygen in the dayglow

    NASA Astrophysics Data System (ADS)

    Broadfoot, A. L.; Hatfield, D. B.; Anderson, E. R.; Stone, T. C.; Sandel, B. R.; Gardner, J. A.; Murad, E.; Knecht, D. J.; Pike, C. P.; Viereck, R. A.

    1997-06-01

    New spectrographic observations of the Earth's dayglow have been acquired by the Arizona Airglow Experiment (GLO) flown on the space shuttle. GLO is an imaging spectrograph that records simultaneous vertical profiles of prominent Earth limb emissions occurring at wavelengths between 115 and 900 nm. This study addresses the measured emissions from the N2 triplet states (first positive, second positive, and Vegard-Kaplan band systems) and their excitation by the local photoelectron flux. The triplet state population distributions modeled for aurora by Cartwright [1978] are modified for dayglow conditions by changing to a photoelectron-flux energy distribution and including resonance scattering by the first positive system. Modeled and observed intensities are in excellent agreement, in contrast to the well-studied auroral case. This work concentrates on dayglow conditions at 200 km altitude near the subsolar point. Parameters to infer the local photoelectron flux from the emission band intensities are provided. Several atomic oxygen dayglow emission features were analyzed to complement the N2 analysis. The photoelectron-excited O I(135.6, 777.4 nm) lines were found to be 3 to 4 times weaker than predicted while the O I(630.0, 844.6 nm) lines were in close agreement with the model prediction.

  8. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  9. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing, and the relations between the aeroelastic properties of these new large turbines are changing. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily, but their calculation times are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
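    A miniature Python analogue of the code-generating approach (purely illustrative; VIDYN's equations are derived with Mathematica and emitted as Fortran) specializes a source-code template to one model's parameters and compiles it once, instead of interpreting a general model at every time step:

    ```python
    def generate_accel(mass, stiffness):
        # Emit and compile a function specialized to one "turbine model":
        # the model parameters are baked into the generated source text.
        src = (f"def accel(x, force):\n"
               f"    return (force - {stiffness} * x) / {mass}\n")
        namespace = {}
        exec(src, namespace)
        return namespace["accel"]
    ```

    The generated function contains only the arithmetic needed for that specific model, which is the source of the speedup the abstract describes.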

  10. Shared prefetching to reduce execution skew in multi-threaded systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; Gunnels, John A

    Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.

  11. Aeroelastic Tailoring Study of N+2 Low-Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2015-01-01

    The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft is optimized in this study through the use of a multidisciplinary design optimization tool developed at the NASA Armstrong Flight Research Center. A total of 111 design variables are used in the first optimization run, with total structural weight as the objective function and design requirements for strength, buckling, and flutter as the constraints. The MSC Nastran code is used to obtain the modal, strength, and buckling characteristics; flutter and trim analyses are based on the ZAERO code; and landing and ground control loads are computed using an in-house code.

  12. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation-shield-weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield-thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction depend only on thicknesses in that direction. Code input instructions, a FORTRAN IV listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
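    The core idea, steepest descent on shield weight under an exponential dose-thickness relation, can be sketched on a toy two-layer case. Everything below is invented for illustration (the densities, attenuation coefficients, and the quadratic-penalty scheme are not DOPEX's own):

    ```python
    import math

    RHO = [7.8, 1.0]         # weight per unit thickness of each layer
    MU = [0.5, 0.2]          # attenuation coefficient of each layer
    D0, DMAX = 100.0, 1.0    # unshielded dose and dose limit

    def dose(t):
        # exponential dose-thickness relation
        return D0 * math.exp(-sum(m * ti for m, ti in zip(MU, t)))

    def weight(t):
        return sum(r * ti for r, ti in zip(RHO, t))

    def optimize(t, step=1e-3, iters=30000, penalty=1e3):
        # steepest descent on weight + penalty * (log-dose violation)^2,
        # with thicknesses clipped at zero
        need = math.log(D0 / DMAX)   # required value of sum(mu_i * t_i)
        for _ in range(iters):
            viol = max(0.0, need - sum(m * ti for m, ti in zip(MU, t)))
            t = [max(0.0, ti - step * (r - 2.0 * penalty * viol * m))
                 for ti, r, m in zip(t, RHO, MU)]
        return t
    ```

    The descent drives all the thickness into the layer with the best weight-to-attenuation ratio while the penalty holds the dose at its limit.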

  13. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general-purpose optimization program, which allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion, so that they could later be combined to create a multiple-reduction optimization capability.

  14. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.

  15. Upland Hardwood Forests and Related Communities of the Arkansas Ozarks in the Early 19th Century

    Treesearch

    Thomas L. Foti

    2004-01-01

    Historic accounts of the 19th-century Arkansas Ozarks mention such communities as oak forests, pine forests, barrens and prairies. I document the region-wide distribution of these types based on data from the first land survey conducted by the General Land Office (GLO). Structural classes used here include closed forest, open forest, woodland, savanna, open savanna...

  16. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of a space code for an intensity-modulated direct-detection multi-input multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which the channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale and small-scale diversity gains, and we prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all high-dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which was previously the best space code available for such systems.

  17. Selective visualization of fluorescent sterols in Caenorhabditis elegans by bleach-rate-based image segmentation.

    PubMed

    Wüstner, Daniel; Landt Larsen, Ane; Faergeman, Nils J; Brewer, Jonathan R; Sage, Daniel

    2010-04-01

    The nematode Caenorhabditis elegans is a genetically tractable model organism to investigate sterol transport. In vivo imaging of the fluorescent sterol, dehydroergosterol (DHE), is challenged by C. elegans' high autofluorescence in the same spectral region as the emission of DHE. We present a method to detect DHE selectively, based on its rapid bleaching kinetics compared to cellular autofluorescence. Worms were repeatedly imaged on an ultraviolet-sensitive wide field (UV-WF) microscope, and bleaching kinetics of DHE were fitted on a pixel basis to mathematical models describing the intensity decay. Bleach-rate constants were determined for DHE in vivo and confirmed in model membranes. Using this method, we could detect enrichment of DHE in specific tissues like the nerve ring, the spermateca and oocytes. We confirm these results in C. elegans gut-granule-loss (glo) mutants with reduced autofluorescence and compare our method with three-photon excitation microscopy of sterol in selected tissues. Bleach-rate-based UV-WF imaging is a useful tool for genetic screening experiments on sterol transport, as exemplified by RNA interference against the rme-2 gene, coding for the yolk receptor, and against worm homologues of Niemann-Pick C disease proteins. Our approach is generally useful for identifying fluorescent probes in the presence of high cellular autofluorescence.
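    The pixel-wise decay fit can be sketched as a log-linear least-squares slope per pixel (an illustrative mono-exponential version with hypothetical function names; the paper fits richer decay models):

    ```python
    import numpy as np

    def bleach_rates(stack, times):
        """Per-pixel bleach-rate map for an image series.

        stack : (T, H, W) array; I(t) assumed ~ I0 * exp(-k * t) per pixel.
        Fits log-intensity vs. time by least squares; returns k per pixel.
        """
        t = np.asarray(times, dtype=float)
        log_i = np.log(stack.reshape(len(t), -1))
        tc = t - t.mean()
        slopes = ((tc[:, None] * (log_i - log_i.mean(axis=0))).sum(axis=0)
                  / (tc ** 2).sum())
        return (-slopes).reshape(stack.shape[1:])

    def segment_probe(stack, times, k_threshold):
        # fast-bleaching pixels (probe) vs. slow ones (autofluorescence)
        return bleach_rates(stack, times) > k_threshold
    ```

    Thresholding the rate map then yields a binary mask of probe-containing pixels, mirroring the bleach-rate-based segmentation described above.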

  18. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. One way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code by exhaustive search in a reasonable time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) are one such promising approach. This class of methods relies on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of the algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code, each assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement with regard to their polarity. Our results indicate that the use of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without it. The optimal genetic codes without restrictions on their structure minimize the costs about 2.7 times better than the canonical genetic code. Interestingly, the optimal codes are dominated by amino acids whose polarity is close to the average value over all amino acids. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
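A position-based crossover of the kind adapted for the most restricted model can be sketched as follows. This is an illustrative reimplementation, not the authors' code: it assumes the restricted model in which a candidate code is a permutation assigning the 20 amino acids to fixed codon groups.

```python
import random

def position_based_crossover(parent1, parent2, rng):
    """Position-based crossover for permutation chromosomes.

    A random subset of positions keeps parent1's genes; the remaining
    genes are filled in the relative order they appear in parent2, so
    the child is always a valid permutation.
    """
    n = len(parent1)
    keep = [i for i in range(n) if rng.random() < 0.5]
    kept_genes = {parent1[i] for i in keep}
    child = [None] * n
    for i in keep:
        child[i] = parent1[i]
    filler = iter(g for g in parent2 if g not in kept_genes)
    for i in range(n):
        if child[i] is None:
            child[i] = next(filler)
    return child
```

In an EA run this would be paired with a mutation operator (e.g. swapping two amino acids) and a polarity-based replacement-cost fitness function.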

  19. Computer optimization of reactor-thermoelectric space power systems

    NASA Technical Reports Server (NTRS)

    Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.

    1973-01-01

    A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.

  20. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
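The "percentage of matches" idea behind such a ranking can be illustrated with a short sketch. The tooth codes and function names below are hypothetical, not the WinID3 or study implementation:

```python
def match_score(pm_record, am_record):
    """Fraction of mutually charted teeth whose codes agree.

    Records map tooth numbers to simplified condition codes
    (e.g. 'V' virgin, 'C' crown, 'M' missing -- hypothetical labels).
    """
    comparable = [tooth for tooth in pm_record if tooth in am_record]
    if not comparable:
        return 0.0
    hits = sum(pm_record[t] == am_record[t] for t in comparable)
    return hits / len(comparable)

def rank_antemortem(pm_record, am_database):
    """Return antemortem record IDs sorted from best to worst match."""
    return sorted(am_database,
                  key=lambda rid: match_score(pm_record, am_database[rid]),
                  reverse=True)
```

Scoring by the percentage of matches, rather than by weighted code-specific rules, is what keeps the simplified system fast over tens of thousands of records.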

  1. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  2. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.

  3. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  4. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  5. Trellis coding techniques for mobile communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.; Jedrey, T.

    1988-01-01

    A criterion for designing optimum trellis codes to be used over fading channels is given. A technique is shown for reducing certain multiple trellis codes, optimally designed for the fading channel, to conventional (i.e., multiplicity one) trellis codes. The computational cutoff rate R0 is evaluated for MPSK transmitted over fading channels. Examples of trellis codes optimally designed for the Rayleigh fading channel are given and compared with respect to R0. Two types of modulation/demodulation techniques are considered, namely coherent (using pilot tone-aided carrier recovery) and differentially coherent with Doppler frequency correction. Simulation results are given for end-to-end performance of two trellis-coded systems.

  6. Exploring the Affordances of the Writing Portal (TWP) as an Online Supplementary Writing Platform (For the Special Issue of GLoCALL 2013 and 2014 Conference Papers)

    ERIC Educational Resources Information Center

    Lee, Kean Wah; Said, Noraini; Tan, Choon Keong

    2016-01-01

    The writing process has traditionally been seen "as a lonely journey" to typify the lack of support that students experience for writing outside the classroom. This paper examines an attempt of The Writing Portal (TWP), a supplementary online writing platform, to support students' writing needs throughout the five stages of the writing…

  7. Inhibition of Th17 Cell Differentiation as a Treatment for Multiple Sclerosis

    DTIC Science & Technology

    2013-10-01

    luciferase reporter construct into the cells. This reporter construct allows for both measurement of the transfection efficiency by Renilla luciferase...miR326 (delivered either by lentivirus or cotransfection) should result in reduced Firefly luminescence, with no change in Renilla luminescence. The...using Lipofectamine. After 48 hours Dual Glo substrate was added to the cells and luciferase activity and Renilla Luciferase activity were measured

  8. Laboratory Evaluation of Light Obscuration Particle Counters used to Establish use Limits for Aviation Fuel

    DTIC Science & Technology

    2015-12-01

    evaluation The major drawback to light obscuration particle counting is that the technology is unable to differentiate between solid particulate ...light obscuration particle counter technologies evaluated were able to properly measure solid particulate contamination and provide an indication of...undissolved water, Aqua-Glo, Particulate, Gravimetric

  9. Targeting N-RAS as a Therapeutic Approach for Melanoma

    DTIC Science & Technology

    2013-10-01

    Labs (Woburn, MA), Sigma-Aldrich (St. Louis, MO) and Fisher Scientific (Pittsburgh, PA), respectively. Z-VAD-FMK was purchased from R&D Systems ...proliferation assays (MTS assay) and caspase assays were performed with CellTiter 96 AQueous Non-Radioactive Cell Proliferation Assay kit and Caspase-Glo 3...7 Assay Systems (Promega [Madison, WI]) according to the manufacturers’ protocols. Briefly, for the assays employing inhibitors, cells were plated

  10. A Comparison of Real-Time and Endpoint Cell Viability Assays for Improved Synthetic Lethal Drug Validation.

    PubMed

    Single, Andrew; Beetham, Henry; Telford, Bryony J; Guilford, Parry; Chen, Augustine

    2015-12-01

    Cell viability assays fulfill a central role in drug discovery studies. It is therefore important to understand the advantages and disadvantages of the wide variety of available assay methodologies. In this study, we compared the performance of three endpoint assays (resazurin reduction, CellTiter-Glo, and nuclei enumeration) and two real-time systems (IncuCyte and xCELLigence). Of the endpoint approaches, both the resazurin reduction and CellTiter-Glo assays showed higher cell viabilities when compared directly to stained nuclei counts. The IncuCyte and xCELLigence real-time systems were comparable, and both were particularly effective at tracking the effects of drug treatment on cell proliferation at sub-confluent growth. However, the real-time systems failed to evaluate contrasting cell densities between drug-treated and control-treated cells at full growth confluency. Here, we showed that using real-time systems in combination with endpoint assays alleviates the disadvantages posed by each approach alone, providing a more effective means to evaluate drug toxicity in monolayer cell cultures. Such approaches were shown to be effective in elucidating the toxicity of synthetic lethal drugs in an isogenic pair of MCF10A breast cell lines. © 2015 Society for Laboratory Automation and Screening.

  11. The effect of three whitening oral rinses on enamel micro-hardness.

    PubMed

    Potgieter, E; Osman, Y; Grobler, S R

    2014-05-01

    The purpose of this study was to determine the effect on human enamel micro-hardness of three over-the-counter whitening oral rinses available in South Africa. Enamel fragments were gathered into three groups of 15 each. One group was exposed to Colgate Plax Whitening Blancheur, the second group to White Glo 2 in 1 and the third to Plus White, in each case for the periods recommended by the respective manufacturers. Surface micro-hardness of all groups was measured before and after a 14-day treatment period. pH levels of the oral rinses were also determined with a combination pH electrode. Pre- and post-treatment data were analysed by the Wilcoxon signed-rank test. According to the micro-hardness values, no significant (p > 0.05) enamel damage was found as a result of treatment. However, it was observed that Colgate Plax and White Glo decreased the enamel hardness, an early sign of enamel damage, while Plus White showed a small increase in hardness. The three whitening oral rinses on the South African market do not damage the tooth enamel significantly when used as recommended by the manufacturers. However, extending the contact period and increasing the frequency of application might lead to damage of enamel.

  12. Global Cryptosporidium Loads from Livestock Manure

    PubMed Central

    2017-01-01

    Understanding the environmental pathways of Cryptosporidium is essential for effective management of human and animal cryptosporidiosis. In this paper we aim to quantify livestock Cryptosporidium spp. loads to land on a global scale using spatially explicit process-based modeling, and to explore the effect of manure storage and treatment on oocyst loads using scenario analysis. Our model GloWPa-Crypto L1 calculates a total global Cryptosporidium spp. load from livestock manure of 3.2 × 10²³ oocysts per year. Cattle, especially calves, are the largest contributors, followed by chickens and pigs. Spatial differences are linked to animal spatial distributions. North America, Europe, and Oceania together account for nearly a quarter of the total oocyst load, meaning that the developing world accounts for the largest share. GloWPa-Crypto L1 is most sensitive to oocyst excretion rates, due to large variation reported in literature. We compared the current situation to four alternative management scenarios. We find that although manure storage halves oocyst loads, manure treatment, especially of cattle manure and particularly at elevated temperatures, has a larger load reduction potential than manure storage (up to 4.6 log units). Regions with high reduction potential include India, Bangladesh, western Europe, China, several countries in Africa, and New Zealand. PMID:28654242
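The bookkeeping behind such load estimates — per-animal excretion rates multiplied by head counts, with treatment applied as a log10 reduction — can be sketched in a few lines. The numbers and structure below are illustrative only, not the GloWPa-Crypto L1 model itself:

```python
def oocyst_load(animal_counts, excretion_rates, log10_reduction=0.0):
    """Annual oocyst load: sum over species of (head count x oocysts
    excreted per head per year), reduced by manure treatment expressed
    in log10 units (e.g. 4.6 log units -> a factor of 10**-4.6)."""
    raw = sum(animal_counts[sp] * excretion_rates[sp] for sp in animal_counts)
    return raw * 10.0 ** (-log10_reduction)
```

The sensitivity noted in the abstract follows directly from this form: the total scales linearly with each species' excretion rate, so the wide range of reported rates dominates the uncertainty.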

  13. Eucommia ulmoides Ameliorates Glucotoxicity by Suppressing Advanced Glycation End-Products in Diabetic Mice Kidney

    PubMed Central

    Do, Moon Ho; Hur, Jinyoung; Choi, Jiwon; Kim, Mina; Kim, Min Jung; Kim, Yoonsook; Ha, Sang Keun

    2018-01-01

    Eucommia ulmoides Oliv. (EU), also known as Du-Zhong, is a medicinal herb commonly used in Asia to treat hypertension and diabetes. Despite evidence of the protective effects of EU against diabetes, its precise effects and mechanisms of action against advanced glycation end-products (AGEs) are unclear. In this study, we evaluated the effects of EU on AGEs-induced renal disease and explored the possible underlying mechanisms using streptozotocin (STZ)-induced diabetic mice. STZ-induced diabetic mice received EU extract (200 mg/kg) orally for 6 weeks. EU treatment did not change blood glucose and glycated hemoglobin (HbA1c) levels in diabetic mice. However, the EU-treated group showed a significant increase in the protein expression and activity of glyoxalase 1 (Glo1), which detoxifies the AGE precursor, methylglyoxal (MGO). EU significantly upregulated nuclear factor erythroid 2-related factor 2 (Nrf2) expression but downregulated that of receptor for AGE (RAGE). Furthermore, histological and immunohistochemical analyses of kidney tissue showed that EU reduced periodic acid–Schiff (PAS)-positive staining, AGEs, and MGO accumulation in diabetic mice. Based on these findings, we concluded that EU ameliorated the renal damage in diabetic mice by inhibiting AGEs formation and RAGE expression and reducing oxidative stress, through the Glo1 and Nrf2 pathways. PMID:29495397

  14. Eucommia ulmoides Ameliorates Glucotoxicity by Suppressing Advanced Glycation End-Products in Diabetic Mice Kidney.

    PubMed

    Do, Moon Ho; Hur, Jinyoung; Choi, Jiwon; Kim, Mina; Kim, Min Jung; Kim, Yoonsook; Ha, Sang Keun

    2018-02-26

    Eucommia ulmoides Oliv. (EU), also known as Du-Zhong, is a medicinal herb commonly used in Asia to treat hypertension and diabetes. Despite evidence of the protective effects of EU against diabetes, its precise effects and mechanisms of action against advanced glycation end-products (AGEs) are unclear. In this study, we evaluated the effects of EU on AGEs-induced renal disease and explored the possible underlying mechanisms using streptozotocin (STZ)-induced diabetic mice. STZ-induced diabetic mice received EU extract (200 mg/kg) orally for 6 weeks. EU treatment did not change blood glucose and glycated hemoglobin (HbA1c) levels in diabetic mice. However, the EU-treated group showed a significant increase in the protein expression and activity of glyoxalase 1 (Glo1), which detoxifies the AGE precursor, methylglyoxal (MGO). EU significantly upregulated nuclear factor erythroid 2-related factor 2 (Nrf2) expression but downregulated that of receptor for AGE (RAGE). Furthermore, histological and immunohistochemical analyses of kidney tissue showed that EU reduced periodic acid-Schiff (PAS)-positive staining, AGEs, and MGO accumulation in diabetic mice. Based on these findings, we concluded that EU ameliorated the renal damage in diabetic mice by inhibiting AGEs formation and RAGE expression and reducing oxidative stress, through the Glo1 and Nrf2 pathways.

  15. Retrospective forecasts of the upcoming winter season snow accumulation in the Inn headwaters (European Alps)

    NASA Astrophysics Data System (ADS)

    Förster, Kristian; Hanzer, Florian; Stoll, Elena; Scaife, Adam A.; MacLachlan, Craig; Schöber, Johannes; Huttenlau, Matthias; Achleitner, Stefan; Strasser, Ulrich

    2018-02-01

    This article presents analyses of retrospective seasonal forecasts of snow accumulation. Re-forecasts with 4 months' lead time from two coupled atmosphere-ocean general circulation models (NCEP CFSv2 and MetOffice GloSea5) drive the Alpine Water balance and Runoff Estimation model (AWARE) in order to predict mid-winter snow accumulation in the Inn headwaters. As snowpack is hydrological storage that evolves during the winter season, it is strongly dependent on precipitation totals of the previous months. Climate model (CM) predictions of precipitation totals integrated from November to February (NDJF) compare reasonably well with observations. Even though predictions for precipitation may not be significantly more skilful than for temperature, the predictive skill achieved for precipitation is retained in subsequent water balance simulations when snow water equivalent (SWE) in February is considered. Given the AWARE simulations driven by observed meteorological fields as a benchmark for SWE analyses, the correlation achieved using GloSea5-AWARE SWE predictions is r = 0.57. The tendency of SWE anomalies (i.e. the sign of anomalies) is correctly predicted in 11 of 13 years. For CFSv2-AWARE, the corresponding values are r = 0.28 and 7 of 13 years. The results suggest that some seasonal prediction of hydrological model storage tendencies in parts of Europe is possible.
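The two skill measures quoted above — a correlation against the benchmark simulation and a count of years with correctly predicted anomaly sign — are straightforward to compute. This is a generic sketch, not the study's code:

```python
import numpy as np

def seasonal_skill(predicted, benchmark):
    """Pearson correlation plus the number of years in which the sign of
    the anomaly (departure from the series mean) is predicted correctly."""
    predicted = np.asarray(predicted, dtype=float)
    benchmark = np.asarray(benchmark, dtype=float)
    r = np.corrcoef(predicted, benchmark)[0, 1]
    pred_sign = np.sign(predicted - predicted.mean())
    bench_sign = np.sign(benchmark - benchmark.mean())
    correct_sign = int(np.sum(pred_sign == bench_sign))
    return r, correct_sign
```

Applied to the GloSea5-AWARE SWE predictions, this kind of calculation yields the r = 0.57 and 11-of-13 figures reported above.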

  16. A Flexible Workflow for Automated Bioluminescent Kinase Selectivity Profiling.

    PubMed

    Worzella, Tracy; Butzler, Matt; Hennek, Jacquelyn; Hanson, Seth; Simdon, Laura; Goueli, Said; Cowan, Cris; Zegzouti, Hicham

    2017-04-01

    Kinase profiling during drug discovery is a necessary process to confirm inhibitor selectivity and assess off-target activities. However, cost and logistical limitations prevent profiling activities from being performed in-house. We describe the development of an automated and flexible kinase profiling workflow that combines ready-to-use kinase enzymes and substrates in convenient eight-tube strips, a bench-top liquid handling device, ADP-Glo Kinase Assay (Promega, Madison, WI) technology to quantify enzyme activity, and a multimode detection instrument. Automated methods were developed for kinase reactions and quantification reactions to be assembled on a Gilson (Middleton, WI) PIPETMAX, following standardized plate layouts for single- and multidose compound profiling. Pipetting protocols were customized at runtime based on user-provided information, including compound number, increment for compound titrations, and number of kinase families to use. After the automated liquid handling procedures, a GloMax Discover (Promega) microplate reader preloaded with SMART protocols was used for luminescence detection and automatic data analysis. The functionality of the automated workflow was evaluated with several compound-kinase combinations in single-dose or dose-response profiling formats. Known target-specific inhibitions were confirmed. Novel small molecule-kinase interactions, including off-target inhibitions, were identified and confirmed in secondary studies. By adopting this streamlined profiling process, researchers can quickly and efficiently profile compounds of interest on site.

  17. Optimization of algorithm of coding of genetic information of Chlamydia

    NASA Astrophysics Data System (ADS)

    Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.

    2018-04-01

    A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming the nucleotide sequence of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns of the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis genovars D, E, F, G, J and K, as well as Chlamydia psittaci serovar I, are generated. The algorithm for coding gene information into a speckle pattern is optimized. Fully developed speckles with Gaussian statistics have been used as the criterion of optimization for the gene-based speckles.
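The transformation can be sketched as: map each nucleotide to one of four phase levels, fill a two-dimensional phase screen, and take the far-field intensity as the speckle pattern; a fully developed speckle then has contrast (standard deviation over mean) near one. The nucleotide-to-phase mapping below is an assumption for illustration, not the authors' algorithm:

```python
import numpy as np

def sequence_to_speckle(seq, size=64):
    """Gene-to-speckle sketch: nucleotides -> 4-level phase screen -> |FFT|^2.

    The A/C/G/T-to-phase mapping is assumed here for illustration.
    """
    phase_of = {"A": 0.0, "C": np.pi / 2, "G": np.pi, "T": 3 * np.pi / 2}
    phases = np.array([phase_of[b] for b in seq])
    phases = np.resize(phases, size * size).reshape(size, size)  # tile/truncate
    field = np.exp(1j * phases)
    far_field = np.fft.fft2(field) / size
    return np.abs(far_field) ** 2

def speckle_contrast(intensity):
    """Contrast C = std/mean; C near 1 indicates fully developed
    speckle with Gaussian field statistics."""
    return intensity.std() / intensity.mean()
```

Checking that the contrast is close to one is a simple stand-in for the Gaussian-statistics optimization criterion mentioned in the abstract.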

  18. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to demonstrate a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple-parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver and the COPES-CONMIN optimization procedure into a user code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  19. TRO-2D - A code for rational transonic aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Davis, W. H., Jr.

    1985-01-01

    Features and sample applications of the transonic rational optimization (TRO-2D) code are outlined. TRO-2D includes the airfoil analysis code FLO-36, the CONMIN optimization code, and a rational approach to defining aero-function shapes for geometry modification. The program is part of an effort to develop an aerodynamically smart optimizer that will simplify and shorten the design process. The user has a choice of drag minimization with associated constraints on minimum lift, moment, and pressure distribution, a choice among 14 resident aero-function shapes, and options on aerodynamic and geometric constraints. Design variables such as the angle of attack, leading-edge radius and camber, shock strength and movement, supersonic pressure plateau control, etc., are discussed. The results of calculations for a reduced-leading-edge-camber transonic airfoil and a natural-laminar-flow airfoil are provided, showing that only four design variables need be specified to obtain satisfactory results.

  20. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
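The penalty-function formulation — rewarding beginning-of-cycle k_eff while penalizing power peaking above a limit — can be sketched in a few lines. The constants and function name are made up for illustration, not CIGARO's actual fitness:

```python
def loading_pattern_fitness(k_eff, peak_power, peak_limit=1.5, weight=10.0):
    """Maximize k_eff, but subtract a penalty proportional to how far the
    assembly peak power exceeds the allowed peaking limit."""
    violation = max(0.0, peak_power - peak_limit)
    return k_eff - weight * violation
```

Within a GA, patterns violating the limit are not discarded outright: the penalty merely steers selection toward feasible loadings, so a slightly lower-k_eff pattern that respects the peaking limit outranks a higher-k_eff pattern that violates it.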

  1. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    Symbol Map: library-file.library-unit(.subunit).SYMAP; Statement Map: library-file.library-unit(.subunit).SMAP; Type Map: library-file.library-unit(.subunit).TMAP ... SYMAP Symbol Map code generator; SMAP Updated Statement Map code generator; TMAP Type Map code generator. A.3.5 The PUNIT Command ... NAME Tmap (Core.Typemap) END. Example A-3: Compiler Command Stream for the Code Generator (Texas Instruments, Ada Optimizing Compiler)

  2. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
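The dynamic-programming idea — choosing event locations so that a global error criterion, rather than a local stability test, is minimized — can be illustrated with a simpler stand-in problem: optimally splitting a 1-D parameter track into K contiguous segments with minimum within-segment squared error. This is a generic DP sketch, not the paper's TD model:

```python
import numpy as np

def optimal_event_boundaries(track, k_segments):
    """Split `track` into k contiguous segments, minimizing the total
    squared error of approximating each segment by its mean.
    Returns (segment start indices after the first, total error)."""
    n = len(track)
    p1 = np.concatenate(([0.0], np.cumsum(track)))
    p2 = np.concatenate(([0.0], np.cumsum(np.square(track))))

    def sse(i, j):  # squared error of segment track[i..j], inclusive
        m = j - i + 1
        s = p1[j + 1] - p1[i]
        return p2[j + 1] - p2[i] - s * s / m

    INF = float("inf")
    dp = [[INF] * n for _ in range(k_segments + 1)]
    back = [[0] * n for _ in range(k_segments + 1)]
    for j in range(n):
        dp[1][j] = sse(0, j)
    for k in range(2, k_segments + 1):
        for j in range(k - 1, n):
            for i in range(k - 1, j + 1):  # segment k starts at index i
                cost = dp[k - 1][i - 1] + sse(i, j)
                if cost < dp[k][j]:
                    dp[k][j] = cost
                    back[k][j] = i
    starts, j = [], n - 1
    for k in range(k_segments, 1, -1):
        i = back[k][j]
        starts.append(i)
        j = i - 1
    return sorted(starts), dp[k_segments][n - 1]
```

Unlike a greedy spectral-stability scan, the backward DP pass guarantees that the chosen event locations are jointly optimal for the chosen error criterion, which is the property the optimized TD algorithm exploits.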

  3. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using evolutionary operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, to make use of in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the hexagonal-geometry core of the VVER-1000 reactor and for the TMI-1 PWR.
The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
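The GA loop described in this record (create an initial population, evaluate it, select and reproduce with elitism) can be sketched in a few lines. This is a toy, permutation-encoded illustration only, not the GARCO algorithm: the objective, operators, and parameters below are invented stand-ins for a real loading-pattern evaluation.

```python
import random

def evolve(fitness, n_slots, pop_size=40, generations=60, seed=1):
    """Minimal permutation-encoded GA: each chromosome is an ordering of
    fuel-assembly indices over core slots (a toy stand-in for an LP)."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_slots), n_slots) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_slots)       # cut-and-fill crossover,
            child = a[:cut] + [g for g in b if g not in a[:cut]]  # keeps validity
            i, j = rng.sample(range(n_slots), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: prefer assemblies sorted by index (stands in for cycle length).
best = evolve(lambda lp: -sum(abs(g - i) for i, g in enumerate(lp)), n_slots=8)
```

A real LP optimizer would replace the toy objective with a call to the core physics code and would encode the heuristic rules mentioned in the record into the operators.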

  4. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  5. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  6. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless channel; the method centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
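The source-coding half of the scheme in this record can be illustrated with a small sketch: an orthonormal 8-point DCT applied separably to an 8x8 block, followed by a uniform scalar quantizer on the coefficients. The block content and quantizer step are illustrative; the channel model and the bit-allocation algorithm of the record are omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C @ x gives the 1-D DCT of x."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)
block = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # smooth ramp block
coeffs = C @ block @ C.T            # separable 2-D DCT
q = 2.0                             # uniform scalar quantizer step
dequant = np.round(coeffs / q) * q  # quantize / dequantize the coefficients
recon = C.T @ dequant @ C           # inverse 2-D DCT

energy = coeffs ** 2
compaction = energy[:2, :2].sum() / energy.sum()  # low-frequency energy share
```

For a smooth block, almost all signal energy lands in the low-frequency coefficients, which is what makes coefficient-wise quantizer design and bit allocation effective.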

  7. Numerical optimization of three-dimensional coils for NSTX-U

    DOE PAGES

    Lazerson, S. A.; Park, J. -K.; Logan, N.; ...

    2015-09-03

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibria (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  8. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
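Both figures of merit in this record (peak sidelobe and sidelobe energy of the periodic autocorrelation) are easy to compute directly. A short sketch, using a length-7 m-sequence as a known optimal example (all off-peak lobes equal -1):

```python
def periodic_autocorr(code):
    """Periodic (cyclic) autocorrelation of a +/-1 sequence."""
    n = len(code)
    return [sum(code[i] * code[(i + s) % n] for i in range(n)) for s in range(n)]

# Length-7 m-sequence mapped to +/-1; its off-peak periodic ACF is flat at -1.
mseq = [1, 1, 1, -1, 1, -1, -1]
acf = periodic_autocorr(mseq)
peak_sidelobe = max(abs(v) for v in acf[1:])       # criterion (1)
sidelobe_energy = sum(v * v for v in acf[1:])      # criterion (2)
```

An exhaustive search of the kind the record describes would evaluate these two quantities over candidate codes of each length and keep the minimizers.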

  9. Two Classes of New Optimal Asymmetric Quantum Codes

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojing; Zhu, Shixin; Kai, Xiaoshan

    2018-03-01

    Let q be an even prime power and ω be a primitive element of F_{q^2}. By analyzing the structure of cyclotomic cosets, we determine a sufficient condition for ω^{q-1}-constacyclic codes over F_{q^2} to be Hermitian dual-containing codes. By the CSS construction, two classes of new optimal asymmetric quantum error-correcting codes (AQECCs) are obtained according to the Singleton bound for AQECCs.

  10. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. This limitation made design optimization problematic. The deficiencies were alleviated through the use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds for the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability was created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  11. Cascade Optimization for Aircraft Engines With Regression and Neural Network Analysis - Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.

  12. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC suffer either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of the original frames, the distortion model for interpolated frames is established according to the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21% ~ 42% reduction in BDBR compared with the traditional approach of FRUC cascaded with coding.
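The rate-distortion optimization this record builds on is, at its core, the minimization of a Lagrangian cost J = D + λR over candidate coding modes. A minimal sketch of that decision rule follows; the mode names and the D/R numbers are made up for illustration (including a hypothetical FRUC-interpolated mode), and no actual HEVC encoding is performed.

```python
def rd_mode_decision(candidates, lam):
    """Pick the candidate minimizing the Lagrangian cost J = D + lambda * R,
    the standard rate-distortion trade-off used in encoder mode selection."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

# Hypothetical per-block candidates: distortion D (e.g. SSD) and rate R (bits).
modes = [
    {"name": "skip",   "D": 40.0, "R": 1.0},
    {"name": "inter",  "D": 12.0, "R": 20.0},
    {"name": "interp", "D": 15.0, "R": 4.0},   # invented FRUC-interpolated mode
]
best_low_rate = rd_mode_decision(modes, lam=2.0)    # bits are expensive
best_high_rate = rd_mode_decision(modes, lam=0.1)   # distortion dominates
```

A larger λ penalizes rate and favors cheap modes; a smaller λ favors low-distortion modes, which is how the quantization parameter steers mode selection in practice.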

  13. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
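The coupling of Particle Swarm with Levenberg-Marquardt described in this record can be sketched generically: a swarm explores the parameter box globally, and its best point seeds a damped Gauss-Newton (LM) refinement. This is a bare-bones illustration on an invented calibration problem, not the MADS implementation; all parameters and the model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-3.0 * x)                 # synthetic noiseless "observations"

def residuals(p):
    return p[0] * np.exp(-p[1] * x) - y

def sse(p):
    r = residuals(p)
    return float(r @ r)

# Global stage: a bare-bones particle swarm over the box [0, 5]^2.
pos = rng.uniform(0.0, 5.0, size=(20, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
for _ in range(40):
    gbest = pbest[np.argmin(pbest_f)]
    vel = (0.7 * vel
           + 1.5 * rng.random((20, 1)) * (pbest - pos)
           + 1.5 * rng.random((20, 1)) * (gbest - pos))
    pos = np.clip(pos + vel, 0.0, 5.0)
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

# Local stage: Levenberg-Marquardt refinement from the swarm's best point,
# with a finite-difference Jacobian and adaptive damping.
p, lam = pbest[np.argmin(pbest_f)].copy(), 1e-3
for _ in range(60):
    r = residuals(p)
    J = np.column_stack([(residuals(p + h) - r) / 1e-6 for h in 1e-6 * np.eye(2)])
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if sse(p + step) < sse(p):
        p, lam = p + step, lam * 0.5        # accept step, trust the model more
    else:
        lam *= 2.0                          # reject step, increase damping
```

The division of labor mirrors the record's rationale: the stochastic global stage avoids poor local basins, and the gradient-based local stage converges quickly once near a minimum.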

  14. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  15. Wing Weight Optimization Under Aeroelastic Loads Subject to Stress Constraints

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Issac, J.; Macmurdy, D.; Guruswamy, Guru P.

    1997-01-01

    A minimum-weight optimization of the wing under aeroelastic loads subject to stress constraints is carried out. The loads for the optimization are based on aeroelastic trim. The design variables are the thickness of the wing skins and planform variables. The composite plate structural model incorporates first-order shear deformation theory, the wing deflections are expressed using Chebyshev polynomials, and a Rayleigh-Ritz procedure is adopted for the structural formulation. The aerodynamic pressures provided by the aerodynamic code at a discrete number of grid points are represented as a bilinear distribution and applied to the composite plate code to solve for the deflections and stresses in the wing. The lifting-surface aerodynamic code FAST is presently being used to generate the pressure distribution over the wing. The envisioned ENSAERO/Plate is an aeroelastic analysis code which combines ENSAERO version 3.0 (for analysis of wing-body configurations) with the composite plate code.

  16. Optimized iterative decoding method for TPC coded CPM

    NASA Astrophysics Data System (ADS)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    The Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates improvements and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system converges poorly. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show our method is effective in improving the convergence performance.

  17. The Role of AhR in Breast Cancer Development

    DTIC Science & Technology

    2004-07-01

    The Renilla luciferase vector phRL-TK (0.05 µg) was co-transfected with firefly luciferase reporter constructs (0.1 µg pGudLuc, 0.5-1.0 µg wildtype...Glo Luciferase system (Promega, Madison, WI) which allowed sequential reading of the firefly and Renilla signals. Cells were lysed according to the...Madison, WI). The Renilla signal was read after quenching the firefly output, thus allowing normalization between sample wells. The normalized firefly

  18. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are for an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). Those have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature. They included vectorization of the code to utilize the vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
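The scalarization of intermediate arrays mentioned in this record can be shown schematically: instead of materializing a full temporary array and then consuming it, the temporary lives in a scalar inside the fused loop, eliminating a whole array's worth of memory traffic. The sketch below is in Python for readability (the actual WRF code is Fortran) and the function names are invented; only the transformation pattern is the point.

```python
def temf_step_with_temp(u, v):
    """Original pattern: build a full intermediate array, then consume it."""
    shear = [(ui - vi) ** 2 for ui, vi in zip(u, v)]   # intermediate array
    return [s * 0.5 for s in shear]

def temf_step_scalarized(u, v):
    """Scalarized pattern: the intermediate is a scalar held in a register,
    so the fused loop makes one pass and allocates no temporary array."""
    out = []
    for ui, vi in zip(u, v):
        s = (ui - vi) ** 2      # scalar temporary instead of an array
        out.append(s * 0.5)
    return out
```

Both variants compute the same values; in compiled code the scalarized form reduces cache pressure and enables better vectorization.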

  19. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  20. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
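The fitness sharing technique central to this record divides each individual's raw fitness by a niche count, penalizing crowded regions of the landscape so the population spreads across multiple optima. A minimal sketch (the triangular sharing kernel and the scalar genomes are illustrative, not the paper's genetic-code encoding):

```python
def shared_fitness(raw, genomes, distance, sigma=2.0):
    """Fitness sharing: divide raw fitness by the niche count, i.e. the sum
    of a sharing kernel sh(d) = max(0, 1 - d/sigma) over the population."""
    shared = []
    for i, gi in enumerate(genomes):
        niche = sum(max(0.0, 1.0 - distance(gi, gj) / sigma) for gj in genomes)
        shared.append(raw[i] / niche)
    return shared

# Two clustered individuals vs. one isolated individual, equal raw fitness:
genomes = [0.0, 0.1, 5.0]
raw = [1.0, 1.0, 1.0]
out = shared_fitness(raw, genomes, distance=lambda a, b: abs(a - b))
```

The isolated individual keeps its full fitness while the clustered pair split theirs, which is what lets the algorithm map out how many separated peaks the landscape actually has.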

  1. Program user's manual for optimizing the design of a liquid or gaseous propellant rocket engine with the automated combustor design code AUTOCOM

    NASA Technical Reports Server (NTRS)

    Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.

    1973-01-01

    This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.

  2. Self-adaptive multimethod optimization applied to a tailored heating forging process

    NASA Astrophysics Data System (ADS)

    Baldan, M.; Steinberg, T.; Baake, E.

    2018-05-01

    The presented paper describes an innovative self-adaptive multi-objective optimization code. The investigation goals concern proving the superiority of this code compared to NSGA-II and applying it to an inductor design case study for a “tailored” heating forging application. The choice of the frequency and the heating time is followed by the determination of the number of turns and their positions. Finally, a straightforward optimization is performed in order to minimize energy consumption using “optimal control”.

  3. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  4. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
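The central quantity in this record, the population Fisher information, has a closed form for independent Poisson neurons: FI(s) = Σ_i f_i'(s)² / f_i(s), and the Cramér-Rao bound 1/FI(s) lower-bounds the mean-squared error of any unbiased estimator. A small numerical sketch with Gaussian tuning curves (gain, width, and tiling are illustrative):

```python
import math

def fisher_info(s, centers, gain=10.0, width=1.0):
    """Population Fisher information at stimulus s for independent Poisson
    neurons with Gaussian tuning curves f_i(s): FI(s) = sum_i f_i'(s)^2 / f_i(s)."""
    fi = 0.0
    for c in centers:
        f = gain * math.exp(-((s - c) ** 2) / (2 * width ** 2))
        fprime = f * (c - s) / width ** 2      # derivative of the Gaussian bump
        fi += fprime ** 2 / f                   # Poisson noise: variance = mean
    return fi

centers = [i * 0.5 for i in range(-10, 11)]     # evenly tiled preferred stimuli
fi = fisher_info(0.25, centers)
crlb = 1.0 / fi                                 # Cramer-Rao bound on estimator MSE
```

As the record cautions, maximizing FI (e.g. by narrowing the tuning width) need not minimize the actual mean-squared error when firing is sparse, which is why the authors optimize MSE directly.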

  5. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    NASA Technical Reports Server (NTRS)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves that by two techniques in the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code, and the results are stored as a database. That code is then executed and referred to the above database. Two applications, one to an airborne laser system and one to an aircraft optimization, illustrate the method.
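The first technique in this record (disperse trial points over a hypersphere, run the expensive analysis once per point, then fit a response surface that the optimizer queries instead) can be sketched as follows. The expensive analysis is a stand-in quadratic, the point count and quadratic basis are illustrative, and the kriging alternative is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere_points(m, n, radius=1.0):
    """m trial points dispersed over an n-dimensional hypersphere surface:
    normalized Gaussian draws are uniformly distributed on the sphere."""
    g = rng.normal(size=(m, n))
    return radius * g / np.linalg.norm(g, axis=1, keepdims=True)

def expensive_analysis(x):                  # stand-in for the costly solver
    return float((x - 0.3) @ (x - 0.3))

# Build the database once, before any optimizer runs.
X = sphere_points(60, 2)
y = np.array([expensive_analysis(x) for x in X])

# Quadratic response surface fitted by least squares: y ~ c0 + c.x + x.Q.x
A = np.column_stack([np.ones(len(X)), X, X[:, 0] ** 2,
                     X[:, 0] * X[:, 1], X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x):
    """Cheap stand-in the optimizer can call thousands of times."""
    return float(coef @ np.array([1.0, x[0], x[1],
                                  x[0] ** 2, x[0] * x[1], x[1] ** 2]))
```

The optimizer then works against `surrogate`, so the number of expensive analyses is exactly the number of trial points the user chose up front.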

  6. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  7. Effects of the Changiang river discharge on the change in ocean and atmosphere over the East Asian region

    NASA Astrophysics Data System (ADS)

    Kim, M. H.; Lim, Y. J.; Kang, H. S.; Kim, B. J.; Cho, C.

    2017-12-01

    This study investigates the effects of freshwater from the Changjiang river basin on the East Asian region for the summer season. To do this, we used the global seasonal forecasting system (GloSea5) of the KMA (Korea Meteorological Administration). GloSea5 consists of atmosphere, ocean, sea ice and land models. It also has a river routing model (TRIP), which links the land and ocean through freshwater. This component is very important in long-term forecasting because it can change the air-sea interaction. To further improve the freshwater performance over the East Asian region, we realistically modified the river mouth, direction and storage around the Changjiang river basin in TRIP within GloSea5. Here, a comparison study has been carried out among an experiment with no freshwater forcing to the ocean model (TRIP-OFF), the operational freshwater-coupled experiment based on the original ancillary files (TRIP-ON) and the improved one (TRIP-MODI), and the results are evaluated against reanalysis data. As a result, the amount of freshwater entering the Yellow Sea increases in the TRIP-ON experiment, which contributes to the improvement of the bias and RMSE of local SST over East Asia. The implementation of the realistic river-related ancillary files (TRIP-MODI) improves the abnormal salinity distribution around the Changjiang river mouth, and the related SST cold bias is reduced by about 0.37˚C for July over the East Sea. Warm SST over this region is caused by a barrier layer (BL). Freshwater flux and salinity changes can create a pronounced salinity-induced mixed layer (ML) above the top of the thermocline. The layer between the base of the ML and the top of the thermocline is called the barrier layer because it isolates the warm surface water from the cold deep water. In addition, the improved freshwater forcing can change the local volume transport from the Kuroshio to the Korea Strait; the changed transport and the SST over the Korea Strait are correlated at 0.57 at the 95% confidence level.
For the atmospheric variables in the East Asian region, the error statistics of temperature in TRIP-MODI are the best, with a reduction of about 0.32˚C for July, but there is no difference in the precipitation distribution among the experiments.

  8. Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.

    1972-01-01

    A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.

  9. A stimulus-dependent spike threshold is an optimal neural coder

    PubMed Central

    Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama

    2015-01-01

    A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
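    The encode/decode loop described above can be sketched in a few lines: a low-pass "internal decoder" tracks the reconstruction, and a spike fires whenever the coding error reaches the threshold. This is a minimal illustration, not the authors' closed-form optimal coder; the leak factor, threshold value, and unit spike kernel below are assumed for illustration.

```python
def spike_encode(signal, theta=0.5, leak=0.9):
    """Signal-dependent-threshold encoder sketch: an internal low-pass
    'decoder' (the leaky variable `recon`) mimics a post-synaptic
    membrane; a spike is emitted when the coding error signal - recon
    reaches the threshold theta (assumed parameters)."""
    recon = 0.0          # internal decoder state (reconstruction)
    spikes, trace = [], []
    for t, s in enumerate(signal):
        recon *= leak                 # low-pass decay of the reconstruction
        if s - recon >= theta:        # coding error hits the spike threshold
            spikes.append(t)
            recon += theta            # each spike adds a fixed kernel
        trace.append(recon)
    return spikes, trace
```

    With a constant input, the encoder emits a short burst until the reconstruction catches up, then stays silent, which is the energy-saving behavior the abstract describes.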

  10. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.
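    A toy version of this mixed-integer problem (continuous positions plus a discrete turbine-type choice) can be attacked with plain random search. The turbine types, their power/cost numbers, and the spacing constraint below are hypothetical placeholders; TTP_OPT's actual algorithm is not reproduced here.

```python
import itertools
import math
import random

# Hypothetical turbine catalog: net value per turbine = power - cost.
TYPES = {"A": {"power": 2.0, "cost": 1.0}, "B": {"power": 3.5, "cost": 2.2}}

def layout_value(layout, min_spacing=2.0):
    """Objective for a layout of (x, y, type) tuples; layouts violating
    the minimum spacing constraint are rejected outright."""
    for (x1, y1, _), (x2, y2, _) in itertools.combinations(layout, 2):
        if math.hypot(x1 - x2, y1 - y2) < min_spacing:
            return float("-inf")
    return sum(TYPES[t]["power"] - TYPES[t]["cost"] for _, _, t in layout)

def random_search(n_turbines=3, iters=2000, seed=0):
    """Sample random mixed (continuous + discrete) layouts on a 10x10
    site and keep the best feasible one."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        layout = [(rng.uniform(0, 10), rng.uniform(0, 10),
                   rng.choice(list(TYPES))) for _ in range(n_turbines)]
        val = layout_value(layout)
        if val > best_val:
            best, best_val = layout, val
    return best, best_val
```

    Random search scales poorly, but it makes the structure of the problem concrete: every candidate mixes continuous placement variables with an integer type choice, which is exactly what makes the real problem hard.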

  11. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  12. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
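    The distortion-for-compression trade-off at the heart of this scheme can be illustrated with a fixed Haar wavelet and simple coefficient thresholding. This is only a stand-in: the paper optimizes a parameterized mother wavelet per signal, whereas the sketch below uses plain Haar and keeps the largest-magnitude coefficients.

```python
def haar_fwd(x):
    """Full multilevel orthonormal Haar transform; len(x) must be a
    power of two. Returns [approximation] + detail coefficients."""
    x = list(x)
    out = []
    while len(x) > 1:
        a = [(x[i] + x[i + 1]) / 2 ** 0.5 for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / 2 ** 0.5 for i in range(0, len(x), 2)]
        out = d + out
        x = a
    return x + out

def compress_distortion(x, keep_ratio=0.5):
    """Zero all but the largest-magnitude wavelet coefficients and
    report the relative squared distortion."""
    c = haar_fwd(x)
    k = max(1, int(len(c) * keep_ratio))
    thresh = sorted(map(abs, c), reverse=True)[k - 1]
    kept = [v if abs(v) >= thresh else 0.0 for v in c]
    # Orthonormal transform: distortion equals the dropped coefficient energy.
    dropped = sum((a - b) ** 2 for a, b in zip(c, kept))
    energy = sum(v * v for v in c)
    return dropped / energy if energy else 0.0
```

    Because the transform is orthonormal, the distortion is exactly the energy of the discarded coefficients, which is why the choice of mother wavelet (how well it concentrates signal energy) drives the distortion rate reported in the abstract.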

  13. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
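    The RD-optimized assignment of protection to coding units can be sketched as a greedy marginal-gain allocation: spend each extra parity bit where it buys the largest expected distortion reduction. The unit names and (bits, distortion) option tables below are hypothetical, and this greedy loop is a simplification of the paper's RCPC-based optimization.

```python
def allocate_protection(units, budget):
    """Greedy RD allocation sketch. `units` maps a coding-unit name to a
    list of (extra_parity_bits, expected_distortion) options ordered from
    least to most protected; `budget` caps total parity bits."""
    choice = {u: 0 for u in units}                    # chosen option index
    spent = sum(units[u][0][0] for u in units)        # base-option bits
    while True:
        best, best_gain, best_extra = None, 0.0, 0
        for u, opts in units.items():
            i = choice[u]
            if i + 1 < len(opts):
                extra = opts[i + 1][0] - opts[i][0]
                gain = (opts[i][1] - opts[i + 1][1]) / extra
                if gain > best_gain and spent + extra <= budget:
                    best, best_gain, best_extra = u, gain, extra
        if best is None:          # no affordable improvement remains
            return choice
        choice[best] += 1
        spent += best_extra
```

    Channel adaptivity enters by recomputing the expected-distortion tables from the current channel estimate before each allocation, which mirrors the adaptive behavior described above.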

  14. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
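    The digital-plus-analog layering can be shown with a one-shot scalar sketch: quantize the sample (digital layer), then send the quantization error on the analog channel scaled to its share of the power budget. The step size and power split below are assumed values, and the real systems operate on sequences with predictive or transform coding rather than single samples.

```python
def hda_transmit(sample, q_step=0.5, p_analog=0.3):
    """One-shot hybrid digital-analog sketch (assumed parameters):
    quantize the sample, then linearly encode the quantization error
    for the analog layer."""
    q = round(sample / q_step) * q_step        # digital layer
    err = sample - q                           # analog layer payload
    gain = (p_analog ** 0.5) / (q_step / 2)    # scale error to analog power
    return q, gain * err

def hda_receive(q, analog_rx, q_step=0.5, p_analog=0.3, noise=0.0):
    """Invert the analog scaling and add the digital reconstruction."""
    gain = (p_analog ** 0.5) / (q_step / 2)
    return q + (analog_rx + noise) / gain
```

    The graceful behavior under CSNR variation comes from the analog layer: channel noise perturbs the recovered quantization error continuously instead of causing the cliff effect of a purely digital system.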

  15. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
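    A classical toy makes the core point concrete: a hard decoder tailored to the true (here, asymmetric) noise model can decide differently from one tuned to symmetric noise. The sketch below does maximum-likelihood hard decoding of a 3-bit repetition code over an asymmetric binary channel; it is a stand-in for intuition only, not the paper's algorithm for general Markovian quantum noise.

```python
def ml_decode(received, p0to1=0.3, p1to0=0.05):
    """Maximum-likelihood hard decoding of a 3-bit repetition code under
    an asymmetric binary channel (assumed flip probabilities). A decoder
    optimized for symmetric noise would just take a majority vote."""
    def likelihood(bit):
        l = 1.0
        for r in received:
            if bit == 0:
                l *= p0to1 if r == 1 else 1 - p0to1
            else:
                l *= p1to0 if r == 0 else 1 - p1to0
        return l
    return 0 if likelihood(0) >= likelihood(1) else 1
```

    For the received word [1, 1, 0], majority vote returns 1, but because 0→1 flips are six times likelier than 1→0 flips here, the likelihood-based decoder returns 0, illustrating why matching the decoder to the actual noise model lowers failure rates.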

  16. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  17. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  18. Creation of Artificial Ionospheric Layers Using High-Power HF Waves

    DTIC Science & Technology

    2010-01-30

    Artificial ionospheric layers were created using the 3.6 MW High Frequency Active Auroral Research Program (HAARP) transmitter in Gakona, Alaska. The HF-driven ionization process is initiated near the 2nd electron gyroharmonic at 220 km altitude. Copyright 2010 by the American Geophysical Union.

  19. Determination of captopril in biological samples by high-performance liquid chromatography with ThioGlo 3 derivatization.

    PubMed

    Aykin, N; Neal, R; Yusof, M; Ercal, N

    2001-11-01

    Captopril, a well-known angiotensin converting enzyme (ACE) inhibitor, is widely used for treatment of arterial hypertension. Recent studies suggest that it may also act as a scavenger of free radicals because of its thiol group. Therefore, the present study describes a rapid, sensitive and relatively simple method for the detection of captopril in biological tissues with reverse-phase HPLC. Captopril was first derivatized with ThioGlo 3 [3H-Naphto[2,1-b]pyran,9-acetoxy-2-(4-(2,5-dihydro-2,5-dioxo-1H-pyrrol-1-yl)phenyl-3-oxo-)]. It was then detected by fluorescence-HPLC using an Astec C(18) column as the stationary phase and a water:acetonitrile:acetic acid:phosphoric acid mixture (50:50; 1 mL/L acids) as the mobile phase (excitation wavelength, 365 nm; emission wavelength, 445 nm). The calibration curve for captopril was linear over a range of 10-2500 nM, and the coefficients of variation for the within- and between-run precision for captopril were 0.5 and 3.8%, respectively. The detection limit of captopril with this method was found to be 200 fmol/20 microL injection volume. Its relative recovery from biological samples was determined to range from 93.3 to 105.3%. Based on these results, we believe that our method is advantageous for captopril determination. Copyright 2001 John Wiley & Sons, Ltd.

  20. An experimental system for flood risk forecasting at global scale

    NASA Astrophysics Data System (ADS)

    Alfieri, L.; Dottori, F.; Kalas, M.; Lorini, V.; Bianchi, A.; Hirpa, F. A.; Feyen, L.; Salamon, P.

    2016-12-01

    Global flood forecasting and monitoring systems are nowadays a reality and are being applied by an increasing range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk based forecasts, combining streamflow estimations with expected inundated areas and flood impacts. To this end, we have developed an experimental procedure for near-real time flood mapping and impact assessment based on the daily forecasts issued by the Global Flood Awareness System (GloFAS). The methodology translates GloFAS streamflow forecasts into event-based flood hazard maps based on the predicted flow magnitude and the forecast lead time and a database of flood hazard maps with global coverage. Flood hazard maps are then combined with exposure and vulnerability information to derive flood risk. Impacts of the forecasted flood events are evaluated in terms of flood prone areas, potential economic damage, and affected population, infrastructures and cities. To further increase the reliability of the proposed methodology we integrated model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification of impact forecasts. The preliminary tests provided good results and showed the potential of the developed real-time operational procedure in helping emergency response and management. In particular, the link with social media is crucial for improving the accuracy of impact predictions.
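    The hazard-times-exposure step of the impact assessment reduces to a grid overlay: sum the exposed population wherever the forecast water depth exceeds an impact criterion. The toy grids and the 0.5 m threshold below are illustrative assumptions, not GloFAS values.

```python
def affected_population(hazard, population, depth_threshold=0.5):
    """Overlay a forecast flood-hazard grid (water depth, metres) on a
    co-registered population grid and total the exposed people.
    The depth threshold is an assumed impact criterion."""
    total = 0
    for hazard_row, pop_row in zip(hazard, population):
        for depth, people in zip(hazard_row, pop_row):
            if depth >= depth_threshold:
                total += people
    return total
```

    The same overlay pattern, with damage-per-cell tables instead of population counts, yields the potential economic damage estimates mentioned above.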

  1. Bilayer Protograph Codes for Half-Duplex Relay Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses simultaneously two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity.
These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing it is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, a modular design, and rate compatibility.
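    The protograph construction itself is easy to sketch: a small protomatrix is "lifted" by replacing each nonzero entry with a cyclically shifted identity block. The shift choice below (i*j mod Z) is an arbitrary illustrative rule, not the optimized bilayer lifting of this work.

```python
def lift_protograph(proto, Z):
    """Expand a protomatrix into a full LDPC parity-check matrix by
    replacing each nonzero entry with a Z x Z cyclic permutation.
    The shift rule here is arbitrary, chosen only for illustration."""
    rows, cols = len(proto), len(proto[0])
    H = [[0] * (cols * Z) for _ in range(rows * Z)]
    for i in range(rows):
        for j in range(cols):
            if proto[i][j]:
                shift = (i * j) % Z
                for k in range(Z):
                    H[i * Z + k][j * Z + (k + shift) % Z] = 1
    return H
```

    Lifting preserves the protograph's row and column weights, which is why threshold analysis done on the tiny protograph carries over to the full code, and why the construction keeps encoding simple and modular.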

  2. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
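    As a concrete instance of the space-time block codes the formulation considers, the standard Alamouti scheme for two transmit antennas can be written in a few lines (shown here with one receive antenna and channel gains known at the receiver; this is the textbook code, not the paper's power-allocation solution).

```python
def alamouti_encode(s1, s2):
    """Alamouti 2x2 space-time block: rows are time slots, columns are
    transmit antennas."""
    return [[s1, s2], [-s2.conjugate(), s1.conjugate()]]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining for a single receive antenna with known complex
    channel gains h1, h2; recovers both symbols with diversity order 2."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1, s2
```

    The combining step is linear in the received samples, which is what keeps the receiver-side power cost of the multiple-antenna scheme low in the kind of power budget the abstract optimizes.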

  3. Multi-level of Fidelity Multi-Disciplinary Design Optimization of Small, Solid-Propellant Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Roshanian, Jafar; Jodei, Jahangir; Mirshams, Mehran; Ebrahimi, Reza; Mirzaee, Masood

    A new automated multi-level-of-fidelity Multi-Disciplinary Design Optimization (MDO) methodology has been developed at the MDO Laboratory of K.N. Toosi University of Technology. This paper explains the new design approach through the formulation of the developed disciplinary modules. A conceptual design for a small, solid-propellant launch vehicle was considered with a two-level-of-fidelity structure. Low and medium level-of-fidelity disciplinary codes were developed and linked. Appropriate design and analysis codes were defined according to their effect on the conceptual design process. Simultaneous optimization of the launch vehicle was performed at the discipline level and the system level. Propulsion, aerodynamics, structure and trajectory disciplinary codes were used. To reach the minimum launch weight, the low LoF code first searches the whole design space to meet the mission requirements. The medium LoF code then receives the output of the low LoF code and gives a value near the optimum launch weight with more detail and higher fidelity.

  4. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  5. On the optimality of a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.

    1993-01-01

    Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images, and the results further validate the coder's optimality.
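    A single Rice code option is simple to implement: split each sample into a unary-coded quotient and a k-bit binary remainder. The sketch below shows one fixed k; the adaptive selection of the best k per block, which is the heart of the universal coder, is not shown.

```python
def rice_encode(n, k):
    """Rice code with parameter k for a non-negative integer n:
    unary-coded quotient (n >> k), a '0' terminator, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def rice_decode(bits, k):
    """Invert rice_encode: count leading ones, then read k remainder bits."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

    For geometrically distributed (Laplacian-like) symbols, an appropriate k makes this code match Huffman performance, which is the equivalence the study establishes.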

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  7. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.

  8. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    PubMed

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often: not widely interoperable; or, have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three-million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97 % precision when considering only miscategorizations ("correctness precision") and 52 % precision using a gold-standard of optimal placement ("optimality precision"). 
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93 % of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily-validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and successful grouping of retired with non-retired codes.
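    The numerical-similarity idea can be sketched as placing a retired code under the grouper whose representative code shares the longest numeric prefix with it. This is a simplification (the actual method targets the most specific grouper category in the BioPortal hierarchy), and the grouper codes below are hypothetical.

```python
def place_retired_code(code, groupers):
    """Assign a retired CPT code to the grouper with the longest shared
    numeric prefix -- a simplified sketch of numerical-similarity
    placement; `groupers` holds hypothetical representative codes."""
    def shared_prefix(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n
    return max(groupers, key=lambda g: shared_prefix(code, g))
```

    Because CPT assigns numerically adjacent codes to related procedures, a long shared prefix is a workable proxy for clinical relatedness, which is why the simple rule achieves high correctness precision.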

  9. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and initial results from our multi-turbine type and placement optimization (TTP_OPT) runs.

  11. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, hydrogen utilization in the PAFC, and plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model; a mixed penalty function method combined with Hooke and Jeeves pattern search was chosen for this specific optimization problem.
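
    The Hooke and Jeeves pattern search mentioned above is a classic derivative-free method: exploratory coordinate moves followed by an extrapolating pattern move, with the step shrinking on failure. A minimal sketch (illustrative only, not the COMPUTE implementation; all names and parameters are invented):

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimize f by Hooke-Jeeves pattern search (derivative-free)."""
    def explore(base, s):
        # Exploratory moves: perturb each coordinate by +/- s, keep improvements.
        x = list(base)
        for i in range(len(x)):
            for delta in (s, -s):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    best = list(x0)
    for _ in range(max_iter):
        new = explore(best, step)
        if f(new) < f(best):
            # Pattern move: extrapolate along the successful direction.
            pattern = [2 * n - b for n, b in zip(new, best)]
            candidate = explore(pattern, step)
            best = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol:
                break
    return best
```

    In a penalty-function setting like the one described, f would be the objective plus penalty terms for violated constraints.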

  12. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test, and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is important to structural design engineers who wish to capitalize on optimization methods to ensure that their designs are optimized for their intended applications. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. The implementation was tested using two truss structure models, optimizing their designs for minimum weight subject to multiple loading conditions and displacement and stress constraints. The process is generalized, however, so that an engineer could design other types of elements by adding to or modifying parts of the code.

  13. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.

  14. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.

  15. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
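
    The block-interleaving idea can be sketched with the simplest MDS code, a single parity check (SPC), which corrects one erasure per codeword: interleaving I codewords column-wise protects any burst of up to I erased symbols, since the burst touches each codeword at most once. A simplified illustration (not the referenced design; helper names are invented):

```python
def spc_encode(block):
    """Append an even (XOR) parity symbol to k data symbols."""
    parity = 0
    for b in block:
        parity ^= b
    return block + [parity]

def interleave(codewords):
    # Transmit column-by-column so a burst of up to len(codewords) symbols
    # hits each codeword at most once (interleaving depth = #codewords).
    n = len(codewords[0])
    return [cw[j] for j in range(n) for cw in codewords]

def deinterleave(stream, depth, n):
    return [[stream[j * depth + i] for j in range(n)] for i in range(depth)]

def spc_recover(word):
    # A single erasure (None) in an SPC codeword is the XOR of the rest.
    missing = word.index(None)
    parity = 0
    for i, b in enumerate(word):
        if i != missing:
            parity ^= b
    word[missing] = parity
    return word[:-1]          # drop parity, return data symbols
```

    An RS code would correct longer per-codeword erasure runs, but the interleaving mechanics are identical.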

  16. A finite element code for electric motor design

    NASA Technical Reports Server (NTRS)

    Campbell, C. Warren

    1994-01-01

    FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses first-order elements with Newton iteration for the nonlinear solution. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.

  17. Constructing a Pre-Emptive System Based on a Multidimensional Matrix and Autocompletion to Improve Diagnostic Coding in Acute Care Hospitals.

    PubMed

    Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice

    2016-01-01

    Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public and private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity and information on associated codes drawn from optimized knowledge bases of diagnostic codes.
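
    A common way to implement such autocompletion is a prefix trie over diagnosis labels. The sketch below is generic, not the paper's system; the class name and the handful of ICD-10-style entries are purely illustrative:

```python
class CodeTrie:
    """Prefix index for autocompleting diagnosis labels (illustrative)."""

    def __init__(self):
        self.children = {}
        self.codes = []          # (label, code) pairs ending at this node

    def insert(self, label, code):
        node = self
        for ch in label.lower():
            node = node.children.setdefault(ch, CodeTrie())
        node.codes.append((label, code))

    def complete(self, prefix):
        """Return every (label, code) entry whose label starts with prefix."""
        node = self
        for ch in prefix.lower():
            if ch not in node.children:
                return []
            node = node.children[ch]
        out, stack = [], [node]
        while stack:                      # collect every entry below the prefix
            n = stack.pop()
            out.extend(n.codes)
            stack.extend(n.children.values())
        return sorted(out)
```

    A production system would rank completions by semantic proximity and code-association statistics rather than alphabetically, as the abstract suggests.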

  18. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  19. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    ERIC Educational Resources Information Center

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  20. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
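
    The design criterion can be illustrated on a simpler problem: brute-force the symmetric thresholds of a 3-bit (8-level) quantizer to maximize the mutual information between a BPSK input and the quantized AWGN channel output. This is a hedged stand-in for the paper's message quantizer, not its actual method; all names and parameter values are invented:

```python
import itertools
import math

def mutual_info(thresholds, sigma):
    """I(X;Q) for BPSK x=+/-1 over AWGN, y quantized by symmetric thresholds."""
    edges = ([-math.inf] + sorted(-t for t in thresholds) + [0.0]
             + sorted(thresholds) + [math.inf])

    def cdf(z, mean):   # Gaussian CDF with the given mean and std dev sigma
        return 0.5 * (1 + math.erf((z - mean) / (sigma * math.sqrt(2))))

    info = 0.0
    for lo, hi in zip(edges, edges[1:]):
        p_pos = cdf(hi, +1) - cdf(lo, +1)   # P(bin | x = +1)
        p_neg = cdf(hi, -1) - cdf(lo, -1)   # P(bin | x = -1)
        p = 0.5 * (p_pos + p_neg)
        for cond in (p_pos, p_neg):
            if cond > 0:
                info += 0.5 * cond * math.log2(cond / p)
    return info

def best_quantizer(sigma, grid):
    """Exhaustively pick the 3 positive thresholds of a 3-bit quantizer."""
    return max(itertools.combinations(grid, 3),
               key=lambda c: mutual_info(c, sigma))
```

    The paper's quantizer acts on iteratively exchanged decoder messages rather than raw channel samples, but the maximize-mutual-information objective is the same.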

  1. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of the macro coding units is formed by a random discrete arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface to generate the required four-beam radiations with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  2. Therapeutic Value of PLK1 Knockdown in Combination with Prostate Cancer Drugs in PIM-1 Overexpressing Prostate Cancer Cells

    DTIC Science & Technology

    2014-11-13

    PIM kinases are not required for essential cellular functions . Furthermore, the presence of a unique hinge region in the ATP-binding site of PIM1...washing and blocking, cells were incubated with the appropriate primary antibodies overnight and incubated with fluorescent secondary antibodies...determined after 72 hrs of reverse transfection by using the CellTiter-Glo Luminescent cell viability assay and the results were normalized to RISC -free siRNA

  3. Evaluation of Instrumentation for Measuring Undissolved Water in Aviation Turbine Fuels per ASTM D3240

    DTIC Science & Technology

    2015-11-05

    Undissolved Water in Aviation Turbine Fuels per ASTM D3240 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Joel Schmitigal... water ) in Aviation Turbine Fuels per ASTM D3240 15. SUBJECT TERMS fuel, JP-8, aviation fuel, contamination, free water , undissolved water , Aqua-Glo 16...Michigan 48397-5000 Evaluation of Instrumentation for Measuring Undissolved Water in Aviation Turbine Fuels per ASTM D3240 Joel Schmitigal Force

  4. Use of a Common Assessment Methodology in Support of Joint Training, Capability Development, and Experimentation

    DTIC Science & Technology

    2007-06-01

    at the joint level on the actual functions they perform. The generic terms include Air Command and Control Agency ( ACCA ), Air Support Control...in the supporting text. USJFCOM 10/22/2007 16UNCLASSIFIED Naval Surface Fires Corps/MEF FSCA JTAC ACCA ASCA Div FSCA BCT/Regt FSCA Bn FSCA TACP TACP...FSCA/ ACCA CAS Aircraft FAC(A) Indirect Surface Fires Hostile Targets WOC TACP GLO Legend ACCA Air Command and Control Agency ISR Intelligence

  5. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
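
    The "best of several easily implemented variable length coding algorithms" idea is commonly exemplified by Rice/Golomb split-sample options: each option keeps k low-order bits verbatim and codes the remaining quotient in unary, and the coder picks the k yielding the shortest block. A generic sketch under those assumptions (not the VLSI module's exact selection rule):

```python
def rice_encode_block(samples, k):
    """Rice code: k low-order bits verbatim, quotient in unary per sample."""
    bits = []
    for s in samples:
        q, r = s >> k, s & ((1 << k) - 1)
        bits.extend([1] * q + [0])                 # unary quotient, 0-terminated
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode_block(bits, k, n):
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == 1:                        # read unary quotient
            q, i = q + 1, i + 1
        i += 1                                     # skip the 0 terminator
        r = 0
        for _ in range(k):                         # read k remainder bits
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

def best_option(samples, max_k=8):
    """Adaptively pick the split parameter k giving the shortest block."""
    return min(range(max_k + 1),
               key=lambda k: len(rice_encode_block(samples, k)))
```

    Low-entropy blocks favor small k (mostly unary), high-entropy blocks favor large k (mostly verbatim bits), which is what gives the structure its broad entropy range.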

  6. Optimal lightpath placement on a metropolitan-area network linked with optical CDMA local nets

    NASA Astrophysics Data System (ADS)

    Wang, Yih-Fuh; Huang, Jen-Fa

    2008-01-01

    A flexible optical metropolitan-area network (OMAN) [J.F. Huang, Y.F. Wang, C.Y. Yeh, Optimal configuration of OCDMA-based MAN with multimedia services, in: 23rd Biennial Symposium on Communications, Queen's University, Kingston, Canada, May 29-June 2, 2006, pp. 144-148] structured with OCDMA linkage is proposed to support multimedia services with multi-rate or various qualities of service. To prioritize transmissions in OCDMA, the orthogonal variable spreading factor (OVSF) codes widely used in wireless CDMA are adopted. In addition, for feasible multiplexing, unipolar OCDMA modulation [L. Nguyen, B. Aazhang, J.F. Young, All-optical CDMA with bipolar codes, IEEE Electron. Lett. 31 (6) (1995) 469-470] is used to generate the code selector of multi-rate OMAN, and a flexible fiber-grating-based system is used for the equipment on OCDMA-OVSF code. These enable an OMAN to assign suitable OVSF codes when creating different-rate lightpaths. How to optimally configure a multi-rate OMAN is a challenge because of displaced lightpaths. In this paper, a genetically modified genetic algorithm (GMGA) [L.R. Chen, Flexible fiber Bragg grating encoder/decoder for hybrid wavelength-time optical CDMA, IEEE Photon. Technol. Lett. 13 (11) (2001) 1233-1235] is used to preplan lightpaths in order to optimally configure an OMAN. To evaluate the performance of the GMGA, we compared it with different preplanning optimization algorithms. Simulation results revealed that the GMGA very efficiently solved the problem.
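
    OVSF codes form a binary tree in which each code spawns two mutually orthogonal children: the code concatenated with itself, and the code concatenated with its negation. A minimal generator of one tree layer (illustrative only, not the paper's assignment algorithm):

```python
def ovsf_layer(depth):
    """Generate all OVSF codes of spreading factor 2**depth recursively."""
    codes = [[1]]
    for _ in range(depth):
        codes = [child for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return codes
```

    In an assignment scheme like the one described, a code can be allocated only if none of its ancestors or descendants in the tree is in use, which is what lets different spreading factors (and hence data rates) coexist orthogonally.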

  7. Smooth Upgrade of Existing Passive Optical Networks With Spectral-Shaping Line-Coding Service Overlay

    NASA Astrophysics Data System (ADS)

    Hsueh, Yu-Li; Rogge, Matthew S.; Shaw, Wei-Tao; Kim, Jaedon; Yamamoto, Shu; Kazovsky, Leonid G.

    2005-09-01

    A simple and cost-effective upgrade of existing passive optical networks (PONs) is proposed, which realizes service overlay by novel spectral-shaping line codes. A hierarchical coding procedure allows processing simplicity and achieves desired long-term spectral properties. Different code rates are supported, and the spectral shape can be properly tailored to adapt to different systems. The computation can be simplified by quantization of trigonometric functions. DC balance is achieved by passing the dc residual between processing windows. The proposed line codes tend to introduce bit transitions to avoid long consecutive identical bits and facilitate receiver clock recovery. Experiments demonstrate and compare several different optimized line codes. For a specific tolerable interference level, the optimal line code can easily be determined, which maximizes the data throughput. The service overlay using the line-coding technique leaves existing services and field-deployed fibers untouched but fully functional, providing a very flexible and economic way to upgrade existing PONs.
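
    The DC-balance idea of passing a dc residual between processing windows can be caricatured by per-window polarity inversion driven by a running disparity; the paper's hierarchical spectral-shaping procedure is far more elaborate, so this is only a generic sketch with invented names:

```python
def dc_balance(blocks):
    """Per window, send the block or its complement, whichever drives the
    running disparity (accumulated +/-1 sum) back toward zero; a prepended
    polarity bit lets the receiver undo the inversion."""
    disparity, out = 0, []
    for block in blocks:
        w = sum(1 if b else -1 for b in block)    # this window's disparity
        invert = (disparity * w) > 0              # inversion reduces |disparity|
        coded = [1 - b for b in block] if invert else list(block)
        out.append([1 if invert else 0] + coded)
        disparity += -w if invert else w
    return out, disparity
```

    The polarity decisions also break up long runs of identical bits, loosely mirroring the bit-transition property the line codes above are designed for.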

  8. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, by referring to bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.

  9. Aeroelastic Tailoring Study of N+2 Low Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

    The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. The second optimization run was based on the big-bang big-crunch algorithm. Six composite ply angles for the second and fourth composite layers were selected as discrete design variables for the second optimization run. Composite ply angle changes could not improve the weight configuration of the N+2 LSCT aircraft; however, this second optimization run created more tolerance in the active and near-active strength constraint values for future weight optimization runs.
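
    The big-bang big-crunch algorithm alternates a random "big bang" scattering of candidates around a fitness-weighted centre of mass with a "big crunch" contraction, shrinking the scatter radius each cycle. A hedged continuous-variable sketch (the run described above used discrete ply angles; all parameters and names here are invented):

```python
import random

def big_bang_big_crunch(f, bounds, pop=30, iters=60, seed=1):
    """BB-BC minimization of a non-negative objective f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    centre = [rng.uniform(lo, hi) for lo, hi in bounds]
    best = list(centre)
    for k in range(1, iters + 1):
        # Big bang: scatter candidates around the centre; radius shrinks as 1/k.
        cands = []
        for _ in range(pop):
            x = [min(max(c + rng.gauss(0, 1) * (hi - lo) / k, lo), hi)
                 for c, (lo, hi) in zip(centre, bounds)]
            cands.append(x)
        # Big crunch: fitness-weighted centre of mass (weights 1/f, f >= 0).
        weights = [1.0 / (1e-12 + f(x)) for x in cands]
        total = sum(weights)
        centre = [sum(w * x[i] for w, x in zip(weights, cands)) / total
                  for i in range(dim)]
        cands.append(centre)
        leader = min(cands, key=f)
        if f(leader) < f(best):            # keep the best point ever seen
            best = list(leader)
    return best
```

    For the discrete ply-angle case, the scatter step would instead sample from the allowed angle set near the centre of mass.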

  10. Optimizing the use of a sensor resource for opponent polarization coding

    PubMed Central

    Heras, Francisco J.H.

    2017-01-01

    Flies use specialized photoreceptors R7 and R8 in the dorsal rim area (DRA) to detect skylight polarization. R7 and R8 form a tiered waveguide (central rhabdomere pair, CRP) with R7 on top, filtering light delivered to R8. We examine how the division of a given resource, CRP length, between R7 and R8 affects their ability to code polarization angle. We model optical absorption to show how the length fractions allotted to R7 and R8 determine the rates at which they transduce photons, and correct these rates for transduction unit saturation. The rates give polarization signal and photon noise in R7, and in R8. Their signals are combined in an opponent unit, intrinsic noise added, and the unit's output analysed to extract two measures of coding ability, number of discriminable polarization angles and mutual information. A very long R7 maximizes opponent signal amplitude, but codes inefficiently due to photon noise in the very short R8. Discriminability and mutual information are optimized by maximizing signal to noise ratio, SNR. At lower light levels approximately equal lengths of R7 and R8 are optimal because photon noise dominates. At higher light levels intrinsic noise comes to dominate and a shorter R8 is optimum. The optimum R8 length fraction falls to one-third. This intensity-dependent range of optimal length fractions corresponds to the range observed in different fly species and is not affected by transduction unit saturation. We conclude that a limited resource, rhabdom length, can be divided between two polarization sensors, R7 and R8, to optimize opponent coding. We also find that coding ability increases sub-linearly with total rhabdom length, according to the law of diminishing returns. Consequently, the specialized shorter central rhabdom in the DRA codes polarization twice as efficiently with respect to rhabdom length as the longer rhabdom used in the rest of the eye. PMID:28316880

  11. System, methods and apparatus for program optimization for multi-threaded processor architectures

    DOEpatents

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.

  12. Optimizing Excited-State Electronic-Structure Codes for Intel Knights Landing: A Case Study on the BerkeleyGW Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek

    2016-10-06

    We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.

  13. Optimization of lightweight structure and supporting bipod flexure for a space mirror.

    PubMed

    Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming

    2016-12-20

    This article presents an optimization process for integrated optomechanical design, comprising computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; the deformed surface nodal data were then transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two design examples are illustrated in this research: an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm that is used in a spaceborne telescope, and an optimum bipod flexure design that supports the optimum lightweight primary mirror. Optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.

  14. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    Rate Compatible Punctured Convolutional (RCPC) codes for channel...vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, “ Rate - compatible punctured convolutional codes (RCPC codes ) and their applications,” IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate - Compatible Puctured Convolutional (RCPC) channel coding

  15. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted by this approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
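
    A minimal particle swarm optimizer of the kind coupled to such a cavity model might look like the following; this is an illustrative textbook PSO with invented parameter values, not the authors' homemade code:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each velocity blends inertia, a pull toward the particle's
    own best position, and a pull toward the swarm-wide best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]                 # per-particle best positions
    gbest = list(min(pbest, key=f))                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = list(pos[i])
                if f(pos[i]) < f(gbest):
                    gbest = list(pos[i])
    return gbest
```

    In the tank-design setting, f would wrap the simplified SCL cavity model and return a figure of merit for the candidate geometry.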

  16. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    Framework for analysis and optimization problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN, and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  17. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information after collecting code bits that marginally surpass the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that the proposed relay selection scheme outperforms other strategies.
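
    Rateless (fountain) codes can be sketched with LT-style encoding and a peeling decoder: the destination collects coded symbols until slightly more than the source entropy has arrived, then repeatedly resolves degree-1 symbols and substitutes the recovered values into the rest. A simplified sketch, not the paper's scheme (degrees are drawn uniformly here for brevity; real LT codes use a soliton degree distribution):

```python
import random

def lt_encode(data, n_symbols, seed=0):
    """LT-style encoding: each output symbol XORs a random subset of sources."""
    rng = random.Random(seed)
    k = len(data)
    out = []
    for _ in range(n_symbols):
        deg = rng.randint(1, 3)                    # simplistic degree choice
        idx = frozenset(rng.sample(range(k), deg))
        val = 0
        for i in idx:
            val ^= data[i]
        out.append((idx, val))
    return out

def lt_decode(symbols, k):
    """Peeling decoder: resolve degree-1 symbols, substitute, iterate."""
    work = [[set(idx), val] for idx, val in symbols]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for sym in work:
            idx = sym[0]
            for i in [j for j in idx if j in known]:
                idx.discard(i)                     # substitute recovered value
                sym[1] ^= known[i]
            if len(idx) == 1:
                (i,) = idx
                if i not in known:
                    known[i] = sym[1]
                    progress = True
                idx.clear()
    return [known.get(i) for i in range(k)]        # None = not yet recovered
```

    The "collect until marginally past the entropy" property shows up here as the small overhead of coded symbols needed before the peeling process completes.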

  18. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  19. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
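A toy version of the search described here can be written compactly. The sketch below runs A* over bit prefixes with the admissible heuristic "best achievable per-bit cost on the undecided positions"; for brevity it checks prefix feasibility against an explicit (tiny) codebook rather than the code's algebraic structure, and it omits the reliability ordering of symbols used in the real implementation:

```python
import heapq

def a_star_decode(r, codewords):
    """Maximum-likelihood soft-decision decoding via A* over bit prefixes.

    r: received real-valued sequence (BPSK mapping: bit 0 -> +1, bit 1 -> -1).
    codewords: explicit codebook of a small illustrative block code.
    Minimizing squared Euclidean distance is ML on an AWGN channel.
    """
    n = len(r)
    cost = lambda i, b: (r[i] - (1 - 2 * b)) ** 2
    # Admissible heuristic: best achievable per-bit cost on positions i..n-1.
    h = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        h[i] = h[i + 1] + min(cost(i, 0), cost(i, 1))
    heap = [(h[0], ())]                     # entries are (g + h, bit prefix)
    while heap:
        f, prefix = heapq.heappop(heap)
        i = len(prefix)
        if i == n:                          # first full word popped is optimal
            return list(prefix)
        for b in (0, 1):
            new = prefix + (b,)
            # Expand only prefixes some codeword can still complete.
            if any(cw[:i + 1] == list(new) for cw in codewords):
                g = sum(cost(j, new[j]) for j in range(i + 1))
                heapq.heappush(heap, (g + h[i + 1], new))
    return None
```

On an AWGN channel this returns the same codeword as exhaustive minimum-distance search, but for longer codes it typically expands far fewer prefixes, which is the efficiency the article exploits.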

  20. Data Sciences Summer Institute Topology Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Seth

    DSSI_TOPOPT is a 2D topology optimization code that designs stiff structures made of a single linear elastic material and void space. The code generates a finite element mesh of a rectangular design domain on which the user specifies displacement and load boundary conditions. The code iteratively designs a structure that minimizes the compliance (maximizes the stiffness) of the structure under the given loading, subject to an upper bound on the amount of material used. Depending on user options, the code can evaluate the performance of a user-designed structure, or create a design from scratch. Output includes the finite element mesh, design, and visualizations of the design.

  1. Cooperative optimization and their application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm is a class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm once Eb/No exceeds 1.8 dB.

  2. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  3. Fast H.264/AVC FRExt intra coding using belief propagation.

    PubMed

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity-reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
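The mode-reduction step lends itself to a one-function sketch: given per-mode probability estimates (however obtained; the belief-propagation estimation itself is not reproduced here), keep the smallest high-probability set and hand only those modes to the rate-distortion search. The mode names and the 90% mass threshold below are illustrative assumptions:

```python
def reduced_mode_set(mode_probs, mass=0.9):
    """Keep the most probable prediction modes covering `mass` of probability.

    mode_probs: dict mapping mode name -> estimated probability.
    Returns the modes to test in RD optimization, always at least one,
    in decreasing order of probability.
    """
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, acc = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        acc += p
        if acc >= mass:      # enough probability mass covered; stop early
            break
    return kept
```

Raising `mass` trades coding time for compression performance, which is the complexity-control knob the abstract refers to.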

  4. Optimizing study design for interobserver reliability: IUGA-ICS classification of complications of prostheses and graft insertion.

    PubMed

    Haylen, Bernard T; Lee, Joseph; Maher, Chris; Deprest, Jan; Freeman, Robert

    2014-06-01

    Results of interobserver reliability studies for the International Urogynecological Association-International Continence Society (IUGA-ICS) Complication Classification coding can be greatly influenced by study design factors such as participant instruction, motivation, and test-question clarity. We attempted to optimize these factors. After a 15-min instructional lecture with eight clinical case examples (including images) and with classification/coding charts available, those clinicians attending an IUGA Surgical Complications workshop were presented with eight similar-style test cases over 10 min and asked to code them using the Category, Time and Site classification. Answers were compared to predetermined correct codes obtained by five instigators of the IUGA-ICS prostheses and grafts complications classification. Prelecture and postquiz participant confidence levels using a five-step Likert scale were assessed. Complete sets of answers to the questions (24 codings) were provided by 34 respondents, only three of whom reported prior use of the charts. Average score [n (%)] out of eight, as well as median score (range) for each coding category were: (i) Category: 7.3 (91 %); 7 (4-8); (ii) Time: 7.8 (98 %); 7 (6-8); (iii) Site: 7.2 (90 %); 7 (5-8). Overall, the equivalent calculations (out of 24) were 22.3 (93 %) and 22 (18-24). Mean prelecture confidence was 1.37 (out of 5), rising to 3.85 postquiz. Urogynecologists had the highest correlation with correct coding, followed closely by fellows and general gynecologists. Optimizing training and study design can lead to excellent results for interobserver reliability of the IUGA-ICS Complication Classification coding, with increased participant confidence in complication-coding ability.

  5. Mg+ and other metallic emissions observed in the thermosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, J.A.; Viereck, R.A.; Murad, E.

    1994-11-17

    Limb observations of UV dayglow emissions from 80 to 300 km tangent heights were made in December 1992, using the GLO instrument, which flew on STS-53 as a Hitchhiker-G experiment. STS-53 was at 330 km altitude and had an orbit inclination of 57 deg. The orbit placed the shuttle near the terminator for the entire mission, resulting in a unique set of observations. The GLO instrument consisted of 12 imagers and 9 spectrographs on an Az/El gimbal system. The data were obtained over 6 days of the mission. Emissions from Mg+ and Ca+ were observed, as were emissions from the neutral metallic species Mg and Na. The ultimate source of the metals is ablation of meteors; however, the spatial distribution of the emissions is controlled by upper mesospheric and thermospheric winds and, in the case of the ions, by the electromagnetic fields of the ionosphere. The observed Mg+ emission was the brightest of the metal emissions and was observed near the poles and around the geomagnetic equator near sunset. The polar emissions were short-lived and intense, indicative of auroral activity. The equatorial emissions were more continuous, with several luminous patches propagating poleward over the period of several orbits. The instrumentation will be described, as will spatial and temporal variations of the metal emissions with emphasis on the metal ions. These observations will be compared to previous observations of thermospheric metallic species.

  7. Statewide summary for Texas: Chapter B in Emergent wetlands status and trends in the northern Gulf of Mexico: 1950-2010

    USGS Publications Warehouse

    Handley, Lawrence R.; Spear, Kathryn A.; Gibeaut, Jim; Thatcher, Cindy A.

    2014-01-01

    The Texas coast (Figure 1) consists of complex and diverse ecosystems with a varying precipitation gradient. The northernmost portion of the coast, extending from Sabine Lake to Galveston Bay, is composed of salt, brackish, intermediate, and fresh marshes, with humid flatwoods inland (Moulton and others, 1997). Coastal prairies are found across the entire coast. From Galveston Bay to Corpus Christi Bay, rivers feed into large bays and estuarine ecosystems. Barrier islands and peninsulas exist along the coast from Galveston Bay to the Mexican border. The southernmost portion of the coast is composed of wind-tidal flats and the hypersaline Laguna Madre. The Laguna Madre lacks rivers and has little rainfall and restricted inlet access to the Gulf. Semiarid rangeland and irrigated agricultural land can be found inland.Approximately 6 million people live in Texas’ coastal counties (U.S. Census Bureau, 2010; Texas GLO, 2013). Seventy percent of the state’s industry and commerce occurs within 160.9 km (100 miles) of the coast (Moulton and others, 1997). Texas ports support 1.4 million jobs and generate $6.5 billion in tax revenues (Texas GLO, 2013). Chemical and petroleum production and marine commerce thrive on the Texas coast. Agriculture, grazing, commercial and recreational fishing, and recreation and tourism are strong industries along the coast and in adjacent areas; oil and gas production, agriculture, and tourism are the state’s three largest industries.

  8. Characterization of metabolic network of oxalic acid biosynthesis through RNA seq data analysis of developing spikes of finger millet (Eleusine coracana): Deciphering the role of key genes involved in oxalate formation in relation to grain calcium accumulation.

    PubMed

    Akbar, Naved; Gupta, Supriya; Tiwari, Apoorv; Singh, K P; Kumar, Anil

    2018-04-05

    In the present study, we identified seven major genes of the oxalic acid biosynthesis pathway (SGAT, GGAT, ICL, GLO, MHAR, APO and OXO) from the developing-spike transcriptome of finger millet using rice as a reference. Sequence alignment of the identified genes showed high similarity with their respective homologs in rice except for OXO and GLO. Transcript abundance (FPKM) reflects higher accumulation of the identified genes in GP-1 (low-calcium genotype) as compared to GP-45 (high-calcium genotype), which was further confirmed by qRT-PCR analysis, indicating differential oxalate formation in the two genotypes. Determination of oxalic acid and tartaric acid content in developing spikes showed higher oxalic acid content in GP-1, whereas tartaric acid content was higher in GP-45. The higher calcium content and lower oxalate accumulation in GP-45 may be due to the diversion of more ascorbic acid into tartaric acid, which may correspond to less formation of calcium oxalate. Our results suggest that more than one pathway for oxalic acid biosynthesis might be present in finger millet, with probable predominance of the ascorbate-tartrate pathway rather than glyoxylate-oxalate conversion. Thus, finger millet can be used as an excellent model system for understanding the more specific role of nutrient-antinutrient interactions, as evident from the present study. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Catalysts for cleaner combustion of coal, wood and briquettes sulfur dioxide reduction options for low emission sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, P.V.

    1995-12-31

    Coal-fired, low-emission sources are a major factor in the air quality problems facing eastern European cities. These sources include: stoker-fired boilers which feed district heating systems and also meet local industrial steam demand, hand-fired boilers which provide heat for one building or a small group of buildings, and masonry tile stoves which heat individual rooms. Global Environmental Systems is marketing, through Global Environmental Systems of Polane, Inc., catalysts to improve the combustion of coal, wood or fuel oils in these combustion systems. The PCCL-II combustion catalyst promotes more complete combustion; reduces or eliminates slag formation, soot, corrosion and some air pollution emissions; and is especially effective on high-sulfur, high-vanadium residual oils. Glo-Klen is a semi-dry powder, continuous-acting catalyst that is injected directly into the furnace of boilers by operating personnel. It is a multi-purpose catalyst: a furnace combustion catalyst that saves fuel by increasing combustion efficiency, a cleaner of heat transfer surfaces that saves additional fuel by increasing the absorption of heat, a corrosion-inhibiting catalyst that reduces costly corrosion damage, and an air-pollution-reducing catalyst that reduces air pollution stack emissions. Sulfur dioxide emissions from coal- or oil-fired boilers of the hand-fired and stoker design and larger can be controlled by the introduction of the Glo-Klen combustion catalyst together with either hydrated lime or pulverized limestone.

  10. Exposure of LDEF materials to atomic oxygen: Results of EOIM 3

    NASA Technical Reports Server (NTRS)

    Jaggers, C. H.; Meshishnek, M. J.

    1995-01-01

    The third Effects of Oxygen Atom Interaction with Materials (EOIM 3) experiment flew on STS-46 from July 31 to August 8, 1992. The EOIM-3 sample tray was exposed to the low-earth orbit space environment for 58.55 hours at an altitude of 124 nautical miles resulting in a calculated total atomic oxygen (AO) fluence of 1.99 x 10(exp 20) atoms/sq cm. Five samples previously flown on the Long Duration Exposure Facility (LDEF) Experiment M0003 were included on the Aerospace EOIM 3 experimental tray: (1) Chemglaze A276 white thermal control paint from the LDEF trailing edge (TE); (2) S13GLO white thermal control paint from the LDEF TE; (3) S13GLO from the LDEF leading edge (LE) with a visible contamination layer from the LDEF mission; (4) Z306 black thermal control paint from the LDEF TE with a contamination layer from the LDEF mission; and (5) anodized aluminum from the LDEF TE with a contamination layer from the LDEF mission. The purpose of this experiment was twofold: (1) investigate the response of trailing edge LDEF materials to atomic oxygen exposure, thereby simulating LDEF leading edge phenomena; (2) investigate the response of contaminated LDEF samples to atomic oxygen in attempts to understand LDEF contamination-atomic oxygen interactions. This paper describes the response of these materials to atomic oxygen exposure, and compares the results of the EOIM 3 experiment to the LDEF mission and to ground-based atomic oxygen exposure studies.

  11. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. 
The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  12. ODECS -- A computer code for the optimal design of S.I. engine control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsie, I.; Pianese, C.; Rizzo, G.

    1996-09-01

    The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark-ignition engine control strategies is presented. The code has been developed starting from the authors' activity in this field, drawing on original contributions on engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution and analysis of different computations performed with 4 independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by non-deterministic behavior of sensors and actuators and their influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.

  13. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    When the wavefront coding technique is applied to an infrared imaging system as a means of athermalization, the design of the phase plate is the key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated with a genetic algorithm (GA), and the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software is used for high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask, while with the cubic phase mask it can be increased 10 times; the MTF consistency metric decreases markedly, and the optimized systems operate over the range -40° to 60°. Results show that this optimization method makes it more convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties, thanks to the externally compiled function and DDE; this is of particular significance for the optimization of unconventional optical systems.
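As a sketch of the GA-driven search loop described above, with a plain Python callable standing in for the ZEMAX/DDE evaluation of MTF consistency (the actual DDE link and merit function are not reproduced here, and all parameter choices below are illustrative):

```python
import random

def ga_minimize(objective, bounds, pop=30, gens=60, mut=0.2, seed=1):
    """Toy genetic algorithm for tuning phase-mask parameters.

    objective: callable scoring a parameter vector (lower is better);
    in the paper's setup this would invoke the external MTF evaluation.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=objective)
        popn = scored[:pop // 2]                          # truncation selection
        while len(popn) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            for d in range(dim):
                if rng.random() < mut:                    # uniform mutation
                    lo, hi = bounds[d]
                    child[d] = rng.uniform(lo, hi)
            popn.append(child)
    return min(popn, key=objective)
```

Because the elite half survives each generation unmutated, the best individual never regresses; mutation keeps the search from collapsing prematurely.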

  14. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecast (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture, and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 2.4x.

  15. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution; the WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core; it advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture, and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.

  16. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
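The open-loop coefficient estimation that such predictors start from can be sketched with the classical autocorrelation method; the demo below recovers the coefficients of a synthetic AR(2) signal. This illustrates the textbook linear-prediction step only, not the paper's joint coefficient-and-quantizer optimization:

```python
import random

def lpc_coeffs(x, order):
    """Open-loop linear predictive coefficients via the autocorrelation method.

    Solves the Yule-Walker normal equations R a = r with plain Gaussian
    elimination (fine for the small orders used in this sketch).
    """
    n = len(x)
    r = [sum(x[t] * x[t - k] for t in range(k, n)) for k in range(order + 1)]
    # Augmented Toeplitz system: row i is [r[|i-j|] ... | r[i+1]].
    A = [[r[abs(i - j)] for j in range(order)] + [r[i + 1]] for i in range(order)]
    for col in range(order):                       # forward elimination w/ pivoting
        piv = max(range(col, order), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        for i in range(col + 1, order):
            f = A[i][col] / A[col][col]
            for j in range(col, order + 1):
                A[i][j] -= f * A[col][j]
    a = [0.0] * order
    for i in range(order - 1, -1, -1):             # back substitution
        a[i] = (A[i][order] - sum(A[i][j] * a[j] for j in range(i + 1, order))) / A[i][i]
    return a  # predictor: x[t] ~ a[0]*x[t-1] + a[1]*x[t-2] + ...

# Demo: estimate the coefficients of a synthetic AR(2) process
# x[t] = 0.75*x[t-1] - 0.5*x[t-2] + white noise.
rng = random.Random(0)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + rng.gauss(0.0, 1.0))
a = lpc_coeffs(x, 2)
```

In production LPC the Toeplitz structure is exploited with Levinson-Durbin recursion instead of general elimination; the result is the same.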

  17. Circular codes revisited: a statistical approach.

    PubMed

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
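The per-frame coverage statistic behind the first experiment is easy to state in code. The sketch below computes, for a chosen reading frame, the fraction of trinucleotides belonging to a given code; the three-word code in the usage example is a toy set, not one of the 216 self-complementary C(3) maximal codes:

```python
def frame_coverage(seq, code, frame=0):
    """Fraction of trinucleotides in a given reading frame that belong to `code`.

    seq: a DNA string; code: a set of trinucleotide words; frame: 0, 1 or 2.
    Trailing bases that do not fill a full codon are ignored.
    """
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    if not codons:
        return 0.0
    return sum(c in code for c in codons) / len(codons)
```

Comparing `frame_coverage` across the three frames for a code with the circular-code property is what lets the reading frame be retrieved from coverage alone.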

  18. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA inhouse codes, were originally developed for the optimization of aerospace structures. The adapted code, which was called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent of the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  19. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that is EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c

  20. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing constellation figure of merit (CFM). The proposed non-uniform polarization multiplexed signaling 9-QAM scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
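The Huffman construction behind the non-uniform signaling can be sketched in a few lines. This is a generic Huffman coder, not the authors' constellation design; the symbol names and the dyadic example distribution in the check below are illustrative only.

```python
import heapq

def huffman_code(probs):
    """Build a binary prefix (Huffman) code for a symbol->probability map:
    repeatedly merge the two least probable subtrees, prefixing '0'/'1'
    to the codewords on each side."""
    # (probability, tie-break id, partial codebook) triples on a min-heap
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]
```

For a dyadic distribution the resulting codeword lengths equal the ideal -log2(p), which is what makes Huffman-shaped non-uniform transmission probabilities attainable with a prefix code.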

  1. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  2. Evaluating and minimizing noise impact due to aircraft flyover

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Cook, G.

    1979-01-01

    Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.
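The steepest-descent step used for the trajectory optimization can be sketched generically. This is a fixed-step-length toy, not the report's code; the quadratic cost in the check below is a stand-in for the annoyance measure.

```python
def steepest_descent(grad, x0, lr=0.1, iters=200):
    """Fixed-step steepest descent: repeatedly move against the gradient."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        # step a fixed fraction `lr` opposite the gradient direction
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

In practice trajectory optimizers add a line search or step-size schedule; the fixed step here keeps the sketch minimal.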

  3. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

End-to-end matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding Binary Convolution Code or LDPC Packet Length 0 - 2^16-1, bytes Coding Rate 1/2, 2/3, 3/4, 5/6 MIMO Channel Training Length 0 - 4, symbols

  4. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

In this work a neutron spectrum unfolding code, based on artificial intelligence technology, is presented. The code, called ''Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres'' (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are to use an embedded artificial neural network architecture optimized with the ''Robust design of artificial neural networks methodology'' and to use two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, knowledge stored at synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  5. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

In this work a neutron spectrum unfolding code, based on artificial intelligence technology, is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are to use an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks methodology" and to use two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, knowledge stored at synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  6. Code Optimization Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAGEE,GLEN I.

Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.

  7. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) have been taken as the fitness function, respectively, so that three placement designs were produced. The decimal two-dimension array coding method, instead of the binary coding method, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear via the crossover procedure. A computational simulation of a 12-bay plane truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The obtained optimal sensor placements using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Further, the comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantage of the different fitness functions. The results showed that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods can all provide reliable results and identify the vibration characteristics of the 12-bay plane truss model accurately.
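The decimal coding and forced-mutation ideas can be sketched as a tiny elitist GA: each chromosome is a list of distinct node indices rather than an n-bit mask, and duplicated genes produced by crossover are "force-mutated" to fresh nodes. The fitness function, population sizes, and function name below are toy stand-ins for the paper's MSE/MAC criteria, not its implementation.

```python
import random

def ga_sensor_placement(fitness, n_nodes, n_sensors, pop=30, gens=60, seed=1):
    """Elitist GA over sensor placements coded as distinct node indices."""
    rng = random.Random(seed)

    def repair(ch):
        # forced mutation: replace duplicated genes with fresh random nodes
        seen, out = set(), []
        for g in ch:
            while g in seen:
                g = rng.randrange(n_nodes)
            seen.add(g)
            out.append(g)
        return out

    popl = [rng.sample(range(n_nodes), n_sensors) for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=fitness, reverse=True)     # keep the fitter half (elitism)
        parents = popl[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_sensors)    # single-point crossover
            children.append(repair(a[:cut] + b[cut:]))
        popl = parents + children
    return max(popl, key=fitness)
```

The repair step is what the decimal coding buys: duplicates are easy to detect and replace gene-by-gene, which is awkward with a binary mask encoding.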

  8. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    PubMed

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

To make a shield for neutrons and gamma rays compact and lightweight, a method combining the structure and the components together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were made to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of limited field data; diverse applications can be modeled; models allow cost and performance comparisons; simulations generate insights into cost structures. The models which are discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  10. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  11. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  12. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes has used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
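The factorization-based Newton step can be sketched with a simple safeguard: if the Hessian is not positive definite, a multiple of the identity is added until Cholesky succeeds, then the step solves (H + τI)p = -g via the triangular factors. This diagonal-shift loop is a simplified stand-in for a true modified Cholesky factorization and is not FOCUS's implementation.

```python
import numpy as np

def modified_newton_step(grad, hess, tau0=1e-3):
    """One modified-Newton step: damp the Hessian until it factors,
    then solve (H + tau*I) p = -g using the Cholesky factors."""
    H = np.asarray(hess, dtype=float)
    g = np.asarray(grad, dtype=float)
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            break
        except np.linalg.LinAlgError:
            # not positive definite: grow the diagonal shift and retry
            tau = max(2.0 * tau, tau0)
    y = np.linalg.solve(L, -g)       # forward substitution: L y = -g
    return np.linalg.solve(L.T, y)   # back substitution:  L^T p = y
```

On a convex quadratic the shift stays zero and the step lands exactly on the minimizer; on an indefinite Hessian the loop guarantees a well-defined descent system.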

  13. Water cycle algorithm: A detailed standard code

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.

  14. Design of 28 GHz, 200 kW Gyrotron for ECRH Applications

    NASA Astrophysics Data System (ADS)

    Yadav, Vivek; Singh, Udaybir; Kumar, Nitin; Kumar, Anil; Deorani, S. C.; Sinha, A. K.

    2013-01-01

This paper presents the design of a 28 GHz, 200 kW gyrotron for the Indian TOKAMAK system. The paper reports the designs of the interaction cavity, the magnetron injection gun and the RF window. The EGUN code is used for the optimization of the electron gun parameters. The TE03 mode is selected as the operating mode by using the in-house developed code GCOMS. The simulation and optimization of the cavity parameters are carried out using the three-dimensional (3-D) particle-in-cell electromagnetic simulation code MAGIC. An output power of more than 250 kW is achieved.

  15. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  16. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly its memory use, disk contention, and CPU use, to discover bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks included restructuring the existing code to interact with others more effectively. An error reporting code was added to help detect and remove bugs in the NELS. Report writing tools were recommended to integrate with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.

  17. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  18. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  19. CXCL14 Blockade of CXCL12/CXCR4 Signaling in Prostate Cancer Bone Metastasis

    DTIC Science & Technology

    2017-10-01

the CXCR4 gene was deleted with CRISPR/Cas9 gene editing (KO). KO cells with re-expressed CXCR4 (Add-Back) were also generated. The three cell...Nano-Glo Live Cell Assay, Promega) diluted into media or PBS. CRISPR/Cas9 deletion of CXCR4 from SUM159 cells CXCR4 was knocked out in SUM-159...cells by CRISPR/Cas9 gene editing using the pGuide-it CRISPR/Cas9 system from Takara Bio USA (Mountain View, CA), expressing Cas9, a fluorescent protein

  20. Analysis of Space Shuttle Primary Reaction-Control Engine-Exhaust Transients

    DTIC Science & Technology

    2008-10-01

sensitivity in the spectral range of 0.4 to 0.9 μm. The sensor gain was set to limit the size of the spot attributable to saturation by the solar...setting of the LAAT sensor. Table 1 lists the pertinent parameters for 22 attitude-control burns for which quality (30 frames per second) video footage was...intensity evolution of a narrow pulse of 1-μm-diam droplets flying from the GLO sensor at the transient 2 representative speed of 1.6 km·s⁻¹. The

  1. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating this discrepancy can improve the efficiency of optimizers.

  2. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating this discrepancy can improve the efficiency of optimizers.

  3. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important features in designing a TWT is overall efficiency. Yet, overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDC's). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly, until it freezes into a strong, minimum-energy crystalline structure. This minimum energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W, helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes, a large signal analysis code and an electron trajectory code. The large signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The preceding figure shows the geometric and electrical configuration of an optimized collector with an efficiency of 93.8 percent.
The results show the improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
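The annealing loop described above can be sketched generically: accept every downhill move, and accept uphill moves with probability exp(-Δ/T) under a geometric cooling schedule so the search can escape local minima. The cost function, neighbor move, and parameters below are illustrative; this is not the MDC design code.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: Metropolis acceptance with a
    geometrically cooled temperature; tracks the best state seen."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # always accept improvements; accept worsenings with prob exp(-d/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

For a real collector design the state would be the electrode geometry and voltages and the cost a trajectory-code efficiency figure; the acceptance rule is unchanged.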

  4. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.

  5. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing loss, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
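
The core iteration of a discrete proximal linearized method can be sketched as follows: take a gradient step on the smooth loss, then apply the proximal map of the {-1,+1} indicator, which reduces to the sign function. The toy instance below (a linear measurement model invented for illustration, not the paper's hashing losses) shows only this structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover a binary code b* in {-1,+1}^n from linear measurements
# y = A b*. A and b* are synthetic stand-ins for a real hashing loss.
n = 16
A = rng.normal(size=(40, n))
b_true = rng.choice([-1.0, 1.0], size=n)
y = A @ b_true

b = rng.choice([-1.0, 1.0], size=n)            # random initial binary code
step = 0.5 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz const. of the gradient
for _ in range(100):
    grad = 2 * A.T @ (A @ b - y)               # gradient of the smooth loss ||Ab - y||^2
    z = b - step * grad                        # proximal-linearized (gradient) step
    b = np.where(z >= 0, 1.0, -1.0)            # prox of the {-1,+1} indicator: sign(z)
```

Each iteration stays exactly on the discrete constraint set, which is the point of handling the discrete variables directly rather than relaxing them.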

  6. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  7. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  8. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8 – 10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10 – 15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.

  9. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it is advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. The DLLH directly reconstructs the manifold structure in the Hamming space, which learns optimal hash codes to maintain the local linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.

  10. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  11. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
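
As a small illustration of the kind of Viterbi decoding referred to above, here is a hard-decision decoder for the classic rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal. This specific code is chosen only for brevity; it is not claimed to be one of the codes examined in the study.

```python
TAPS = (0b111, 0b101)                    # generator polynomials (7, 5) octal

def encode(bits, taps=TAPS):
    """Rate-1/2 convolutional encoder; state holds the two previous input bits."""
    s, out = 0, []
    for b in bits:
        reg = (b << 2) | s               # shift register: [current bit, state]
        out += [bin(reg & t).count("1") % 2 for t in taps]
        s = reg >> 1                     # next state
    return out

def viterbi(received, taps=TAPS):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    dist = [0, INF, INF, INF]            # path metric per state, start in state 0
    paths = [[], None, None, None]
    for i in range(len(received) // 2):
        r = received[2 * i: 2 * i + 2]
        new_dist, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if dist[s] == INF:
                continue
            for b in (0, 1):             # extend each survivor by one input bit
                reg = (b << 2) | s
                sym = [bin(reg & t).count("1") % 2 for t in taps]
                ns = reg >> 1
                d = dist[s] + sum(x != y for x, y in zip(sym, r))
                if d < new_dist[ns]:
                    new_dist[ns], new_paths[ns] = d, paths[s] + [b]
        dist, paths = new_dist, new_paths
    return paths[min(range(4), key=lambda s: dist[s])]
```

Flipping a single transmitted bit leaves the true path as the unique minimum-distance survivor, so the decoder recovers the original input sequence.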

  12. Numerical optimization of three-dimensional coils for NSTX-U

    NASA Astrophysics Data System (ADS)

    Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.

    2015-10-01

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n  =  1 character can drive a large core torque. It is also shown that fields with n  =  3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  13. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  14. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
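
The basic XOR retransmission idea above can be shown in a few lines. In this toy setup (packet contents and loss pattern invented for illustration), two receivers miss different packets, a single XOR-ed repair packet is retransmitted, and each receiver cancels the packets it already holds to recover its own loss:

```python
def xor_packets(packets):
    """XOR equal-length packets byte by byte."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

# Sender transmitted three packets; receiver 1 lost p[0], receiver 2 lost p[2].
p = [b"AAAA", b"BBBB", b"CCCC"]
repair = xor_packets([p[0], p[2]])        # one retransmission serves both receivers

# Receiver 1 holds p[1], p[2]: cancel p[2] from the repair packet.
recovered_r1 = xor_packets([repair, p[2]])
# Receiver 2 holds p[0], p[1]: cancel p[0].
recovered_r2 = xor_packets([repair, p[0]])
```

Choosing which lost packets to combine so that every receiver can cancel all but one of them is exactly the selection problem the paper shows to be NP-complete in general.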

  15. How unrealistic optimism is maintained in the face of reality.

    PubMed

    Sharot, Tali; Korn, Christoph W; Dolan, Raymond J

    2011-10-09

    Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update in right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.

  16. Efficient Transition State Optimization of Periodic Structures through Automated Relaxed Potential Energy Surface Scans.

    PubMed

    Plessow, Philipp N

    2018-02-13

    This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions almost exclusively rely on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
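
A relaxed scan of a constrained coordinate can be illustrated on a toy two-dimensional surface (the potential below is invented for illustration and has no chemical meaning): the coordinate x is constrained at each scan point, the remaining coordinate y is relaxed, and the maximum of the relaxed profile approximates the saddle point.

```python
import numpy as np

# Toy model surface with minima near x = -1 and x = +1 and a saddle at (0, 0).
def energy(x, y):
    return (x**2 - 1) ** 2 + (y - 0.3 * x) ** 2

def relax_y(x, y0=0.0, step=0.1, iters=500):
    """Gradient descent on y with x held fixed (the constraint)."""
    y = y0
    for _ in range(iters):
        grad = 2 * (y - 0.3 * x)
        y -= step * grad
    return y

# Relaxed scan: constrain x on a grid, relax y at each point,
# then take the highest point of the relaxed profile as the TS estimate.
xs = np.linspace(-1.0, 1.0, 41)
profile = [(x, relax_y(x)) for x in xs]
x_ts, y_ts = max(profile, key=lambda p: energy(*p))
```

For this surface the relaxed profile is (x² − 1)², whose maximum between the two minima sits at x = 0, so the scan recovers the saddle at (0, 0).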

  17. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  18. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    Common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to over-fitting and the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve the classification accuracy. In this paper, a novel method named backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as the combination of classification error rate and relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with much fewer channels compared to standard common spatial pattern with whole channels.
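
The encoding of channel subsets as binary vectors can be sketched with a simplified evolutionary loop. This is not the backtracking search optimization algorithm itself, and `mock_error` is an invented stand-in for a real CSP-plus-classifier evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each individual is a 0/1 mask over channels; fitness combines a (mocked)
# classification error with a penalty on the number of selected channels.
N_CH = 22
GOOD = {3, 7, 8, 12}                      # channels assumed to carry signal (toy setup)

def mock_error(mask):
    hits = sum(mask[c] for c in GOOD)
    return 0.5 - 0.1 * hits               # error drops as informative channels are kept

def fitness(mask):
    return mock_error(mask) + 0.01 * mask.sum() / N_CH

pop = rng.integers(0, 2, size=(30, N_CH))
for _ in range(200):
    # Simplified variation step: flip a few random bits of each individual,
    # then keep the trial only if it improves that individual's fitness.
    trial = pop ^ (rng.random(pop.shape) < 0.05)
    keep = np.array([fitness(t) < fitness(p) for t, p in zip(trial, pop)])
    pop[keep] = trial[keep]
best = min(pop, key=fitness)
```

The channel-count penalty plays the role of the "relative number of channels" term in the paper's objective, pushing the search toward small, informative subsets.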

  19. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  20. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  1. Sparse bursts optimize information transmission in a multiplexed neural code.

    PubMed

    Naud, Richard; Sprekeler, Henning

    2018-06-22

    Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets. Copyright © 2018 the Author(s). Published by PNAS.

  2. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
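
The surrogate idea, training a network on precomputed optimum designs so that a new design costs only a forward pass, can be sketched with a toy dataset. The mapping below is synthetic and the network is a generic one-hidden-layer net; the NETS code itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "optimum design" data: one design condition in, one optimum
# design variable out (stand-in for optimizer-generated input/output pairs).
X = np.linspace(0, 1, 50)[:, None]
Y = 0.5 + 0.3 * np.sin(3 * X)

# One hidden layer of 8 tanh units, trained by plain gradient descent on MSE.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def loss():
    return float(np.mean((forward(X)[1] - Y) ** 2))

initial = loss()
lr = 0.05
for _ in range(3000):
    H, P = forward(X)
    dP = 2 * (P - Y) / len(X)            # gradient of the mean squared error
    dH = dP @ W2.T * (1 - H ** 2)        # backprop through tanh
    W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
```

After training, predicting the optimum design variable for a new condition is a single `forward` call, which is the "trivial computational effort" advantage the abstract highlights.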

  3. Size principle and information theory.

    PubMed

    Senn, W; Wyler, K; Clamann, H P; Kleinle, J; Lüscher, H R; Müller, L

    1997-01-01

    The motor units of a skeletal muscle may be recruited according to different strategies. From all possible recruitment strategies nature selected the simplest one: in most actions of vertebrate skeletal muscles the recruitment of its motor units is by increasing size. This so-called size principle permits a high precision in muscle force generation since small muscle forces are produced exclusively by small motor units. Larger motor units are activated only if the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input to that pool. The generated motoneuron code is sent down through the motoneuron axons to the muscle. We establish that an optimization of this motoneuron code with respect to its information content is equivalent to the recruitment of motor units by size. Moreover, maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation.
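
Recruitment by size is easy to state in code. In this sketch (unit forces are illustrative values, not physiological data), motor units are activated smallest-first until the target force is reached, so small forces are produced exclusively by small units:

```python
def recruit_by_size(unit_forces, target):
    """Activate motor units smallest-first until the target force is met."""
    recruited, total = [], 0.0
    for f in sorted(unit_forces):          # size principle: smallest units first
        if total >= target:
            break
        recruited.append(f)
        total += f
    return recruited, total

units = [1, 2, 4, 8, 16, 32]               # roughly geometric spread of unit sizes
sel, force = recruit_by_size(units, target=10)
```

The quantization error of the generated force is bounded by the smallest unrecruited unit, which is why this ordering gives fine gradation at low forces.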

  4. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
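
The split/predict/update structure of the lifting scheme, which the paper extends with disparity compensation and luminance correction, can be shown with a minimal Haar-like integer lifting step that is exactly invertible:

```python
def lift_forward(x):
    """Split into even/odd samples, predict odds from evens, update evens."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update step (preserves mean)
    return approx, detail

def lift_inverse(approx, detail):
    """Undo the update and predict steps, then interleave the samples."""
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

signal = [10, 12, 14, 13, 20, 21, 5, 7]
a, d = lift_forward(signal)
```

Because each lifting step is undone by subtracting the same quantity it added, the transform is invertible even with integer arithmetic, which is what enables the lossless mode mentioned above.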

  5. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.

  6. Forskolin-free cAMP assay for Gi-coupled receptors.

    PubMed

    Gilissen, Julie; Geubelle, Pierre; Dupuis, Nadine; Laschet, Céline; Pirotte, Bernard; Hanson, Julien

    2015-12-01

    G protein-coupled receptors (GPCRs) represent the most successful receptor family for treating human diseases. Many are poorly characterized, with few ligands reported, or remain completely orphan. Therefore, there is a growing need for screening-compatible and sensitive assays. Measurement of intracellular cyclic AMP (cAMP) levels is a validated strategy for measuring GPCR activation. However, agonist ligands for Gi-coupled receptors are difficult to track because inducers such as forskolin (FSK) must be used and are sources of variation and error. We developed a method based on the GloSensor system, a kinetic assay that consists of a luciferase fused with a cAMP-binding domain. As a proof of concept, we selected the succinate receptor 1 (SUCNR1 or GPR91), which could be an attractive drug target. It has never been validated as such because very few ligands have been described. Following analyses of SUCNR1 signaling pathways, we show that the GloSensor system allows real time, FSK-free detection of an agonist effect. This FSK-free agonist signal was confirmed on other Gi-coupled receptors such as CXCR4. In a test screening on SUCNR1, we compared the results obtained with an FSK vs an FSK-free protocol and were able to identify agonists with both methods, but with fewer false positives when measuring the basal levels. In this report, we validate a cAMP-inducer-free method for the detection of Gi-coupled receptor agonists compatible with high-throughput screening. This method will facilitate the study and screening of Gi-coupled receptors for active ligands. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Seasonal forecasting of groundwater levels in natural aquifers in the United Kingdom

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Jackson, Christopher; Pachocka, Magdalena; Brookshaw, Anca; Scaife, Adam

    2014-05-01

    Groundwater aquifers comprise the world's largest freshwater resource and provide resilience to climate extremes which could become more frequent under future climate changes. Prolonged dry conditions can induce groundwater drought, often characterised by significantly low groundwater levels which may persist for months to years. In contrast, lasting wet conditions can result in anomalously high groundwater levels which result in flooding, potentially at large economic cost. Using computational models to produce groundwater level forecasts allows appropriate management strategies to be considered in advance of extreme events. The majority of groundwater level forecasting studies to date use data-based models, which exploit the long response time of groundwater levels to meteorological drivers and make forecasts based only on the current state of the system. Instead, seasonal meteorological forecasts can be used to drive hydrological models and simulate groundwater levels months into the future. Such approaches have not been used in the past due to a lack of skill in these long-range forecast products. However systems such as the latest version of the Met Office Global Seasonal Forecast System (GloSea5) are now showing increased skill up to a 3-month lead time. We demonstrate the first groundwater level ensemble forecasting system using a multi-member ensemble of hindcasts from GloSea5 between 1996 and 2009 to force 21 simple lumped conceptual groundwater models covering most of the UK's major aquifers. We present the results from this hindcasting study and demonstrate that the system can be used to forecast groundwater levels with some skill up to three months into the future.
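
Hindcast skill of the kind reported above is commonly summarized by correlating the ensemble-mean forecast with observations over the hindcast years. The numbers below are invented for illustration and are not GloSea5 or model output:

```python
import numpy as np

rng = np.random.default_rng(0)

years = 14                                   # e.g., a 1996-2009 hindcast period
members = 4                                  # ensemble size (toy value)

# Standardized "observed" groundwater-level anomalies, one per hindcast year.
obs = rng.normal(size=years)

# Each ensemble member sees the signal plus independent noise, so averaging
# the members suppresses noise and leaves a skillful ensemble-mean forecast.
ens = obs + rng.normal(scale=1.0, size=(members, years))
forecast = ens.mean(axis=0)

# Skill score: correlation between ensemble-mean forecast and observations.
skill = np.corrcoef(forecast, obs)[0, 1]
```

In a real verification, the same correlation (or a related score such as the ranked probability skill score) would be computed separately for each lead time to show how skill decays toward the 3-month horizon.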

  8. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

    Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  9. Optimizing Aspect-Oriented Mechanisms for Embedded Applications

    NASA Astrophysics Data System (ADS)

    Hundt, Christine; Stöhr, Daniel; Glesner, Sabine

    As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.

  10. Optimizations of a Hardware Decoder for Deep-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon

    2007-01-01

    The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in a hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six-fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
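    The 'maxstar top-2' circuit above refines the max-only (max-log-MAP) approximation of the max* (Jacobian logarithm) operation used throughout log-domain turbo-style decoding. As a hedged illustration of why the correction term matters (this sketches the generic operation, not the decoder from the report):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(e^a + e^b).
    Hardware decoders typically approximate the correction term with a small LUT."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_only(a, b):
    """Max-only (max-log-MAP) approximation: drops the correction term entirely."""
    return max(a, b)

# The correction term matters most when the two metrics are close:
close = (2.0, 2.1)
far = (2.0, 9.0)
print(max_star(*close) - max_only(*close))  # sizable correction
print(max_star(*far) - max_only(*far))      # near zero
```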

  11. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.

  12. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are required of a turbocharger in diesel engines. A recirculation flow type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and a stable recirculation flow forms at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation flow type casing is modified and optimized using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide operating flow rate range. Sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, in which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.

  13. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy flight swarm intelligence, referred to as artificial bee colony Lévy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
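    The Lévy-flight ingredient of such an algorithm replaces small uniform perturbations with heavy-tailed steps: mostly short moves plus occasional long jumps that help escape local optima. A minimal sketch of one such step using Mantegna's algorithm; the function names, the beta = 1.5 exponent, and the perturbation rule are illustrative assumptions, not the paper's actual update equations:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_perturb(solution, best, scale=0.01, beta=1.5):
    """Perturb a candidate relative to the current best solution with Levy
    steps: mostly small moves, occasionally a long exploratory jump."""
    return [x + scale * levy_step(beta) * (x - b)
            for x, b in zip(solution, best)]

random.seed(1)
print(levy_perturb([1.0, 2.0, 3.0], [0.5, 1.5, 2.5]))
```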

  14. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable, by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  15. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  16. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes in time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.

  17. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
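    The redundancy discussed here is the gap between a Huffman code's average codeword length and the source entropy, which always lies in [0, 1). A small sketch that computes this gap directly (binary Huffman construction via a heap; the example probabilities are arbitrary, chosen only for illustration):

```python
import heapq
import math

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for the given pmf."""
    # Heap entries: (probability, unique id, list of leaf indices in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, l1 = heapq.heappop(heap)
        p2, i2, l2 = heapq.heappop(heap)
        for leaf in l1 + l2:          # every leaf under the merge gains a bit
            lengths[leaf] += 1
        heapq.heappush(heap, (p1 + p2, i2, l1 + l2))
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]
L = huffman_lengths(probs)
avg_len = sum(p * l for p, l in zip(probs, L))
entropy = -sum(p * math.log2(p) for p in probs)
redundancy = avg_len - entropy        # guaranteed to lie in [0, 1)
print(L, avg_len, entropy, redundancy)
```

For this source the redundancy comes out much closer to 0 than to 1, consistent with the report's observation about optimal prefix codes in practice.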

  18. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  19. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins code were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives results very similar to those of the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.

  20. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material--single or polycrystalline--provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best-estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.

  1. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  2. Simulation of profile evolution from ramp-up to ramp-down and optimization of tokamak plasma termination with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team

    2017-12-01

    The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. This code focuses, at this stage, on the simulation of electron temperature and poloidal flux profiles using prescribed equilibrium and some kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed during this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during current ramp-down can help in reducing plasma internal inductance. An early transition from H- to L-mode allows us to reduce the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are demonstrated.

  3. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    NASA Astrophysics Data System (ADS)

    Lee, Eun Seok

    2000-10-01

    Improved aerodynamic performance of a turbine cascade can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions.
To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32 processor distributed memory computer. One optimization took about four days.
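    The optimization above combines a genetic algorithm with a penalty method to enforce the fixed mass-flow-rate constraint. A hedged toy sketch of that combination on a stand-in problem; the objective, the constraint, and all GA parameters below are illustrative assumptions, not the thesis's aerodynamic formulation:

```python
import random

def fitness(x, penalty_weight=100.0):
    """Toy objective plus penalty term: minimize sum((xi - 0.3)^2)
    subject to sum(xi) = 1 (a stand-in for the fixed mass-flow constraint)."""
    obj = sum((xi - 0.3) ** 2 for xi in x)
    violation = abs(sum(x) - 1.0)
    return obj + penalty_weight * violation

def genetic_minimize(dim=4, pop_size=30, generations=200, sigma=0.1):
    random.seed(0)
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [xi + random.gauss(0, sigma) for xi in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_minimize()
print(best, fitness(best))
```

The penalty weight turns the hard constraint into a soft cost, so infeasible individuals survive early generations but are driven toward feasibility as the population converges.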

  4. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency, but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
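    The core idea of diversity coding can be shown in a few lines: transmit data streams over disjoint paths plus one protection path carrying their XOR, so the receiver rebuilds any single failed stream immediately, with no signaling or retransmission. A minimal sketch (three data links and one parity link, chosen purely for illustration):

```python
from functools import reduce

def xor_bytes(*streams):
    """Bitwise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*streams))

# Three data links plus one protection link carrying the XOR parity.
d1, d2, d3 = b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"
parity = xor_bytes(d1, d2, d3)

# Suppose the link carrying d2 fails: the receiver rebuilds it from the rest,
# because d1 ^ d3 ^ (d1 ^ d2 ^ d3) == d2.
recovered = xor_bytes(d1, d3, parity)
assert recovered == d2
print(recovered)
```

This is what makes the restoration near-hitless: recovery is a local XOR at the receiver rather than a network-wide rerouting event.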

  5. SOC-DS computer code provides tool for design evaluation of homogeneous two-material nuclear shield

    NASA Technical Reports Server (NTRS)

    Disney, R. K.; Ricks, L. O.

    1967-01-01

    SOC-DS Code /Shield Optimization Code-Direct Search/ selects a nuclear shield material of optimum volume, weight, or cost to meet the requirements of a given radiation dose rate or energy transmission constraint. It is applicable to evaluating neutron and gamma ray shields for all nuclear reactors.

  6. Integration of Dakota into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Lefebvre, Robert A.; Langley, Brandon R.

    2017-07-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.

  7. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  8. Efficient transformation of an auditory population code in a small sensory system.

    PubMed

    Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra

    2011-08-16

    Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.

  9. Computer code for the optimization of performance parameters of mixed explosive formulations.

    PubMed

    Muthurajan, H; Sivabalan, R; Talawar, M B; Venugopalan, S; Gandhe, B R

    2006-08-25

    LOTUSES is a novel computer code developed for the prediction of various thermodynamic properties such as heat of formation, heat of explosion, volume of gaseous explosion products and other related performance parameters. In this paper, we report the LOTUSES (Version 1.4) code, which has been utilized for the optimization of various high explosives in different combinations to obtain the maximum possible velocity of detonation. The LOTUSES (Version 1.4) code varies the composition of mixed explosives automatically in the range of 1-100% and computes the oxygen balance as well as the velocity of detonation for the various compositions in preset steps. Further, the code suggests the compositions for which the least oxygen balance and the higher velocity of detonation could be achieved. Presently, the code can be applied to two-component explosive compositions. The code has been validated with well-known explosives like TNT, HNS, HNF, TATB, RDX, HMX, AN, DNA, CL-20 and TNAZ in different combinations. The new algorithm incorporated in LOTUSES (Version 1.4) enhances the efficiency and makes it a more powerful tool for scientists and researchers working in the field of high energy materials/hazardous materials.
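    The oxygen-balance computation that such a code automates follows the standard formula OB% = -1600(2a + b/2 - d)/MW for an explosive CaHbNcOd. A hedged sketch applying it to a two-component sweep in preset 1% steps; the TNT and AN compositions are textbook values assumed for illustration, and picking the mixture with OB closest to zero is a simplification of the code's actual selection criteria (which also involve detonation velocity):

```python
def oxygen_balance(a, b, d, mol_wt):
    """Oxygen balance (%) of an explosive CaHbNcOd (nitrogen is inert here):
    OB = -1600 * (2a + b/2 - d) / MW."""
    return -1600.0 * (2 * a + b / 2.0 - d) / mol_wt

# Textbook compositions (assumed for illustration, not from the paper):
TNT = dict(a=7, b=5, d=6, mol_wt=227.13)  # C7H5N3O6
AN = dict(a=0, b=4, d=3, mol_wt=80.04)    # ammonium nitrate, NH4NO3 (H4N2O3)

ob_tnt = oxygen_balance(**TNT)  # roughly -74% (oxygen deficient)
ob_an = oxygen_balance(**AN)    # roughly +20% (oxygen rich)

# Sweep the two-component mixture in preset 1% steps (mass-weighted OB)
# and report the TNT mass fraction whose mixture OB is closest to zero.
best = min((abs(f * ob_tnt + (1 - f) * ob_an), f)
           for f in (i / 100.0 for i in range(101)))
print(ob_tnt, ob_an, best)
```

The sweep lands near 21% TNT / 79% AN, which is the expected balance point given the two signs of OB.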

  10. FLORIS 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-08-04

    This code is an enhancement to the existing FLORIS code, SWR 14-20. In particular, this enhancement computes overall thrust and turbulence intensity throughout a wind plant. This information is used to form a description of the fatigue loads experienced throughout the wind plant. FLORIS has been updated to include an optimization routine that minimizes thrust and turbulence intensity (and therefore loads) across the wind plant. Previously, FLORIS had been designed to optimize the power output of a wind plant. However, as turbines age, more wind plant owner/operators are looking for ways to reduce their fatigue loads without sacrificing too much power.

  11. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.

  12. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
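The Lagrangian selection of source and channel coding rates described above can be sketched as follows. The operating points, the toy distortion model, and the budget below are illustrative assumptions, not values from the paper's 3-D SPIHT + LDPC/RCPC system.

```python
from itertools import product

# Hypothetical operating points; the rate-distortion model below is a toy
# stand-in for the actual encoder/channel behavior.
def expected_distortion(rs, rc, channel_loss=0.1):
    # Toy model: distortion falls with source rate; weaker channel codes
    # (larger rc) leave more residual loss after decoding.
    residual_loss = channel_loss * rc
    return 1000.0 / rs + 5000.0 * residual_loss

source_rates = [100, 200, 400, 800]     # kbps (illustrative)
channel_rates = [1/3, 1/2, 2/3, 4/5]    # channel code rates (illustrative)

def lagrangian_best(lmbda):
    """Pick the (source rate, channel rate) pair minimizing J = D + lambda * R."""
    def total_rate(rs, rc):
        return rs / rc                   # transmitted rate grows as the code gets stronger
    return min(product(source_rates, channel_rates),
               key=lambda p: expected_distortion(*p) + lmbda * total_rate(*p))

# Sweep lambda upward until the selected pair fits the transmission budget
budget = 600.0  # kbps
for lmbda in [0.01 * 1.5 ** k for k in range(30)]:
    rs, rc = lagrangian_best(lmbda)
    if rs / rc <= budget:
        break
print(rs, rc, rs / rc)
```

Increasing the multiplier penalizes rate more heavily, walking the solution down the convex hull of operating points until the budget constraint is met.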

  13. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  14. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, K; Weber, U; Simeonov, Y

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence pattern along the beam-axis, the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
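The matrix treatment of particle transport described above can be sketched with thin-lens transfer matrices. The drift lengths, focal-length grid, and ray fan below are illustrative assumptions rather than the facility's actual geometry, and a crude grid search stands in for the Matlab optimizer.

```python
import itertools

def mat_mul(A, B):
    """2x2 matrix product A*B."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def drift(L):
    """Field-free region of length L (metres)."""
    return [[1.0, L], [0.0, 1.0]]

def thin_quad(f):
    """Thin-lens quadrupole; f > 0 focuses, f < 0 defocuses in this plane."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def spot_halfwidth(f1, f2):
    """Max |x| at the iso-center over a fan of rays, in both transverse planes."""
    rays = [(x0, xp0) for x0 in (-0.002, 0.0, 0.002) for xp0 in (-0.003, 0.0, 0.003)]
    size = 0.0
    for sign in (+1.0, -1.0):            # x plane, then y plane (quad signs swap)
        M = drift(1.0)                   # beam tube before the doublet
        M = mat_mul(thin_quad(sign * f1), M)
        M = mat_mul(drift(0.3), M)
        M = mat_mul(thin_quad(-sign * f2), M)
        M = mat_mul(drift(2.0), M)       # remaining distance to the iso-center
        for x0, xp0 in rays:
            size = max(size, abs(M[0][0] * x0 + M[0][1] * xp0))
    return size

# Crude grid search over the two focal lengths, standing in for the optimizer
best = min(itertools.product([0.4 + 0.05 * i for i in range(20)], repeat=2),
           key=lambda p: spot_halfwidth(*p))
print(best, spot_halfwidth(*best))
```

The doublet arrangement (focusing then defocusing quadrupole, with signs swapped between the two transverse planes) is what lets both planes be focused at once, as in the abstract.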

  15. Optimal superdense coding over memory channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadman, Z.; Kampermann, H.; Bruss, D.

    2011-10-15

    We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.

  16. Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.

  17. Optimization of residual stresses in MMC's through the variation of interfacial layer architectures and processing parameters

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    The objective of this work was the development of efficient, user-friendly computer codes for optimizing fabrication-induced residual stresses in metal matrix composites through the use of homogeneous and heterogeneous interfacial layer architectures and processing parameter variation. To satisfy this objective, three major computer codes have been developed and delivered to the NASA-Lewis Research Center, namely MCCM, OPTCOMP, and OPTCOMP2. MCCM is a general research-oriented code for investigating the effects of microstructural details, such as layered morphology of SCS-6 SiC fibers and multiple homogeneous interfacial layers, on the inelastic response of unidirectional metal matrix composites under axisymmetric thermomechanical loading. OPTCOMP and OPTCOMP2 combine the major analysis module resident in MCCM with a commercially-available optimization algorithm and are driven by user-friendly interfaces which facilitate input data construction and program execution. OPTCOMP enables the user to identify those dimensions, geometric arrangements and thermoelastoplastic properties of homogeneous interfacial layers that minimize thermal residual stresses for the specified set of constraints. OPTCOMP2 provides additional flexibility in the residual stress optimization through variation of the processing parameters (time, temperature, external pressure and axial load) as well as the microstructure of the interfacial region which is treated as a heterogeneous two-phase composite. Overviews of the capabilities of these codes are provided together with a summary of results that addresses the effects of various microstructural details of the fiber, interfacial layers and matrix region on the optimization of fabrication-induced residual stresses in metal matrix composites.

  18. GAME: GAlaxy Machine learning for Emission lines

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.

    2018-06-01

    We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet/optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too-limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code's potential and limitations.

  19. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  20. The effect of total noise on two-dimension OCDMA codes

    NASA Astrophysics Data System (ADS)

    Dulaimi, Layth A. Khalil Al; Badlishah Ahmed, R.; Yaakob, Naimah; Aljunid, Syed A.; Matem, Rima

    2017-11-01

    In this research, we evaluate the effect of total noise on the performance of two-dimensional (2-D) optical code-division multiple access (OCDMA) systems using the 2-D Modified Double Weight (MDW) code under various link parameters, considering the impact of multiple-access interference (MAI) and other noise sources on system performance. The 2-D MDW code is compared mathematically with other codes that use similar techniques. We analyzed and optimized the data rate and effective received power. The performance and optimization of the MDW code in an OCDMA system are reported; the bit error rate (BER) can be significantly improved when the desired 2-D MDW code parameters, especially the cross-correlation properties, are selected. This reduces the MAI in the system and compensates for BER and phase-induced intensity noise (PIIN) in incoherent OCDMA. The analysis permits a thorough understanding of the impact of PIIN, shot, and thermal noises on 2-D MDW OCDMA system performance. PIIN is the main noise factor in the OCDMA network.

  1. Label consistent K-SVD: learning a discriminative dictionary for recognition.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2013-11-01

    A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
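The unified LC-KSVD objective described above (reconstruction error, discriminative sparse-code error, and classification error) can be written down directly. The dimensions, random matrices, and weights alpha/beta below are illustrative assumptions, and the codes are kept dense rather than sparse for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 20-dim signals, 3 classes,
# a 15-atom dictionary with 5 atoms associated to each class, 60 training signals.
n, K, N, C = 20, 15, 60, 3
atom_class = np.arange(K) // (K // C)
sig_class = np.arange(N) // (N // C)
Q = (atom_class[:, None] == sig_class[None, :]).astype(float)    # discriminative targets
H = (np.arange(C)[:, None] == sig_class[None, :]).astype(float)  # class labels

X = rng.normal(size=(n, N))                                  # training signals
D = rng.normal(size=(n, K)); D /= np.linalg.norm(D, axis=0)  # dictionary (unit atoms)
A = 0.1 * rng.normal(size=(K, N))          # codes (dense here for brevity)
U = 0.1 * rng.normal(size=(K, K))          # transform enforcing label consistency
W = 0.1 * rng.normal(size=(C, K))          # linear classifier

def lcksvd_objective(D, A, U, W, alpha=4.0, beta=2.0):
    """Reconstruction + discriminative sparse-code error + classification error."""
    return (np.linalg.norm(X - D @ A, 'fro') ** 2
            + alpha * np.linalg.norm(Q - U @ A, 'fro') ** 2
            + beta * np.linalg.norm(H - W @ A, 'fro') ** 2)

print(lcksvd_objective(D, A, U, W))
```

In the paper the three terms are stacked into a single augmented K-SVD problem; the sketch above only evaluates the joint objective that that procedure minimizes.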

  2. HITEMP Material and Structural Optimization Technology Transfer

    NASA Technical Reports Server (NTRS)

    Collier, Craig S.; Arnold, Steve (Technical Monitor)

    2001-01-01

    The feasibility of adding viscoelasticity and the Generalized Method of Cells (GMC) for micromechanical viscoelastic behavior into the commercial HyperSizer structural analysis and optimization code was investigated. The viscoelasticity methodology was developed in four steps. First, a simplified algorithm was devised to test the iterative time stepping method for simple one-dimensional multiple ply structures. Second, GMC code was made into a callable subroutine and incorporated into the one-dimensional code to test the accuracy and usability of the code. Third, the viscoelastic time-stepping and iterative scheme was incorporated into HyperSizer for homogeneous, isotropic viscoelastic materials. Finally, the GMC was included in a version of HyperSizer. MS Windows executable files implementing each of these steps are delivered with this report, as well as source code. The findings of this research are that both viscoelasticity and GMC are feasible and valuable additions to HyperSizer and that the door is open for more advanced nonlinear capability, such as viscoplasticity.
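The iterative time-stepping scheme for viscoelasticity can be illustrated on the simplest case, a 1-D Maxwell model in a stress-relaxation test. The modulus, relaxation time, and strain below are assumed values for illustration, not HyperSizer data.

```python
import math

# Assumed material constants for a 1-D Maxwell element (spring + dashpot in series)
E, tau = 70e3, 5.0    # elastic modulus (MPa) and relaxation time (s)
eps0 = 0.001          # constant applied strain (stress-relaxation test)

def relax_stress(t_end, dt=0.001):
    """Explicit time stepping of d(sigma)/dt = E*d(eps)/dt - sigma/tau."""
    sigma = E * eps0                      # instantaneous elastic response
    for _ in range(round(t_end / dt)):
        sigma += dt * (-sigma / tau)      # strain held constant, so E*d(eps)/dt = 0
    return sigma

# The analytic solution sigma(t) = E*eps0*exp(-t/tau) checks the time stepper
analytic = E * eps0 * math.exp(-10.0 / tau)
print(relax_stress(10.0), analytic)
```

The same marching structure, with the constitutive update evaluated per ply or per GMC subcell, is what an iterative viscoelastic capability inside a sizing code has to repeat at every load step.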

  3. Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.

    PubMed

    Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew

    2003-07-01

    In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code whose spectrum has no nulls and possesses the least variation is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< -80 dB) side-lobes. For a 2.6 µs code, an array element with a center frequency of 10 MHz and fractional bandwidth of 38%, range side-lobes of about -40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement also has been characterized at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse array imaging in general.
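The core idea, deriving the decoding filter directly from the inverse spectrum of a null-free code, can be sketched as follows. The length-7 code below is merely an m-sequence-like illustration, not the established optimal code of the paper.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# A length-7 binary code with ideal periodic autocorrelation; its spectrum has
# no nulls, so an exact inverse filter exists (this is NOT the paper's code).
code = [1, 1, 1, -1, 1, -1, -1]
spec = dft(code)
assert min(abs(s) for s in spec) > 1e-9   # no spectral nulls

# Decoding filter = inverse of the code spectrum
inv_spec = [1.0 / s for s in spec]

# Circular transmission of the code followed by spectrum-inversion decoding
decoded = idft([r * w for r, w in zip(dft(code), inv_spec)])
print([round(v, 6) for v in decoded])     # unit impulse: ideal pulse compression
```

In a noiseless circular model the inverse filter compresses the code to an exact impulse; in practice the spectral variation of the code controls how much the inverse filter amplifies noise, which is why a code with the least spectral variation is sought.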

  4. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
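The past-conditioning idea behind PHMM coding can be illustrated with a much simpler adaptive context model. The order-2 contexts, Laplace-smoothed counts, and synthetic scanline below are illustrative simplifications, not the tree-organized contexts of the paper.

```python
import math

def adaptive_code_length(bits, order=2):
    """Ideal code length (bits) with probabilities conditioned on the previous
    `order` symbols and adapted on the fly (Laplace-smoothed counts)."""
    counts = {}
    total = 0.0
    ctx = (0,) * order
    for b in bits:
        c0, c1 = counts.get(ctx, (1, 1))            # Laplace prior
        p = (c1 if b else c0) / (c0 + c1)
        total += -math.log2(p)                      # ideal arithmetic-coding cost
        counts[ctx] = (c0 + (b == 0), c1 + (b == 1))
        ctx = ctx[1:] + (b,)
    return total

# A structured bilevel "scanline": alternating runs, highly compressible
data = ([0] * 8 + [1] * 8) * 50
print(adaptive_code_length(data), len(data))   # far fewer than 1 bit per pixel
```

As in the adaptive PHMM versions, no statistics are transmitted: encoder and decoder update identical counts from already-coded symbols, so the model cost is only the slow start of the estimates.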

  5. Use of DoD Architectural Framework in Support of JFIIT Assessments

    DTIC Science & Technology

    2007-06-12

    [Figure residue: OV-1 operational view for TA 3.2.2, Conduct Close Air Support, depicting air and ground components (WOC, TACP, GLO, JTAC, FAC(A)/CAS aircrew, JFO/observer, ISR, CAS aircraft, indirect surface fires, hostile targets) and coordination activities A3.1.4 Control CAS, A3.2.1 Coordinate with WOC/ACCA/ASCA/ACA, A3.2.2 Coordinate with JTAC, and A3.2.3 Provide CAS.]

  6. Flood monitoring for ungauged rivers: the power of combining space-based monitoring and global forecasting models

    NASA Astrophysics Data System (ADS)

    Revilla-Romero, Beatriz; Netgeka, Victor; Raynaud, Damien; Thielen, Jutta

    2013-04-01

    Flood warning systems typically rely on forecasts from national meteorological services and in-situ observations from hydrological gauging stations. This capacity is not equally developed in flood-prone developing countries. Low-cost satellite monitoring systems and global flood forecasting systems can be an alternative source of information for national flood authorities. The Global Flood Awareness System (GloFAS) has been developed jointly by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Joint Research Centre, and it has been running quasi-operationally since June 2011. The system couples state-of-the-art weather forecasts with a hydrological model driven at a continental scale. The system provides downstream countries with information on upstream river conditions as well as continental and global overviews. In its test phase, this global forecast system provides probabilities for large transnational river flooding at the global scale up to 30 days in advance. It showed its real-life potential for the first time during the floods in Southeast Asia in 2011, and more recently during the floods in Australia in March 2012, India (Assam, September-October 2012) and Chad (August-October 2012). The Joint Research Centre is working on further research and development, rigorous testing and adaptation of the system to create an operational tool for decision makers, including national and regional water authorities, water resource managers, hydropower companies, civil protection and first-line responders, and international humanitarian aid organizations. Currently, efforts are being made to link GloFAS to the Global Flood Detection System (GFDS). GFDS is a space-based river gauging and flood monitoring system using passive microwave remote sensing, developed in a collaboration between the JRC and the Dartmouth Flood Observatory. GFDS provides flood alerts based on daily water-surface change measurements from space. Alerts are shown on a world map, with detailed reports for individual gauging sites. A comparison of discharge estimates from GFDS and GloFAS with observations for representative climatic zones is presented. Both systems have demonstrated strong potential in forecasting and detecting recent catastrophic floods. The usefulness of their combined information on a global scale for decision makers at different levels is discussed. Combining space-based monitoring and global forecasting models is an innovative approach and has significant benefits for international river commissions as well as international aid organisations. This is in line with the objectives of the Hyogo and Post-2015 Frameworks, which aim at the development of systems involving trans-boundary collaboration, space-based earth observation, flood forecasting and early warning.

  7. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

    In this study, a genuine code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of a UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimization were done for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In single-objective optimization, in the first case, UAV loiter time was improved by 17.5% from baseline within the given boundaries or constraints on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% from baseline. In the multi-objective optimization case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% from baseline, respectively, for the same constraints.
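An elitist genetic algorithm of the kind described can be sketched as follows. The surrogate loiter-time function, bounds, and GA settings below are invented for illustration and do not reproduce the paper's engine model.

```python
import random

random.seed(1)

def loiter_time(pi_c, t4):
    """Toy surrogate for loiter duration (h) vs. compressor pressure ratio and
    burner exit temperature (K); NOT the paper's thermodynamic model."""
    return 20.0 - (pi_c - 11.0) ** 2 / 8.0 - ((t4 - 1450.0) / 180.0) ** 2

BOUNDS = [(5.0, 15.0), (1200.0, 1600.0)]      # constraints on pi_c and t4

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def evolve(pop_size=30, gens=60, elite=2, mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: -loiter_time(*ind))
        nxt = pop[:elite]                      # elitism: best survive unchanged
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)   # select from top half
            child = [(a + b) / 2 for a, b in zip(p1, p2)]    # arithmetic crossover
            child = [clip(g + random.gauss(0.0, mut * (hi - lo)), lo, hi)
                     for g, (lo, hi) in zip(child, BOUNDS)]  # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda ind: loiter_time(*ind))

best = evolve()
print(best, loiter_time(*best))
```

Carrying the elite individuals over unchanged guarantees the best fitness never regresses between generations, which is the defining property of the elitist variant used in the study.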

  8. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion with isothermal analysis, one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  9. Potential Projective Material on the Rorschach: Comparing Comprehensive System Protocols to Their Modeled R-Optimized Administration Counterparts.

    PubMed

    Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de

    2016-01-01

    Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences in the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.

  10. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    DTIC Science & Technology

    1988-09-14

    AFOSR-87-0350, 1987-1988. The report describes a considerable variety of research investigations within the grant areas: control and system theory, optimization, and inverse and ill-posed problems.

  11. Kalai-Smorodinsky bargaining solution for optimal resource allocation over wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2012-01-01

    Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
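On a discrete utility set, the KSBS can be computed by maximizing the smaller normalized gain over the disagreement point, which pushes the solution toward the Kalai-Smorodinsky line of equal ratios. The utility pairs below are hypothetical, not taken from the VSN experiments.

```python
# Hypothetical utility pairs (e.g. video quality, power economy) achievable by
# two nodes under candidate transmission-parameter assignments.
feasible = [(0.9, 0.2), (0.8, 0.5), (0.7, 0.6), (0.55, 0.75), (0.3, 0.9)]
d = (0.1, 0.1)                                        # disagreement point
ideal = tuple(max(u[i] for u in feasible) for i in range(2))

def ksbs(points, d, ideal):
    """KSBS on a discrete set: maximize the smaller normalized gain, which
    pushes the solution toward equal ratios (u_i - d_i)/(ideal_i - d_i)."""
    def min_ratio(u):
        return min((u[i] - d[i]) / (ideal[i] - d[i]) for i in range(2))
    return max(points, key=min_ratio)

print(ksbs(feasible, d, ideal))   # -> (0.7, 0.6): the most balanced gains
```

Because this max-min formulation needs no convexity of the utility space, it extends naturally to the non-convex spaces produced by discrete source and channel coding rates, as in the abstract.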

  12. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-ary pulse position modulation (m-PPM), without the use of a complex decoding algorithm, in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss, with a larger channel capacity, and that a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.

  13. Particle-gas dynamics in the protoplanetary nebula

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.

    1991-01-01

    In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.

  14. Performance optimization of Qbox and WEST on Intel Knights Landing

    NASA Astrophysics Data System (ADS)

    Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois

    We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane-wave density functional theory (DFT) and WEST is a post-DFT code for excited-state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership-class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road-map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures, and prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.

  15. Robust information propagation through noisy neural circuits

    PubMed Central

    Pouget, Alexandre

    2017-01-01

    Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. PMID:28419098
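The covariance effect described above can be sketched numerically. The NumPy toy below (an illustration, not the paper's derivation) computes the linear Fisher information I = f'ᵀ Σ⁻¹ f' and shows that adding "differential correlations" (noise along the f' f'ᵀ direction) reduces it:

```python
import numpy as np

# Linear Fisher information of a population code: I = f'^T Sigma^{-1} f',
# where f' holds the tuning-curve slopes and Sigma the noise covariance.
def linear_fisher(fprime, sigma):
    return float(fprime @ np.linalg.solve(sigma, fprime))

rng = np.random.default_rng(0)
n = 20
fprime = rng.normal(size=n)          # tuning-curve derivatives
base = np.eye(n)                     # independent-noise baseline
I_base = linear_fisher(fprime, base)

# Add "differential correlations": noise variance along f' f'^T.
eps = 0.1
diff = base + eps * np.outer(fprime, fprime)
I_diff = linear_fisher(fprime, diff)
```

By the Sherman-Morrison identity the result has the closed form I_diff = I_base / (1 + eps * I_base), so even small eps caps the recoverable information.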

  16. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
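As a toy illustration of relay placement (not the RPSNC pipeline itself, which combines Delaunay triangulation, non-uniform partitioning, and linear programming), the classic Weiszfeld iteration places a single relay at the geometric median of its terminals, minimizing total Euclidean link length:

```python
import numpy as np

# Weiszfeld iteration: position one relay to minimize the sum of
# Euclidean distances to a set of terminals (the 1-relay analogue of
# Steiner-point placement).
def weiszfeld(terminals, iters=200):
    x = terminals.mean(axis=0)           # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(terminals - x, axis=1)
        d = np.maximum(d, 1e-12)         # guard against division by zero
        w = 1.0 / d
        x = (terminals * w[:, None]).sum(axis=0) / w.sum()
    return x

# Equilateral triangle of terminals: by symmetry the optimal relay
# position is the centroid.
terminals = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, np.sqrt(3.0)]])
relay = weiszfeld(terminals)
```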

  17. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
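The reason adjoints pay off for "hundreds to thousands of independent design variables" is that one adjoint (reverse) sweep yields every partial derivative of a scalar output, whereas finite differencing needs one extra solver run per variable. A hand-derived toy sketch (hypothetical function, unrelated to CFL3D):

```python
import math

# Toy objective: f(x) = sum(sin(x_i)^2). The adjoint gradient is
# assembled in a single pass; finite differencing needs one extra
# evaluation of f per design variable.
def f(x):
    return sum(math.sin(xi) ** 2 for xi in x)

def grad_adjoint(x):
    # d f / d x_i = 2 sin(x_i) cos(x_i) = sin(2 x_i)
    return [math.sin(2.0 * xi) for xi in x]

def grad_fd(x, h=1e-6):
    f0 = f(x)
    g = []
    for i in range(len(x)):          # one extra evaluation per variable
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - f0) / h)
    return g

x = [0.1 * i for i in range(5)]
```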

  18. Analysis and optimization of preliminary aircraft configurations in relationship to emerging agility metrics

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Bauer, Brent Alan

    1993-01-01

    This paper discusses the development of a FORTRAN computer code to perform agility analysis on aircraft configurations. This code is to be part of the NASA-Ames ACSYNT (AirCraft SYNThesis) design code. This paper begins with a discussion of contemporary agility research in the aircraft industry and a survey of a few agility metrics. The methodology, techniques, and models developed for the code are then presented. Finally, example trade studies using the agility module along with ACSYNT are illustrated. These trade studies were conducted using a Northrop F-20 Tigershark aircraft model. The studies show that the agility module is effective in analyzing the influence of common parameters such as thrust-to-weight ratio and wing loading on agility criteria. The module can compare the agility potential between different configurations. In addition, one study illustrates the module's ability to optimize a configuration's agility performance.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  20. Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)

    NASA Astrophysics Data System (ADS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian

    2017-08-01

    We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
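A minimal sketch of the kind of event grouping described (a hypothetical threshold rule; ASGARD's actual selection uses many control parameters): an "event" is a contiguous run of samples above a threshold, reported with its start, peak, and end indices.

```python
import numpy as np

# Detect contiguous above-threshold runs in a 1-D light curve and
# return (start, peak, end) index triples for each event.
def detect_events(signal, threshold):
    above = signal > threshold
    events = []
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            seg = signal[start:i]
            events.append((start, start + int(np.argmax(seg)), i - 1))
            start = None
    if start is not None:                      # event runs to the end
        seg = signal[start:]
        events.append((start, start + int(np.argmax(seg)), len(signal) - 1))
    return events

signal = np.array([0., 1., 3., 2., 0., 0., 4., 5., 1., 0.])
events = detect_events(signal, threshold=0.5)  # two events
```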

  1. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function code (NEGF) TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10⁶ atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
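As a physics-level illustration of the transmission such codes compute (not TRANSIESTA's implementation), the Landauer formula T(E) = Tr[Γ_L G^r Γ_R G^a] for a single level in the wide-band limit reduces to a Lorentzian, which the general matrix trace reproduces:

```python
import numpy as np

# Landauer transmission from retarded/advanced Green functions with
# wide-band (energy-independent) lead self-energies Sigma = -i Gamma / 2.
def transmission(E, H, gamma_L, gamma_R):
    sigma = -0.5j * (gamma_L + gamma_R)
    Gr = np.linalg.inv(E * np.eye(len(H)) - H - sigma)   # retarded G
    Ga = Gr.conj().T                                     # advanced G
    return float(np.trace(gamma_L @ Gr @ gamma_R @ Ga).real)

# Single level at eps0 coupled to two leads.
eps0, gL, gR = 0.3, 0.2, 0.4
H = np.array([[eps0]])
GL = np.array([[gL]])
GR = np.array([[gR]])
E = 0.5
T_num = transmission(E, H, GL, GR)
# Closed form for one level: a Lorentzian in E.
T_ref = gL * gR / ((E - eps0) ** 2 + ((gL + gR) / 2) ** 2)
```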

  2. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path with theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  3. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
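For reference, the classic min-sum check-node update that these neural decoders build on (the standard textbook rule, sketched in NumPy; the paper's learned variants attach trainable weights to it): each outgoing message takes the product of signs and the minimum magnitude over the *other* incoming messages.

```python
import numpy as np

# Min-sum check-node update: for each edge i, combine all incoming
# log-likelihood-ratio messages except msgs[i].
def check_node_update(msgs):
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))     # parity of the other signs
        out[i] = sign * np.abs(others).min()
    return out

out = check_node_update([-1.5, 2.0, 0.5])
```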

  4. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    PubMed

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

    We report Graphics Processing Unit (GPU) and Open-MP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing-environment in which a CPU with a few cores is attached to a GPU. We discuss in detail the design of the code and we illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculations of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from a factor of 10 reported in our initial GPU implementation that did not include a water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints of all bonds, runs in parallel on multiple Open-MP cores or entirely on the GPU. It is based on Conjugate Gradient solution of the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
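The constraint step can be illustrated with a single-bond SHAKE iteration (a minimal sketch with equal masses assumed; the paper instead solves all constraints simultaneously via conjugate gradients on the Lagrange multipliers): after an unconstrained position update, both atoms are displaced along the old bond vector until the bond length returns to d0.

```python
import numpy as np

# Iterative SHAKE correction for one bond constraint |r1 - r2| = d0.
def shake_bond(r1, r2, r1_old, r2_old, d0, iters=50, tol=1e-10):
    d_old = r1_old - r2_old                  # pre-update bond vector
    for _ in range(iters):
        d = r1 - r2
        diff = d @ d - d0 * d0               # constraint violation
        if abs(diff) < tol:
            break
        g = diff / (4.0 * (d @ d_old))       # Lagrange-multiplier estimate
        r1 = r1 - g * d_old                  # equal and opposite corrections
        r2 = r2 + g * d_old
    return r1, r2

r1_old, r2_old = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
r1, r2 = np.array([0.05, 0.02, 0.0]), np.array([1.10, -0.01, 0.0])
r1c, r2c = shake_bond(r1, r2, r1_old, r2_old, d0=1.0)
```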

  5. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

    Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases the performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on the performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for [Formula: see text], by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on the performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.

  6. Optimization techniques using MODFLOW-GWM

    USGS Publications Warehouse

    Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.

    2015-01-01

    An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.

  7. Combining analysis with optimization at Langley Research Center. An evolutionary process

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1982-01-01

    The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337 degree-of-freedom finite element model is used in verifying the accuracy of this system.

  8. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurosu, K; Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; here they are studied for PDD and proton range against the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated range of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the results for PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation.
This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Warren P.; Nagaraj, Gautam; Kneller, James P.

    It has long been recognized that the neutrinos detected from the next core-collapse supernova in the Galaxy have the potential to reveal important information about the dynamics of the explosion and the nucleosynthesis conditions as well as allowing us to probe the properties of the neutrino itself. The neutrinos emitted from thermonuclear—type Ia—supernovae also possess the same potential, although these supernovae are dimmer neutrino sources. For the first time, we calculate the time, energy, line of sight, and neutrino-flavor-dependent features of the neutrino signal expected from a three-dimensional delayed-detonation explosion simulation, where a deflagration-to-detonation transition triggers the complete disruption of a near-Chandrasekhar mass carbon-oxygen white dwarf. We also calculate the neutrino flavor evolution along eight lines of sight through the simulation as a function of time and energy using an exact three-flavor transformation code. We identify a characteristic spectral peak at ˜10 MeV as a signature of electron captures on copper. This peak is a potentially distinguishing feature of explosion models since it reflects the nucleosynthesis conditions early in the explosion. We simulate the event rates in the Super-K, Hyper-K, JUNO, and DUNE neutrino detectors with the SNOwGLoBES event rate calculation software and also compute the IceCube signal. Hyper-K will be able to detect neutrinos from our model out to a distance of ˜10 kpc. At 1 kpc, JUNO, Super-K, and DUNE would register a few events while IceCube and Hyper-K would register several tens of events.

  10. Neutrinos from type Ia supernovae: The deflagration-to-detonation transition scenario

    DOE PAGES

    Wright, Warren P.; Nagaraj, Gautam; Kneller, James P.; ...

    2016-07-19

    It has long been recognized that the neutrinos detected from the next core-collapse supernova in the Galaxy have the potential to reveal important information about the dynamics of the explosion and the nucleosynthesis conditions as well as allowing us to probe the properties of the neutrino itself. The neutrinos emitted from thermonuclear—type Ia—supernovae also possess the same potential, although these supernovae are dimmer neutrino sources. For the first time, we calculate the time, energy, line of sight, and neutrino-flavor-dependent features of the neutrino signal expected from a three-dimensional delayed-detonation explosion simulation, where a deflagration-to-detonation transition triggers the complete disruption of a near-Chandrasekhar mass carbon-oxygen white dwarf. We also calculate the neutrino flavor evolution along eight lines of sight through the simulation as a function of time and energy using an exact three-flavor transformation code. We identify a characteristic spectral peak at ˜10 MeV as a signature of electron captures on copper. This peak is a potentially distinguishing feature of explosion models since it reflects the nucleosynthesis conditions early in the explosion. We simulate the event rates in the Super-K, Hyper-K, JUNO, and DUNE neutrino detectors with the SNOwGLoBES event rate calculation software and also compute the IceCube signal. Hyper-K will be able to detect neutrinos from our model out to a distance of ˜10 kpc. At 1 kpc, JUNO, Super-K, and DUNE would register a few events while IceCube and Hyper-K would register several tens of events.

  11. A grid generation system for multi-disciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Samareh-Abolhassani, Jamshid

    1995-01-01

    A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.
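A common algebraic technique for this kind of grid generation is bilinear transfinite interpolation (TFI), sketched below for one block face (illustrative only; the generator's actual formulas are not reproduced here). TFI fills the interior of a face from its four boundary curves, so regenerating a grid after a design change only requires updated boundaries.

```python
import numpy as np

# Bilinear TFI on a 2-D block face: bottom/top are (ni, 2) boundary
# curves, left/right are (nj, 2); returns an (ni, nj, 2) grid.
def tfi_2d(bottom, top, left, right):
    ni, nj = len(bottom), len(left)
    s = np.linspace(0.0, 1.0, ni)[:, None, None]   # xi parameter
    t = np.linspace(0.0, 1.0, nj)[None, :, None]   # eta parameter
    B = bottom[:, None, :]
    T = top[:, None, :]
    L = left[None, :, :]
    R = right[None, :, :]
    # Corner term removes the doubly counted contributions.
    corners = ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
               + (1 - s) * t * top[0] + s * t * top[-1])
    return (1 - t) * B + t * T + (1 - s) * L + s * R - corners

# Straight edges of the unit square reproduce a uniform grid.
xs = np.linspace(0.0, 1.0, 5)
bottom = np.column_stack([xs, np.zeros(5)])
top = np.column_stack([xs, np.ones(5)])
left = np.column_stack([np.zeros(5), xs])
right = np.column_stack([np.ones(5), xs])
grid = tfi_2d(bottom, top, left, right)
```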

  12. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows the developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual socket configuration of eight core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  13. Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming

    NASA Astrophysics Data System (ADS)

    Pawlowski, P. R.; Polydoros, A.

    A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.
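The test statistic described, a sum of hard per-channel decisions compared to a threshold, has a simple binomial detection probability. The sketch below uses illustrative numbers, not the paper's optimized thresholds or jammer models:

```python
from math import comb

# Probability that at least `theta` of `n` independent per-channel hard
# decisions fire, each with hit probability p (binomial tail).
def detection_prob(n, p, theta):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(theta, n + 1))

# Illustrative operating point: 10 hops, threshold of 7 agreements.
p_d = detection_prob(n=10, p=0.9, theta=7)    # signal present
p_fa = detection_prob(n=10, p=0.1, theta=7)   # noise/jamming only
```

Raising theta trades detection probability against false alarms, which is the trade-off the threshold optimization in the paper navigates under varying jammer power distributions.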

  14. Optimizing Distribution of Pandemic Influenza Antiviral Drugs

    PubMed Central

    Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.

    2015-01-01

    We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858

  15. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are in construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in benchmarking of analysis codes, or as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  16. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design; a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for either static, or eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
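The constrained-to-unconstrained conversion mentioned above can be sketched with a quadratic penalty method (a toy problem, not the ADS or IDESIGN algorithms): minimize x1² + x2² subject to x1 + x2 = 1 by solving a series of unconstrained problems with growing penalty weight mu.

```python
# Quadratic penalty method: minimize x1^2 + x2^2 + mu*(x1 + x2 - 1)^2
# by plain gradient descent, for increasing mu. The exact unconstrained
# minimizer is x1 = x2 = mu / (1 + 2*mu), approaching (0.5, 0.5).
def solve_unconstrained(mu, steps=200):
    x1 = x2 = 0.0
    lr = 0.4 / (1.0 + 2.0 * mu)          # stable step for this quadratic
    for _ in range(steps):
        g = 2.0 * mu * (x1 + x2 - 1.0)   # penalty-term gradient
        x1, x2 = x1 - lr * (2.0 * x1 + g), x2 - lr * (2.0 * x2 + g)
    return x1, x2

# Each larger mu tightens the constraint x1 + x2 = 1.
solutions = [solve_unconstrained(mu) for mu in (1.0, 10.0, 1000.0)]
```

Increasing mu drives the unconstrained minimizers toward the constrained optimum, which is why an unconstrained solver can serve as the inner engine of a constrained one.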

  17. Heterotopic expression of class B floral homeotic genes supports a modified ABC model for tulip (Tulipa gesneriana).

    PubMed

    Kanno, Akira; Saeki, Hiroshi; Kameya, Toshiaki; Saedler, Heinz; Theissen, Günter

    2003-07-01

    In higher eudicotyledonous angiosperms the floral organs are typically arranged in four different whorls, containing sepals, petals, stamens and carpels. According to the ABC model, the identity of these organs is specified by floral homeotic genes of class A, A+B, B+C and C, respectively. In contrast to the sepal and petal whorls of eudicots, the perianths of many plants from the Liliaceae family have two outer whorls of almost identical petaloid organs, called tepals. To explain the Liliaceae flower morphology, van Tunen et al. (1993) proposed a modified ABC model, exemplified with tulip. According to this model, class B genes are expressed not only in whorls 2 and 3, but also in whorl 1. Thus the organs of both whorls 1 and 2 express class A plus class B genes and, therefore, acquire the same petaloid identity. To test this modified ABC model we have cloned and characterized putative class B genes from tulip. Two DEF-like and one GLO-like gene were identified, named TGDEFA, TGDEFB and TGGLO. Northern hybridization analysis showed that all of these genes are expressed in whorls 1, 2 and 3 (outer and inner tepals and stamens), thus corroborating the modified ABC model. In addition, these experiments demonstrated that TGGLO is also weakly expressed in carpels, leaves, stems and bracts. Gel retardation assays revealed that TGGLO alone binds to DNA as a homodimer. In contrast, TGDEFA and TGDEFB cannot homodimerize, but form heterodimers with TGGLO. Homodimerization of the GLO-like protein has also been reported for lily, suggesting that this phenomenon is conserved within Liliaceae plants or even monocot species.

  18. European Wintertime Windstorms and Their Links to Large-Scale Variability Modes

    NASA Astrophysics Data System (ADS)

    Befort, D. J.; Wild, S.; Walz, M. A.; Knight, J. R.; Lockwood, J. F.; Thornton, H. E.; Hermanson, L.; Bett, P.; Weisheimer, A.; Leckebusch, G. C.

    2017-12-01

    Winter storms associated with extreme wind speeds and heavy precipitation are the most costly natural hazard in several European countries. Improved understanding and seasonal forecast skill for winter storms will thus help society, policy-makers, and the (re)insurance industry to be better prepared for such events. We first assess the ability of three seasonal forecast ensemble suites (ECMWF System 3, ECMWF System 4, and GloSea5) to represent extra-tropical windstorms over the Northern Hemisphere. Our results show significant skill for inter-annual variability of windstorm frequency over parts of Europe in two of these forecast suites (ECMWF-S4 and GloSea5), indicating the potential use of current seasonal forecast systems. In a regression model we further derive windstorm variability using the forecasted NAO from the seasonal model suites, thereby estimating the suitability of the NAO as the only predictor. We find that the NAO, as the main large-scale mode over Europe, can explain some of the achieved skill and is therefore an important source of variability in the seasonal models. However, our results show that the regression model fails to reproduce the skill level of the directly forecast windstorm frequency over large areas of central Europe. This suggests that the seasonal models also capture sources of windstorm variability/predictability other than the NAO. To investigate which other large-scale variability modes steer the interannual variability of windstorms, we develop a statistical model using a Poisson GLM. We find that the Scandinavian Pattern (SCA) in fact explains a larger amount of variability for Central Europe during the 20th century than the NAO. This statistical model is able to skilfully reproduce the interannual variability of windstorm frequency, especially for the British Isles and Central Europe, with correlations up to 0.8.
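    The kind of Poisson GLM used to relate seasonal windstorm counts to large-scale indices can be sketched with a NumPy-only iteratively reweighted least squares (IRLS) fit. The synthetic "NAO" and "SCA" predictors and the coefficients below are placeholders, not results from the study:

```python
import numpy as np

def poisson_glm_irls(X, y, iters=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu   # working response
        W = mu                    # IRLS weights for the Poisson/log-link case
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic seasons: storm counts driven by two placeholder circulation indices.
rng = np.random.default_rng(0)
n = 2000
nao = rng.standard_normal(n)   # stand-in "NAO index"
sca = rng.standard_normal(n)   # stand-in "SCA index"
X = np.column_stack([np.ones(n), nao, sca])
true_beta = np.array([1.0, 0.3, 0.5])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_glm_irls(X, y)
```

    With enough seasons the IRLS estimates recover the generating coefficients, illustrating how the relative influence of each mode (here the larger SCA coefficient) would be read off such a model.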

  19. Jatropha curcas and Ricinus communis differentially affect arbuscular mycorrhizal fungi diversity in soil when cultivated for biofuel production in a Guantanamo (Cuba) tropical system.

    NASA Astrophysics Data System (ADS)

    Alguacil, M. M.; Torrecillas, E.; Hernández, G.; Torres, P.; Roldán, A.

    2012-04-01

    The arbuscular mycorrhizal fungi (AMF) are a key, integral component of the stability, sustainability and functioning of ecosystems. In this study, we characterised the AMF biodiversity in a control soil and in a soil cultivated with Jatropha curcas or Ricinus communis, in a tropical system in Guantanamo (Cuba), in order to verify whether a change of land use to biofuel plant production had any effect on the AMF communities. We also assess whether some soil properties related to soil fertility (total N, organic C, microbial biomass C, aggregate stability percentage, pH and electrical conductivity) were changed by the cultivation of both crop species. The AM fungal small sub-unit (SSU) rRNA genes were subjected to PCR, cloning, sequencing and phylogenetic analyses. Twenty AM fungal sequence types were identified: 19 belong to the Glomeraceae and one to the Paraglomeraceae. Two AMF sequence types related to cultured AMF species (Glo G3 for Glomus sinuosum and Glo G6 for Glomus intraradices-G. fasciculatum-G. irregulare) disappeared in the soil cultivated with J. curcas and R. communis. The soil properties (total N, organic C and microbial biomass C) were improved by the cultivation of the two plant species. The diversity of the AMF community decreased in the soil of both crops, with respect to the control soil, and varied significantly depending on the crop species planted. Thus, R. communis soil showed higher AMF diversity than J. curcas soil. In conclusion, R. communis could be more suitable for long-term conservation and sustainable management of these tropical ecosystems.

  20. A Glutathione-independent Glyoxalase of the DJ-1 Superfamily Plays an Important Role in Managing Metabolically Generated Methylglyoxal in Candida albicans*

    PubMed Central

    Hasim, Sahar; Hussin, Nur Ahmad; Alomar, Fadhel; Bidasee, Keshore R.; Nickerson, Kenneth W.; Wilson, Mark A.

    2014-01-01

    Methylglyoxal is a cytotoxic reactive carbonyl compound produced by central metabolism. Dedicated glyoxalases convert methylglyoxal to d-lactate using multiple catalytic strategies. In this study, the DJ-1 superfamily member ORF 19.251/GLX3 from Candida albicans is shown to possess glyoxalase activity, making this the first demonstrated glutathione-independent glyoxalase in fungi. The crystal structure of Glx3p indicates that the protein is a monomer containing the catalytic triad Cys136-His137-Glu168. Purified Glx3p has an in vitro methylglyoxalase activity (Km = 5.5 mm and kcat = 7.8 s−1) that is significantly greater than that of more distantly related members of the DJ-1 superfamily. A close Glx3p homolog from Saccharomyces cerevisiae (YDR533C/Hsp31) also has glyoxalase activity, suggesting that fungal members of the Hsp31 clade of the DJ-1 superfamily are all probable glutathione-independent glyoxalases. A homozygous glx3 null mutant in C. albicans strain SC5314 displays greater sensitivity to millimolar levels of exogenous methylglyoxal, elevated levels of intracellular methylglyoxal, and carbon source-dependent growth defects, especially when grown on glycerol. These phenotypic defects are complemented by restoration of the wild-type GLX3 locus. The growth defect of Glx3-deficient cells in glycerol is also partially complemented by added inorganic phosphate, which is not observed for wild-type or glucose-grown cells. Therefore, C. albicans Glx3 and its fungal homologs are physiologically relevant glutathione-independent glyoxalases that are not redundant with the previously characterized glutathione-dependent GLO1/GLO2 system. In addition to its role in detoxifying glyoxals, Glx3 and its close homologs may have other important roles in stress response. PMID:24302734

  1. Functional Conservation of PISTILLATA Activity in a Pea Homolog Lacking the PI Motif1

    PubMed Central

    Berbel, Ana; Navarro, Cristina; Ferrándiz, Cristina; Cañas, Luis Antonio; Beltrán, José-Pío; Madueño, Francisco

    2005-01-01

    Current understanding of floral development is mainly based on what we know from Arabidopsis (Arabidopsis thaliana) and Antirrhinum majus. However, we can learn more by comparing developmental mechanisms that may explain morphological differences between species. A good example comes from the analysis of genes controlling flower development in pea (Pisum sativum), a plant with more complex leaves and inflorescences than Arabidopsis and Antirrhinum, and a different floral ontogeny. The analysis of UNIFOLIATA (UNI) and STAMINA PISTILLOIDA (STP), the pea orthologs of LEAFY and UNUSUAL FLORAL ORGANS, has revealed a common link in the regulation of flower and leaf development not apparent in Arabidopsis. While the Arabidopsis genes mainly behave as key regulators of flower development, where they control the expression of B-function genes, UNI and STP also contribute to the development of the pea compound leaf. Here, we describe the characterization of P. sativum PISTILLATA (PsPI), a pea MADS-box gene homologous to B-function genes like PI and GLOBOSA (GLO), from Arabidopsis and Antirrhinum, respectively. PsPI encodes an atypical PI-type polypeptide that lacks the highly conserved C-terminal PI motif. Nevertheless, constitutive expression of PsPI in tobacco (Nicotiana tabacum) and Arabidopsis shows that it can specifically replace the function of PI, being able to complement the strong pi-1 mutant. Accordingly, PsPI expression in pea flowers, which is dependent on STP, is identical to that of PI and GLO. Interestingly, PsPI is also transiently expressed in young leaves, suggesting a role of PsPI in pea leaf development, a possibility that fits with the established role of UNI and STP in the control of this process. PMID:16113230

  2. Changes in the Diversity of Soil Arbuscular Mycorrhizal Fungi after Cultivation for Biofuel Production in a Guantanamo (Cuba) Tropical System

    PubMed Central

    Alguacil, Maria del Mar; Torrecillas, Emma; Hernández, Guillermina; Roldán, Antonio

    2012-01-01

    The arbuscular mycorrhizal fungi (AMF) are a key, integral component of the stability, sustainability and functioning of ecosystems. In this study, we characterised the AMF biodiversity in a native vegetation soil and in a soil cultivated with Jatropha curcas or Ricinus communis, in a tropical system in Guantanamo (Cuba), in order to verify whether a change of land use to biofuel plant production had any effect on the AMF communities. We also assess whether some soil properties related to soil fertility (total N, organic C, microbial biomass C, aggregate stability percentage, pH and electrical conductivity) were changed by the cultivation of both crop species. The AM fungal small sub-unit (SSU) rRNA genes were subjected to PCR, cloning, sequencing and phylogenetic analyses. Twenty AM fungal sequence types were identified: 19 belong to the Glomeraceae and one to the Paraglomeraceae. Two AMF sequence types related to cultured AMF species (Glo G3 for Glomus sinuosum and Glo G6 for Glomus intraradices-G. fasciculatum-G. irregulare) did not occur in the soil cultivated with J. curcas and R. communis. The soil properties (total N, organic C and microbial biomass C) were higher in the soil cultivated with the two plant species. The diversity of the AMF community decreased in the soil of both crops, with respect to the native vegetation soil, and varied significantly depending on the crop species planted. Thus, R. communis soil showed higher AMF diversity than J. curcas soil. In conclusion, R. communis could be more suitable for the long-term conservation and sustainable management of these tropical ecosystems. PMID:22536339

  3. The association effect of insulin and clonazepam on oxidative stress in liver of an experimental animal model of diabetes and depression.

    PubMed

    Wayhs, Carlos Alberto Yasin; Tortato, Caroline; Mescka, Caroline Paula; Pasquali, Matheus Augusto; Schnorr, Carlos Eduardo; Nin, Maurício Schüler; Barros, Helena Maria Tannhauser; Moreira, José Claudio Fonseca; Vargas, Carmen Regla

    2013-05-01

    It is known that oxidative stress occurs in peripheral blood in an experimental animal model of diabetes and depression, and that acute treatment with insulin and clonazepam (CNZ) has a protective effect on oxidative stress in this model. This study evaluated the effect of insulin plus CNZ on oxidative stress parameters in the liver of male rats with streptozotocin (STZ)-induced diabetes subjected to the forced swimming test (FST). Diabetes was induced by a single intraperitoneal (i.p.) dose of STZ (60 mg/kg) in male Wistar rats. Acute i.p. treatment with insulin (4 IU/kg) plus CNZ (0.25 mg/kg) was administered 24, 5 and 1 h before the FST. Nondiabetic control rats received i.p. injections of saline (1 mL/kg). Protein oxidative damage was evaluated by carbonyl formation, and the antioxidant redox parameters were analyzed by measuring the enzymatic activities of superoxide dismutase (SOD), catalase and glyoxalase I (GLO). Glycemia levels were also determined. Our study showed an increase in carbonyl content in diabetic rats subjected to the FST (2.04 ± 0.55), while the activities of catalase (51.83 ± 19.02) and SOD (2.30 ± 1.23) were significantly decreased in the liver of these animals; these effects were reversed by the treatment. The activity of GLO (0.15 ± 0.02) in the liver of the animals was also decreased. Our findings show that acute insulin plus CNZ treatment ameliorates the antioxidant redox parameters and protects against protein oxidative damage in the liver of diabetic rats subjected to the FST.

  4. The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz

    2015-07-01

    This paper ascertains the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on an hourly basis. This research is motivated by the historical record of extreme rainfall in Peninsular Malaysia, where many hydrological disasters occur within short time periods. The hourly durations considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; this study therefore enables comparison of the estimated performance of daily and hourly rainfall analyses, so as to identify the impact of extreme rainfall at a shorter time scale. Return levels based on the time aggregates considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, the GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For the return periods considered (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs for 1- and 24-h extreme rainfalls. The outcomes of this study provide additional information on patterns of extreme rainfall in Malaysia that may not be detected when considering only a longer time scale such as daily; thus, appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would help the authorities reduce the impact of any disastrous natural event.
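    The L-moment fit of the GLO distribution and the resulting return levels can be sketched as follows. The relations used are Hosking's standard L-moment formulas for the GLO; the synthetic sample merely stands in for an observed extreme-rainfall series:

```python
import math
import random

def sample_l_moments(data):
    """First three sample L-moments via probability-weighted moments (Hosking)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0   # lambda1, lambda2, lambda3

def fit_glo(data):
    """Fit the generalized logistic (GLO) distribution by the method of L-moments."""
    l1, l2, l3 = sample_l_moments(data)
    k = -l3 / l2                                   # for the GLO, tau3 = -k
    alpha = l2 * math.sin(k * math.pi) / (k * math.pi)
    xi = l1 - alpha * (1.0 / k - math.pi / math.sin(k * math.pi))
    return xi, alpha, k

def glo_return_level(xi, alpha, k, T):
    """T-year return level from the GLO quantile function at F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    return xi + (alpha / k) * (1.0 - ((1.0 - F) / F) ** k)

# Synthetic "hourly extreme rainfall" sample drawn from a known GLO by inversion.
random.seed(42)
xi0, a0, k0 = 0.0, 1.0, 0.1
sample = [xi0 + (a0 / k0) * (1.0 - ((1.0 - u) / u) ** k0)
          for u in (random.random() for _ in range(5000))]
xi, alpha, k = fit_glo(sample)
level100 = glo_return_level(xi, alpha, k, 100)   # estimated 100-year level
```

    With a large sample the estimated parameters recover the generating values, and the same quantile function yields the 10-, 50-, and 100-year return levels mapped in the study.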

  5. STS-63 Space Shuttle report

    NASA Technical Reports Server (NTRS)

    Fricke, Robert W., Jr.

    1995-01-01

    The STS-63 Space Shuttle Program Mission Report summarizes the Payload activities and provides detailed data on the Orbiter, External Tank (ET), Solid Rocket Booster (SRB), Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME) systems performance during this sixty-seventh flight of the Space Shuttle Program, the forty-second since the return to flight, and twentieth flight of the Orbiter vehicle Discovery (OV-103). In addition to the OV-103 Orbiter vehicle, the flight vehicle consisted of an ET that was designated ET-68; three SSME's that were designated 2035, 2109, and 2029 in positions 1, 2, and 3, respectively; and two SRB's that were designated BI-070. The RSRM's that were an integral part of the SRB's were designated 360Q042A for the left SRB and 360L042B for the right SRB. The STS-63 mission was planned as an 8-day duration mission with two contingency days available for weather avoidance or Orbiter contingency operations. The primary objectives of the STS-63 mission were to perform the Mir rendezvous operations, accomplish the Spacehab-3 experiments, and deploy and retrieve the Shuttle Pointed Autonomous Research Tool for Astronomy-204 (SPARTAN-204) payload. The secondary objectives were to perform the Cryogenic Systems Experiment (CSE)/Shuttle Glo-2 Experiment (GLO-2) Payload (CGP)/Orbital Debris Radar Calibration Spheres (ODERACS-2) (CGP/ODERACS-2) payload objectives, the Solid Surface Combustion Experiment (SSCE), and the Air Force Maui Optical Site Calibration Tests (AMOS). The objectives of the Mir rendezvous/flyby were to verify flight techniques, communication and navigation-aid sensor interfaces, and engineering analyses associated with Shuttle/Mir proximity operations in preparation for the STS-71 docking mission.

  6. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU, in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel Xeon Phi coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  7. Merits and limitations of optimality criteria method for structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo

    1993-01-01

    The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
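    The stress-ratio resizing behind the "fully utilized design" concept mentioned in the abstract can be sketched for a statically determinate truss. The member forces and allowable stress below are invented illustration values, not data from the OC code:

```python
def fully_stressed_resize(areas, forces, sigma_allow, iters=20, damping=0.5):
    """Classic OC-style stress-ratio resizing for a statically determinate truss.

    Because member forces are fixed by statics alone, scaling each area by a
    damped stress ratio drives every member stress to the allowable value --
    the 'fully utilized design' concept.
    """
    A = list(areas)
    for _ in range(iters):
        A = [a * (abs(f / a) / sigma_allow) ** damping
             for a, f in zip(A, forces)]
    return A

# Illustrative member forces and allowable stress (consistent units assumed).
areas = fully_stressed_resize([1.0, 1.0, 1.0], [200.0, -150.0, 50.0], 100.0)
```

    The damped update converges geometrically to areas of |F|/sigma_allow per member; for indeterminate structures the forces would have to be recomputed each iteration, which is where the full OC machinery with Lagrange multipliers comes in.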

  8. A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)

    2002-01-01

    The report describes a new method for optimization of engineering systems, such as aerospace vehicles, whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system's internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables while holding constant a set of the system-level design variables. The subtask results are stored in the form of response surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system's internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad work front in organizing an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of supersonic business jet design.

  9. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
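    The block-to-block mode selection that VCM performs can be sketched as a simple threshold lookup. The mode table and SNR thresholds below are invented placeholders, not the CCSDS-specified operating points:

```python
# (name, rate in info bits per symbol, required SNR in dB) -- placeholder values.
MODES = [
    ("BPSK  r=1/2 turbo",  0.5,  1.0),
    ("QPSK  r=1/2 LDPC",   1.0,  3.0),
    ("8-PSK r=2/3 LDPC",   2.0,  7.5),
    ("16-APSK r=3/4 LDPC", 3.0, 11.0),
    ("32-APSK r=4/5 LDPC", 4.0, 14.5),
]

def pick_mode(est_snr_db, margin_db=0.5):
    """Return the highest-throughput mode whose SNR threshold (plus margin) is met,
    or None when even the most robust mode cannot close the link."""
    best = None
    for name, rate, threshold in MODES:
        if est_snr_db >= threshold + margin_db:
            if best is None or rate > best[1]:
                best = (name, rate, threshold)
    return best

mode = pick_mode(8.2)   # selects the 8-PSK entry under these placeholder thresholds
```

    In an actual VCM link the threshold table would come from measured decoder performance curves, and the estimated SNR would be updated block to block.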

  10. Optimal boundary conditions for ORCA-2 model

    NASA Astrophysics Data System (ADS)

    Kazantsev, Eugene

    2013-08-01

    A 4D-Var data assimilation technique is applied to the ORCA-2 configuration of the NEMO model in order to identify the optimal parametrization of boundary conditions on the lateral boundaries as well as on the bottom and surface of the ocean. The influence of boundary conditions on the solution is analyzed both within and beyond the assimilation window. It is shown that the optimal bottom and surface boundary conditions allow us to better represent jet streams such as the Gulf Stream and the Kuroshio. Analyzing the reasons for the jet reinforcement, we notice that data assimilation has a major impact on the parametrization of the bottom boundary conditions for u and v. Automatic generation of the tangent and adjoint codes is also discussed. The Tapenade software is shown to be able to produce an adjoint code that can be used after a memory-usage optimization.

  11. Preliminary Development of an Object-Oriented Optimization Tool

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2011-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center has developed a FORTRAN-based object-oriented optimization (O3) tool that leverages existing tools and practices and allows easy integration and adoption of new state-of-the-art software. The object-oriented framework can integrate the analysis codes for multiple disciplines, as opposed to relying on one code to perform analysis for all disciplines. Optimization can thus take place within each discipline module, or in a loop between the central executive module and the discipline modules, or both. Six sample optimization problems are presented. The first four sample problems are based on simple mathematical equations; the fifth and sixth problems consider a three-bar truss, which is a classical example in structural synthesis. Instructions for preparing input data for the O3 tool are presented.

  12. Adaptation and optimization of a line-by-line radiative transfer program for the STAR-100 (STARSMART)

    NASA Technical Reports Server (NTRS)

    Rarig, P. L.

    1980-01-01

    A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.

  13. System optimization on coded aperture spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei

    2017-10-01

    To find a simple multiple-configuration solution, achieve higher refractive efficiency, and reduce the disturbance caused by field-of-view (FOV) changes, especially in a two-dimensional spatial expansion, a coded aperture system is designed with a special structure comprising an objective, a coded component, prism reflex system components, a compensatory plate, and an imaging lens. Correlative algorithms and imaging methods are available to ensure that this system can be corrected and optimized adequately. Simulation results show that the system meets the application requirements for MTF, REA, RMS and other related criteria. Compared with a conventional design, the system is significantly reduced in volume and weight. The determining factors are the prototype selection and the system configuration.

  14. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel, on top of the shared-memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. We have also successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low-density cocoon, which has to be investigated further.

  15. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.
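    The comparison methodology, i.e., two optimizers sharing one objective-function evaluator, can be reduced to a toy sketch in which a cheap quadratic stands in for the expensive ARC2D flow evaluation (all names and parameters below are illustrative, not from the study):

```python
import random

def objective(x, y):
    """Cheap stand-in for the shared flow-solver evaluation; minimum at (1, 2)."""
    return (x - 1.0) ** 2 + 4.0 * (y - 2.0) ** 2

def gradient_descent(steps=500, lr=0.05):
    """Gradient-based optimizer, analogous to the adjoint-gradient approach."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx, gy = 2.0 * (x - 1.0), 8.0 * (y - 2.0)
        x, y = x - lr * gx, y - lr * gy
    return x, y

def evolutionary(generations=200, pop=30, sigma=0.3, seed=1):
    """A minimal (1+pop) evolution strategy with elitism, analogous to the EA."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    for _ in range(generations):
        children = [(best[0] + rng.gauss(0, sigma), best[1] + rng.gauss(0, sigma))
                    for _ in range(pop)]
        children.append(best)   # elitism: never lose the incumbent design
        best = min(children, key=lambda p: objective(*p))
    return best
```

    Both optimizers reach the same optimum here, but the EA spends far more objective evaluations, which mirrors the efficiency-versus-robustness tradeoff the paper examines on the real flow solver.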

  16. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  17. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. Here, we abstract from a forest of concrete dependency trees, which contain absolute memory addresses, to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses and, finally, solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop, giving a 75% performance improvement; four kernels from IrfanView, leading to 4.97x performance; and one stencil from the miniGMG multigrid benchmark, netting a 4.25x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving a 1.12x speedup without affecting the user experience.
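    The "solving a set of linear equations based on buffer accesses" step can be sketched as a least-squares recovery of a symbolic access function from traced addresses. The trace values below are invented for illustration; they are not from the Helium tool:

```python
import numpy as np

# Invented (i, j, address) samples from a dynamic trace, generated by the affine
# access function: address = base + i*row_stride + j*elem_size for a 2-D buffer.
samples = [(0, 0, 0x1000), (0, 1, 0x1004), (1, 0, 0x1100),
           (2, 3, 0x120C), (5, 7, 0x151C)]

# Solve the overdetermined linear system [1, i, j] @ [base, row_stride, elem] = addr.
A = np.array([[1.0, i, j] for i, j, _ in samples])
b = np.array([float(addr) for _, _, addr in samples])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
base, row_stride, elem_size = (int(round(c)) for c in coeffs)
```

    Recovering the affine coefficients turns a list of absolute addresses into the symbolic expression `buf[i][j]` needed for high-level code generation.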

  18. The Optimizer Topology Characteristics in Seismic Hazards

    NASA Astrophysics Data System (ADS)

    Sengor, T.

    2015-12-01

    The characteristic data of natural phenomena are examined in a topological space approach to determine whether an algorithm behind them brings the physics of the phenomena to optimized states even when they are hazards. The optimized code designing the hazard on a topological structure meshes with the metric of the phenomena. Deviations in the metric of different phenomena push and/or pull the folds of other suitable phenomena; for example, the metric of a specific phenomenon A may fit the metric of another specific phenomenon B after variation processes generated by the deviation of the metric of phenomenon A. Defining manifold processes covering the metric characteristics of each and every phenomenon is possible for all physical events, i.e., natural hazards. There are suitable folds in those manifold groups such that each subfold fits the metric characteristics of at least one natural hazard category. Variation algorithms on those metric structures prepare a gauge effect that brings the long-term stability of the Earth over large time scales. The realization of that stability depends on specific conditions, which are called optimized codes. The analytical basics of processes in topological structures are developed in [1]. The codes are generated according to the structures in [2]. Some optimized codes are derived for the seismicity of the NAF, beginning from the earthquakes of 1999. References: 1. Taner SENGOR, "Topological theory and analytical configuration for a universal community model," Procedia - Social and Behavioral Sciences, Vol. 81, pp. 188-194, 28 June 2013. 2. Taner SENGOR, "Seismic-Climatic-Hazardous Events Estimation Processes via the Coupling Structures in Conserving Energy Topologies of the Earth," The 2014 AGU Fall Meeting, Abstract no. 31374, USA.

  19. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
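
The shortest-path formulation above can be sketched as a small dynamic program: each reference view costs a fixed rate, and views between two references incur a distortion that depends on their distance to the nearest reference. Everything here (the cost model, function names, the toy distortion) is an illustrative assumption, not the paper's actual metric:

```python
def select_reference_views(n_views, ref_cost, dist):
    """DP over 'last reference position' -- a shortest-path formulation.
    dist(p, k, v): distortion of view v synthesized from references p and k.
    Views 0 and n_views-1 are forced to be references for simplicity."""
    INF = float("inf")
    best = [INF] * n_views
    parent = [-1] * n_views
    best[0] = ref_cost                      # view 0 is always a reference
    for k in range(1, n_views):
        for p in range(k):
            seg = sum(dist(p, k, v) for v in range(p + 1, k))
            cost = best[p] + seg + ref_cost
            if cost < best[k]:
                best[k], parent[k] = cost, p
    refs, k = [], n_views - 1
    while k != -1:
        refs.append(k)
        k = parent[k]
    return best[-1], refs[::-1]

# toy similarity-based distortion: distance to the nearest reference view
cost, refs = select_reference_views(
    9, ref_cost=2.0, dist=lambda p, k, v: min(v - p, k - v))
```

The DP simultaneously picks how many references to use and where to put them, mirroring the two questions the abstract raises; with this toy cost it settles on four references over nine views.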

  20. Throughput of Coded Optical CDMA Systems with AND Detectors

    NASA Astrophysics Data System (ADS)

    Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.

    2012-09-01

    Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiencies and improved error rate performance. The results show that the use of AND detectors significantly improves the performance of an optical channel.

  1. Performance Analysis and Optimization on the UCLA Parallel Atmospheric General Circulation Model Code

    NASA Technical Reports Server (NTRS)

    Lou, John; Ferraro, Robert; Farrara, John; Mechoso, Carlos

    1996-01-01

    An analysis is presented of several factors influencing the performance of a parallel implementation of the UCLA atmospheric general circulation model (AGCM) on massively parallel computer systems. Several modifications to the original parallel AGCM code, aimed at improving its numerical efficiency, interprocessor communication cost, load balance, and single-node code performance, are discussed.

  2. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  3. [Complexity level simulation in the German diagnosis-related groups system: the financial effect of coding of comorbidity diagnostics in urology].

    PubMed

    Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G

    2012-07-01

    Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding is cost-intensive. The aim of this study was to determine whether higher costs can be refunded by complete acquisition of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology a strong dependency of quantity and quality of coding of secondary diagnoses on PCCL and subsequent profits was found. Departmental budgetary procedures can be optimized when coding is effective. The new simulation tool can be a valuable aid to improve profits available for distribution. Nevertheless, the calculation of time use and financial needs by this procedure is subject to specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.

  4. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
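
The auto-tuning loop the abstract describes, searching a space of memory layouts, code transformations, and loop schedules against a measured cost, can be sketched as follows. This is a hedged toy, not OpenTuner's API: the search space, the `measure` cost model, and all names are invented for illustration.

```python
import random

# Random-search auto-tuner over a discrete configuration space.
# In a real setup, measure() would compile/run the kernel and read
# timing or energy counters; here it is a synthetic cost model.

SPACE = {
    "layout":   ["row_major", "blocked", "struct_of_arrays"],
    "schedule": ["static", "dynamic", "guided"],
    "unroll":   [1, 2, 4, 8],
}

def measure(cfg):
    """Stand-in for running the kernel and measuring time/energy."""
    cost = 100.0
    cost -= 25.0 if cfg["layout"] == "blocked" else 0.0
    cost -= 10.0 if cfg["schedule"] == "dynamic" else 0.0
    cost -= 3.0 * {1: 0, 2: 1, 4: 2, 8: 1}[cfg["unroll"]]  # over-unrolling hurts
    return cost

def autotune(trials=500, seed=0):
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        c = measure(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

best_cfg, best_cost = autotune()
```

Real auto-tuners replace the blind random draw with smarter search (hill climbing, bandit ensembles), but the measure-and-keep-best loop is the same shape.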

  5. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal "arrays" containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  6. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose to the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that minimum X-ray attenuation occurred. Second, optimization was done on the selection of the anode shape based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to bring the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes were considered: cylindrical, spherical, and conical. Moreover, a Computational Fluid Dynamics (CFD) code was used to evaluate the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy. The characterization criteria for the CFD were the minimum temperature on the anode shape, the cooling water, and the pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  7. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; these robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
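
The free distance mentioned above is the minimum Hamming weight of any codeword path that diverges from and re-merges with the all-zero path. The report's formula for rate-1/n codes is not reproduced here; instead, a hedged sketch computes the free distance of the classic rate-1/2, constraint-length-3 code (generators 7 and 5 octal) by shortest-path relaxation over the trellis:

```python
# Free distance via shortest paths on the code trellis: force one input
# '1' to diverge from the all-zero state, then find the minimum-weight
# path back to state 0. For the (7,5) code the answer is the well-known
# dfree = 5.

def parity(x):
    return bin(x).count("1") & 1

def free_distance(generators, memory):
    n_states = 1 << memory

    def branch(state, bit):
        reg = (bit << memory) | state                 # shift register contents
        weight = sum(parity(reg & g) for g in generators)
        return reg >> 1, weight                        # next state, output weight

    INF = float("inf")
    dist = [INF] * n_states
    start, w0 = branch(0, 1)       # diverge from all-zero state with input 1
    dist[start] = w0
    best_merge = INF
    for _ in range(4 * n_states):  # enough relaxation rounds for a fixpoint
        for s in range(n_states):
            if dist[s] == INF:
                continue
            for bit in (0, 1):
                t, w = branch(s, bit)
                if t == 0:                             # re-merged with zero path
                    best_merge = min(best_merge, dist[s] + w)
                elif dist[s] + w < dist[t]:
                    dist[t] = dist[s] + w
    return best_merge

dfree = free_distance(generators=(0b111, 0b101), memory=2)   # the (7,5) code
```

The same relaxation works for any small rate-1/n code by passing n generator polynomials.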

  8. Data and Tools | Concentrating Solar Power | NREL

    Science.gov Websites

    Solar Power tower Integrated Layout and Optimization Tool (SolarPILOT(tm)): the SolarPILOT code combines the rapid layout and optimization capability of the analytical DELSOL3 program with the accuracy and

  9. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC (3969,3720) code; in this way, the code rate of LDPC codes can better meet the high requirements of optical transmission systems. The novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code at a bit error rate (BER) of 10^-7.

  10. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using a Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.

  11. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss which codes provide optimal performance for a memory.
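
The selection criterion stated above, minimizing the ratio of maximum to minimum pairwise Hamming distance among codewords, is easy to compute for small constant-weight codes. A hedged sketch (the helper names and the tiny example code are illustrative, not the paper's 64 × 64 construction):

```python
from itertools import combinations

# Enumerate a constant-weight code and evaluate the paper's criterion:
# the ratio max_distance / min_distance over all distinct codeword pairs.

def constant_weight_code(n, w):
    """All length-n binary words of Hamming weight w, as integer bitmasks."""
    return [sum(1 << i for i in pos) for pos in combinations(range(n), w)]

def distance_ratio(code):
    dists = [bin(a ^ b).count("1") for a, b in combinations(code, 2)]
    return max(dists) / min(dists)

code = constant_weight_code(4, 2)   # 6 codewords of weight 2
ratio = distance_ratio(code)        # pairwise distances here are 2 or 4
```

Smaller ratios are better under the criterion; comparing `distance_ratio` across candidate (n, w) codes is the code-theoretic filter the abstract describes.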

  12. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  13. Transoptr — A second order beam transport design code with optimization and constraints

    NASA Astrophysics Data System (ADS)

    Heighway, E. A.; Hutcheon, R. M.

    1981-08-01

    This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.

  14. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.

  15. SMT-Aware Instantaneous Footprint Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Probir; Liu, Xu; Song, Shuaiwen

    Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.

  16. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  17. Coupling between a multi-physics workflow engine and an optimization framework

    NASA Astrophysics Data System (ADS)

    Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.

    2016-03-01

    A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content in each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization. Consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization along various figures of merit and constraints.
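
The optimizer side of such a coupling can be sketched as a minimal genetic algorithm: selection of the fittest half, crossover, and mutation, with the objective function standing in for a full workflow evaluation. This is a hedged toy, not URANIE's implementation; all names, rates, and the quadratic objective are assumptions:

```python
import random

# Minimal elitist GA: keep the best half as parents, breed children by
# averaging two parents (crossover) and adding Gaussian noise (mutation).
# In the coupled setting, objective(x) would dispatch x to the workflow
# engine over a socket and read back the figure of merit.

def evolve(objective, bounds, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        parents = scored[: pop_size // 2]               # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                       # crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))   # mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=objective)

best = evolve(lambda x: (x - 1.5) ** 2, bounds=(-5.0, 5.0))
```

Because the GA only needs objective values, not gradients, it tolerates the non-valid (failed-workflow) evaluations the abstract mentions: such points can simply be assigned an infinite objective.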

  18. New optimal asymmetric quantum codes constructed from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Lü, Liangdong

    2017-02-01

    In this paper, we propose the construction of asymmetric quantum codes from two families of constacyclic codes over the finite field 𝔽_(q^2) of code length n, where for the first family, q is an odd prime power of the form 4t + 1 (t ≥ 1 an integer) or 4t - 1 (t ≥ 2 an integer) and n1 = (q^2 + 1)/2; for the second family, q is an odd prime power of the form 10t + 3 or 10t + 7 (t ≥ 0 an integer) and n2 = (q^2 + 1)/5. As a result, families of new asymmetric quantum codes [[n,k,dz/dx
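
A quick numeric sanity check of the stated code lengths (the function names and sample values of q are mine): for odd q the length n1 = (q^2 + 1)/2 is always an integer, and for q ≡ 3 or 7 (mod 10) we have q^2 ≡ 9 (mod 10), so n2 = (q^2 + 1)/5 is also an integer.

```python
# Verify integrality of the two code-length formulas from the abstract.

def n1(q):
    assert q % 4 in (1, 3)          # first family: q = 4t + 1 or 4t - 1
    assert (q * q + 1) % 2 == 0     # q odd => q^2 + 1 even
    return (q * q + 1) // 2

def n2(q):
    assert q % 10 in (3, 7)         # second family: q = 10t + 3 or 10t + 7
    assert (q * q + 1) % 5 == 0     # q^2 = 9 (mod 10) => q^2 + 1 divisible by 5
    return (q * q + 1) // 5

lengths = n1(5), n1(7), n2(3), n2(13)
```

For example q = 5 gives n1 = 13 and q = 13 gives n2 = 34.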

  19. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, intra-frame QP and inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.
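
The Nash bargaining solution used above allocates the bit budget so as to maximize the product of each CTU's utility gain over its disagreement point. A hedged brute-force sketch with a stand-in concave utility (the paper's actual mixed R-D utility and iterative solver are not reproduced; all names and the square-root utility are assumptions):

```python
from itertools import product

# Exhaustively search integer bit allocations summing to the budget and
# keep the one maximizing the Nash product of utility gains above each
# CTU's minimum utility (the disagreement point).

def nash_allocation(budget, min_utils, step=1):
    def utility(b):
        return (1 + b) ** 0.5          # concave stand-in R-D utility
    best, best_alloc = -1.0, None
    choices = range(0, budget + 1, step)
    for alloc in product(choices, repeat=len(min_utils)):
        if sum(alloc) != budget:
            continue
        gains = [utility(b) - d for b, d in zip(alloc, min_utils)]
        if min(gains) <= 0:            # every CTU must beat its minimum
            continue
        nash_product = 1.0
        for g in gains:
            nash_product *= g
        if nash_product > best:
            best, best_alloc = nash_product, alloc
    return best_alloc

alloc = nash_allocation(budget=8, min_utils=[0.0, 0.0])
```

With symmetric utilities and disagreement points, the NBS splits the budget evenly, which is the fairness property that motivates the bargaining formulation.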

  20. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
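
Capacity-style comparisons like the one proposed above reduce, in the simplest binary model, to evaluating channel capacity over the noise levels a scheme must tolerate. A hedged illustration using the standard binary symmetric channel formula C = 1 - H2(p) (this is textbook material, not the paper's exact average-channel-capacity metric):

```python
from math import log2

# Binary symmetric channel capacity: C = 1 - H2(p), where H2 is the
# binary entropy of the crossover probability p. Averaging C over the
# noise distribution a barcode scheme survives gives a capacity-style
# figure of merit in the spirit of the paper's comparison.

def h2(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

caps = [bsc_capacity(p) for p in (0.0, 0.11, 0.5)]
# a noiseless channel carries 1 bit/use; at p = 0.5 the channel is useless
```

Around p ≈ 0.11 the capacity is roughly half a bit per use, which illustrates how quickly speckle-like noise erodes the data storage capacity an error-correcting scheme can exploit.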

  1. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.

  2. A dual-route approach to orthographic processing.

    PubMed

    Grainger, Jonathan; Ziegler, Johannes C

    2011-01-01

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).

  3. A Dual-Route Approach to Orthographic Processing

    PubMed Central

    Grainger, Jonathan; Ziegler, Johannes C.

    2011-01-01

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes). PMID:21716577

  4. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance requires utilizing multiple cores, wide vector operations, and efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.

  5. Time domain topology optimization of 3D nanophotonic devices

    NASA Astrophysics Data System (ADS)

    Elesin, Y.; Lazarov, B. S.; Jensen, J. S.; Sigmund, O.

    2014-02-01

    We present an efficient parallel topology optimization framework for the design of large scale 3D nanophotonic devices. The code shows excellent scalability and is demonstrated for the optimization of a broadband frequency splitter, a waveguide intersection, a photonic crystal-based waveguide, and a nanowire-based waveguide. The obtained results are compared to simplified 2D studies, and we demonstrate that 3D topology optimization may lead to significant performance improvements.

  6. The Microwave Temperature Profiler (MTP)

    NASA Technical Reports Server (NTRS)

    Lim, Boon; Mahoney, Michael; Haggerty, Julie; Denning, Richard

    2013-01-01

    The JPL-developed Microwave Temperature Profiler (MTP) has recently participated in the GloPac, HIPPO (I to V), and TORERO campaigns, and in the ongoing ATTREX campaign. The MTP is now capable of supporting the NASA Global Hawk, and a new canister version supports the NCAR G-V. The primary product from the MTP is remote measurement of the atmospheric temperature at, above, and below the flight path, providing the vertical state of the atmosphere. The NCAR MTP has demonstrated unprecedented instrument performance and calibration, with a flight-level temperature error of plus or minus 0.2 K. Derived products include curtain plots, isentropes, lapse rate, cold point height, and tropopause height.

  7. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to be suitable for nonlinear phase noise dominated channels. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed to be used in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
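    The metric substitution described above can be illustrated generically: for a Gaussian-noise-dominated channel, maximum-likelihood detection reduces to minimizing squared Euclidean distance, and a phase-noise-dominated channel calls for swapping in a different log-likelihood metric. A minimal sketch of a metric-parameterized detector (the QPSK points and the metric shown are illustrative assumptions, not the paper's optimized constellations):

```python
def euclidean_metric(r, s):
    # Squared Euclidean distance: the ML decision metric when the
    # noise is additive, white, and Gaussian.
    return (r[0] - s[0]) ** 2 + (r[1] - s[1]) ** 2

def detect(received, constellation, metric=euclidean_metric):
    """Pick the constellation point minimizing the decision metric.
    For other channels, pass a channel-specific (log-)likelihood metric."""
    return min(constellation, key=lambda s: metric(received, s))

# Illustrative QPSK constellation (in-phase, quadrature).
QPSK = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```

    The design point of the paper is that both the constellation itself and the metric passed to the detector change when nonlinear phase noise dominates.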

  8. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.
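    The growth in unknowns described above can be made concrete with a simple count: a 1D element of polynomial order p carries its 2 nodal values plus p-1 interior (bubble) values, and adjacent elements share nodes. A small illustrative tally (generic hp counting for one scalar field; GENCODE's actual bookkeeping for states, co-states, and controls is more involved):

```python
def element_dofs(p):
    """Dofs of one 1D element of order p: 2 nodal values plus (p - 1)
    interior values from the higher-order shape functions."""
    return 2 + (p - 1)

def mesh_dofs(orders):
    """Total dofs for one field on a 1D mesh with the given per-element
    orders; nodes are shared between adjacent elements, interior values
    are not."""
    nodes = len(orders) + 1
    interior = sum(p - 1 for p in orders)
    return nodes + interior
```

    An h-refinement adds nodes (more elements of the same order), while a p-refinement adds interior values to existing elements; the hp-version lets the solver mix both.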

  9. Transonic airfoil analysis and design in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. E.

    1986-01-01

    A nonuniform transonic airfoil code is developed for applications in analysis, inverse design and direct optimization involving an airfoil immersed in propfan slipstream. Problems concerning the numerical stability, convergence, divergence and solution oscillations are discussed. The code is validated by comparing with some known results in incompressible flow. A parametric investigation indicates that the airfoil lift-drag ratio can be increased by decreasing the thickness ratio. A better performance can be achieved if the airfoil is located below the slipstream center. Airfoil characteristics designed by the inverse method and a direct optimization are compared. The airfoil designed with the method of direct optimization exhibits better characteristics and achieves a gain of 22 percent in lift-drag ratio with a reduction of 4 percent in thickness.

  10. Code Compression for DSP

    DTIC Science & Technology

    1998-12-01

    …Automation Conference, June 1998. [Liao95] S. Liao, S. Devadas, K. Keutzer, "Code Density Optimization for Embedded DSP Processors Using Data Compression…

  11. CUBE: Information-optimized parallel cosmological N-body simulation code

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin

    2018-05-01

    CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. Particle-pairwise (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halofinder are included.
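    A budget of roughly 6 bytes per particle is plausible if positions are stored not as three 8-byte doubles but as short integer offsets relative to the mesh cell each particle occupies. A toy sketch of that idea (the unit cell size and 16-bit quantization here are assumptions for illustration, not CUBE's exact layout):

```python
import struct

CELL = 1.0       # mesh cell size in simulation units (assumed)
LEVELS = 65536   # 16-bit fixed point per dimension

def encode(pos, cell):
    """Quantize a position to a 16-bit offset within its mesh cell,
    one offset per dimension."""
    return tuple(int((p - c * CELL) / CELL * LEVELS) for p, c in zip(pos, cell))

def pack(offsets):
    # Three unsigned 16-bit offsets: 6 bytes per particle.
    return struct.pack("<3H", *offsets)

def decode(offsets, cell):
    """Reconstruct the position to within half a quantization step."""
    return tuple(c * CELL + (o + 0.5) / LEVELS * CELL for o, c in zip(offsets, cell))
```

    The cell index itself is implicit in where the particle is stored on the mesh, so only the intra-cell offset needs to be kept per particle.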

  12. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. Here, information is given on the IPOST code.

  13. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.
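    The flavor of a communication-avoiding algorithm can be conveyed with classical blocked matrix multiplication: operating on b-by-b tiles lets each tile brought into fast memory be reused about b times, cutting slow-memory traffic accordingly. The pure-Python sketch below illustrates the loop structure only, not the project's actual kernels or lower-bound analysis:

```python
def matmul_blocked(A, B, b=2):
    """Blocked (tiled) square matrix multiply. Each (ii, kk) tile of A and
    (kk, jj) tile of B is loaded once per tile-level iteration and reused
    across a whole b-by-b tile of C, which is the data-movement saving."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                # Accumulate the (ii, kk) x (kk, jj) tile product into C.
                for i in range(ii, min(ii + b, n)):
                    for j in range(jj, min(jj + b, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + b, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```

    Choosing b so that three b-by-b tiles fit in fast memory is what moves the algorithm toward the communication lower bound mentioned in the abstract.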

  14. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
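    The payoff of the sparsity figure quoted above is easy to see: with under 10 percent of Jacobian entries nonzero, a compressed sparse row (CSR) representation stores only the nonzeros plus two index arrays. A small illustrative sketch (a generic CSR conversion, not NZSOL's internal format):

```python
def to_csr(dense):
    """Compress a dense matrix to CSR form: nonzero values, their column
    indices, and row pointers delimiting each row's slice of the values."""
    vals, cols, rowptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0.0:
                vals.append(x)
                cols.append(j)
        rowptr.append(len(vals))
    return vals, cols, rowptr

def density(dense):
    """Fraction of entries that are nonzero."""
    nz = sum(1 for row in dense for x in row if x != 0.0)
    return nz / (len(dense) * len(dense[0]))
```

    A sparse SQP optimizer exploits exactly this structure so that factorizations and matrix-vector products scale with the nonzero count rather than the full matrix size.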

  15. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  16. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  17. Optimally combining dynamical decoupling and quantum error correction.

    PubMed

    Paz-Silva, Gerardo A; Lidar, D A

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization.

  18. Optimally combining dynamical decoupling and quantum error correction

    PubMed Central

    Paz-Silva, Gerardo A.; Lidar, D. A.

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization. PMID:23559088

  19. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
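    The weight-window mechanism that both methods drive can be sketched in a few lines: a particle whose statistical weight falls below the window plays Russian roulette, one above the window is split, and a particle inside the window continues unchanged, all while preserving the expected total weight. A toy illustration (the window bounds and survival weight here are hypothetical; production codes derive space- and energy-dependent windows from the adjoint solution):

```python
import math
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of particle weights that continue after the check."""
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv at
        # weight w_surv, so the expected weight is preserved.
        w_surv = (w_low + w_high) / 2.0
        return [w_surv] if rng.random() < weight / w_surv else []
    if weight > w_high:
        # Splitting: replace one heavy particle with n lighter copies.
        n = math.ceil(weight / w_high)
        return [weight / n] * n
    return [weight]
```

    Roulette removes low-importance particles that would waste compute, while splitting keeps high-importance particles from carrying too much variance into the tally, which is the source of the speed-ups quoted in the abstract.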

  20. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The WRF is a widely used weather prediction system, and its development is a collaborative effort around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. Unlike a GPU, the Intel MIC is capable of executing a full operating system and entire programs rather than just kernels. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on a Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.
