Sample records for crac2 computer code

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  2. CRAC2 model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
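
    For orientation, the sketch below shows the kind of straight-line Gaussian plume estimate of ground-level air concentration that atmospheric dispersion models in consequence codes of this generation are built around. It is not the CRAC2 implementation; the dispersion-coefficient fits and release parameters are illustrative assumptions only.

    ```python
    # Illustrative sketch only: a minimal straight-line Gaussian plume estimate of
    # ground-level air concentration. This is NOT the CRAC2 atmospheric dispersion
    # model; the sigma_y/sigma_z power-law fits and release parameters below are
    # assumed values for neutral stability, chosen purely for illustration.
    import math

    def sigma_y(x_m: float) -> float:
        # Assumed horizontal dispersion coefficient (m) at downwind distance x.
        return 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)

    def sigma_z(x_m: float) -> float:
        # Assumed vertical dispersion coefficient (m) at downwind distance x.
        return 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)

    def ground_level_concentration(q_bq_per_s: float, u_m_per_s: float,
                                   x_m: float, y_m: float, release_height_m: float) -> float:
        """Ground-level concentration (Bq/m^3) for a continuous elevated release."""
        sy, sz = sigma_y(x_m), sigma_z(x_m)
        crosswind = math.exp(-0.5 * (y_m / sy) ** 2)
        # Ground reflection doubles the vertical Gaussian term at z = 0.
        vertical = 2.0 * math.exp(-0.5 * (release_height_m / sz) ** 2)
        return q_bq_per_s / (2.0 * math.pi * u_m_per_s * sy * sz) * crosswind * vertical

    if __name__ == "__main__":
        # Hypothetical release: 1e12 Bq/s at 30 m effective height in a 5 m/s wind.
        for x in (500.0, 1000.0, 5000.0, 10000.0):
            chi = ground_level_concentration(1e12, 5.0, x, 0.0, 30.0)
            print(f"x = {x:7.0f} m : chi ~ {chi:.3e} Bq/m^3")
    ```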

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.E.; Roussin, R.W.; Gilpin, H.

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.

  4. The clinician rating of adult communication (CRAC): a clinician's guide to the assessment of interpersonal communication skill.

    PubMed

    Basco, M R; Birchler, G R; Kalal, B; Talbott, R; Slater, M A

    1991-05-01

    This paper reports the results of an initial investigation of the psychometric properties of a new clinical marital communication assessment instrument, the Clinician Rating of Adult Communication (CRAC). The sample consisted of 36 marital communication samples from both maritally satisfied and distressed couples. Reliability results indicated that the CRAC demonstrated high levels of internal consistency, test-retest reliability, and interrater agreement. Support for the validity of the CRAC was found in its correspondence with a marital interaction coding system, its relationship to ratings of marital satisfaction, and its concordance with couples' perceptions of their conflict management behavior. Overall, these findings support the conclusion that the CRAC may provide a useful addition to the measurement armamentarium of the marital clinician and researcher.

  5. Amphipathic alpha-helices and putative cholesterol binding domains of the influenza virus matrix M1 protein are crucial for virion structure organisation.

    PubMed

    Tsfasman, Tatyana; Kost, Vladimir; Markushin, Stanislav; Lotte, Vera; Koptiaeva, Irina; Bogacheva, Elena; Baratova, Ludmila; Radyukhin, Victor

    2015-12-02

    The influenza virus matrix M1 protein is an amphitropic membrane-associated protein, forming the matrix layer immediately beneath the virus raft membrane, thereby ensuring the proper structure of the influenza virion. The objective of this study was to elucidate the fine structural characteristics of M1 that determine the amphitropic properties and raft membrane activities of the protein, via 3D in silico modelling with subsequent mutational analysis. Computer simulations suggest the amphipathic nature of the M1 α-helices and the existence of putative cholesterol binding (CRAC) motifs on six amphipathic α-helices. These findings explain for the first time many features of this protein, particularly its amphitropic properties and raft/cholesterol binding potential. To verify these results, we generated mutants of the A/WSN/33 strain via reverse genetics. The M1 mutations included F32Y in the CRAC of α-helix 2, W45Y and W45F in the CRAC of α-helix 3, Y100S in the CRAC of α-helix 6, M128A and M128S in the CRAC of α-helix 8 and a double L103I/L130I mutation in both a putative cholesterol consensus motif and the nuclear localisation signal. All mutations resulted in viruses with unusual filamentous morphology. Previous experimental data regarding the morphology of M1-gene mutant influenza viruses can now be explained in structural terms and are consistent with the pivotal role of the CRAC domains and amphipathic α-helices in M1-lipid interactions. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Stoichiometric requirements for trapping and gating of Ca2+ release-activated Ca2+ (CRAC) channels by stromal interaction molecule 1 (STIM1).

    PubMed

    Hoover, Paul J; Lewis, Richard S

    2011-08-09

    Store-operated Ca(2+) entry depends critically on physical interactions of the endoplasmic reticulum (ER) Ca(2+) sensor stromal interaction molecule 1 (STIM1) and the Ca(2+) release-activated Ca(2+) (CRAC) channel protein Orai1. Recent studies support a diffusion-trap mechanism in which ER Ca(2+) depletion causes STIM1 to accumulate at ER-plasma membrane (PM) junctions, where it binds to Orai1, trapping and activating mobile CRAC channels in the overlying PM. To determine the stoichiometric requirements for CRAC channel trapping and activation, we expressed mCherry-STIM1 and Orai1-GFP at varying ratios in HEK cells and quantified CRAC current (I(CRAC)) activation and the STIM1:Orai1 ratio at ER-PM junctions after store depletion. By competing for a limited amount of STIM1, high levels of Orai1 reduced the junctional STIM1:Orai1 ratio to a lower limit of 0.3-0.6, indicating that binding of one to two STIM1s is sufficient to immobilize the tetrameric CRAC channel at ER-PM junctions. In cells expressing a constant amount of STIM1, CRAC current was a highly nonlinear bell-shaped function of Orai1 expression and the minimum stoichiometry for channel trapping failed to evoke significant activation. Peak current occurred at a ratio of ∼2 STIM1:Orai1, suggesting that maximal CRAC channel activity requires binding of eight STIM1s to each channel. Further increases in Orai1 caused channel activity and fast Ca(2+)-dependent inactivation to decline in parallel. The data are well described by a model in which STIM1 binds to Orai1 with negative cooperativity and channels open with positive cooperativity as a result of stabilization of the open state by STIM1.
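
    As a reading aid, the stoichiometric arithmetic implied by this abstract can be written out as below, assuming (as stated in the abstract) a tetrameric channel of four Orai1 subunits; the notation is ours, not the authors'.

    ```latex
    % Worked stoichiometry implied by the abstract (tetrameric CRAC channel assumed):
    \begin{align*}
    \text{trapping limit:}\quad & (0.3\text{--}0.6)\ \tfrac{\text{STIM1}}{\text{Orai1}}
        \times 4\ \tfrac{\text{Orai1}}{\text{channel}} \approx 1\text{--}2\ \tfrac{\text{STIM1}}{\text{channel}}\\
    \text{peak activation:}\quad & 2\ \tfrac{\text{STIM1}}{\text{Orai1}}
        \times 4\ \tfrac{\text{Orai1}}{\text{channel}} = 8\ \tfrac{\text{STIM1}}{\text{channel}}
    \end{align*}
    ```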

  7. Characterization of long-chain acyl-CoA synthetases which stimulate secretion of fatty acids in green algae Chlamydomonas reinhardtii.

    PubMed

    Jia, Bin; Song, Yanzi; Wu, Min; Lin, Baicheng; Xiao, Kang; Hu, Zhangli; Huang, Ying

    2016-01-01

    Microalgal biofuel has become one of the most promising renewable energy sources over the past few years, but its high cost remains a limitation. Although efforts have been made to enhance lipid productivity, the major cost problem in harvesting and oil extraction remains intractable. Fatty acid (FA) secretion, which could greatly facilitate algae harvesting and oil extraction, was therefore investigated here. The cDNAs of two long-chain acyl-CoA synthetase (LACS) genes were cloned from Chlamydomonas reinhardtii and named cracs1 and cracs2. They showed different substrate preferences in yeast complementation experiments: Cracs2 could utilize FAs C12:0, C14:0, C16:0, C18:0, C16:1 and C18:1, whereas Cracs1 could only utilize C14:0, C16:1 and C18:1. Knockdown of cracs1 and cracs2 in C. reinhardtii resulted in accumulation of intracellular lipids. The total intracellular lipid contents of the transgenic algae q-15 (cracs1 knockdown) and p-13 (cracs2 knockdown) were 45% and 55% higher, respectively, than that of cc849. Furthermore, FA secretion was observed in both transgenic lines, reaching 8.19 and 9.66 mg/10(9) cells in q-15 and p-13, respectively. These results demonstrate that FA secretion by microalgae is feasible and suggest a new strategy for low-cost oil extraction. Based on these findings, we propose that FA secretion may also be achieved in species other than Chlamydomonas reinhardtii by knocking down cracs genes, which may promote the future industrial application of microalgal biofuels.

  8. Activating mutations in STIM1 and ORAI1 cause overlapping syndromes of tubular myopathy and congenital miosis

    PubMed Central

    Nesin, Vasyl; Wiley, Graham; Kousi, Maria; Ong, E-Ching; Lehmann, Thomas; Nicholl, David J.; Suri, Mohnish; Shahrizaila, Nortina; Katsanis, Nicholas; Gaffney, Patrick M.; Wierenga, Klaas J.; Tsiokas, Leonidas

    2014-01-01

    Signaling through the store-operated Ca2+ release-activated Ca2+ (CRAC) channel regulates critical cellular functions, including gene expression, cell growth and differentiation, and Ca2+ homeostasis. Loss-of-function mutations in the CRAC channel pore-forming protein ORAI1 or the Ca2+ sensing protein stromal interaction molecule 1 (STIM1) result in severe immune dysfunction and nonprogressive myopathy. Here, we identify gain-of-function mutations in the cytoplasmic domain of STIM1 (p.R304W) associated with thrombocytopenia, bleeding diathesis, miosis, and tubular myopathy in patients with Stormorken syndrome, and in ORAI1 (p.P245L), associated with a Stormorken-like syndrome of congenital miosis and tubular aggregate myopathy but without hematological abnormalities. Heterologous expression of STIM1 p.R304W results in constitutive activation of the CRAC channel in vitro, and spontaneous bleeding accompanied by reduced numbers of thrombocytes in zebrafish embryos, recapitulating key aspects of Stormorken syndrome. p.P245L in ORAI1 does not make a constitutively active CRAC channel, but suppresses the slow Ca2+-dependent inactivation of the CRAC channel, thus also functioning as a gain-of-function mutation. These data expand our understanding of the phenotypic spectrum of dysregulated CRAC channel signaling, advance our knowledge of the molecular function of the CRAC channel, and suggest new therapies aiming at attenuating store-operated Ca2+ entry in the treatment of patients with Stormorken syndrome and related pathologic conditions. PMID:24591628

  9. Dental enamel cells express functional SOCE channels

    PubMed Central

    Nurbaeva, Meerim K.; Eckstein, Miriam; Concepcion, Axel R.; Smith, Charles E.; Srikanth, Sonal; Paine, Michael L.; Gwack, Yousang; Hubbard, Michael J.; Feske, Stefan; Lacruz, Rodrigo S.

    2015-01-01

    Dental enamel formation requires large quantities of Ca2+, yet the mechanisms mediating Ca2+ dynamics in enamel cells are unclear. Store-operated Ca2+ entry (SOCE) channels are important Ca2+ influx mechanisms in many cells. SOCE involves release of Ca2+ from intracellular pools followed by Ca2+ entry. The best-characterized SOCE channels are the Ca2+ release-activated Ca2+ (CRAC) channels. As patients with mutations in the CRAC channel genes STIM1 and ORAI1 show abnormal enamel mineralization, we hypothesized that CRAC channels might be an important Ca2+ uptake mechanism in enamel cells. Investigating primary murine enamel cells, we found that key components of CRAC channels (ORAI1, ORAI2, ORAI3, STIM1, STIM2) were expressed and most abundant during the maturation stage of enamel development. Furthermore, inositol 1,4,5-trisphosphate receptor (IP3R) but not ryanodine receptor (RyR) expression was high in enamel cells, suggesting that IP3Rs are the main ER Ca2+ release mechanism. Passive depletion of ER Ca2+ stores with thapsigargin resulted in a significant rise in [Ca2+]i, consistent with SOCE. In cells pre-treated with the CRAC channel blocker Synta-66, Ca2+ entry was significantly inhibited. These data demonstrate that enamel cells have SOCE mediated by CRAC channels and implicate them as a mechanism for Ca2+ uptake in enamel formation. PMID:26515404

  10. Dental enamel cells express functional SOCE channels.

    PubMed

    Nurbaeva, Meerim K; Eckstein, Miriam; Concepcion, Axel R; Smith, Charles E; Srikanth, Sonal; Paine, Michael L; Gwack, Yousang; Hubbard, Michael J; Feske, Stefan; Lacruz, Rodrigo S

    2015-10-30

    Dental enamel formation requires large quantities of Ca(2+), yet the mechanisms mediating Ca(2+) dynamics in enamel cells are unclear. Store-operated Ca(2+) entry (SOCE) channels are important Ca(2+) influx mechanisms in many cells. SOCE involves release of Ca(2+) from intracellular pools followed by Ca(2+) entry. The best-characterized SOCE channels are the Ca(2+) release-activated Ca(2+) (CRAC) channels. As patients with mutations in the CRAC channel genes STIM1 and ORAI1 show abnormal enamel mineralization, we hypothesized that CRAC channels might be an important Ca(2+) uptake mechanism in enamel cells. Investigating primary murine enamel cells, we found that key components of CRAC channels (ORAI1, ORAI2, ORAI3, STIM1, STIM2) were expressed and most abundant during the maturation stage of enamel development. Furthermore, inositol 1,4,5-trisphosphate receptor (IP3R) but not ryanodine receptor (RyR) expression was high in enamel cells, suggesting that IP3Rs are the main ER Ca(2+) release mechanism. Passive depletion of ER Ca(2+) stores with thapsigargin resulted in a significant rise in [Ca(2+)]i, consistent with SOCE. In cells pre-treated with the CRAC channel blocker Synta-66, Ca(2+) entry was significantly inhibited. These data demonstrate that enamel cells have SOCE mediated by CRAC channels and implicate them as a mechanism for Ca(2+) uptake in enamel formation.

  11. Physiological roles of STIM1 and Orai1 homologs and CRAC channels in the genetic model organism Caenorhabditis elegans

    PubMed Central

    Strange, Kevin; Yan, Xiaohui; Lorin-Nebel, Catherine; Xing, Juan

    2007-01-01

    Summary: The nematode Caenorhabditis elegans provides numerous experimental advantages for developing an integrative molecular understanding of physiological processes and has proven to be a valuable model for characterizing Ca2+ signaling mechanisms. This review will focus on the role of Ca2+ release-activated Ca2+ (CRAC) channel activity in the function of the worm gonad and intestine. Inositol 1,4,5-trisphosphate (IP3)-dependent oscillatory Ca2+ signaling regulates contractile activity of the gonad and rhythmic posterior body wall muscle contraction (pBoc) required for ovulation and defecation, respectively. The C. elegans genome contains a single homolog of both STIM1 and Orai1, proteins required for CRAC channel function in mammalian and Drosophila cells. C. elegans STIM-1 and ORAI-1 are coexpressed in the worm gonad and intestine and give rise to robust CRAC channel activity when coexpressed in HEK293 cells. STIM-1 or ORAI-1 knockdown causes complete sterility, demonstrating that the genes are essential components of gonad Ca2+ signaling. Knockdown of either protein dramatically inhibits intestinal cell CRAC channel activity, but surprisingly has no effect on pBoc, intestinal Ca2+ oscillations or intestinal ER Ca2+ store homeostasis. CRAC channels thus do not play obligate roles in all IP3-dependent signaling processes in C. elegans. Instead, we suggest that CRAC channels carry out highly specialized and cell-specific signaling roles and that they may function as a failsafe mechanism to prevent Ca2+ store depletion under pathophysiological and stress conditions. PMID:17376526

  12. CRAC channel activity in C. elegans is mediated by Orai1 and STIM1 homologues and is essential for ovulation and fertility

    PubMed Central

    Lorin-Nebel, Catherine; Xing, Juan; Yan, Xiaohui; Strange, Kevin

    2007-01-01

    The Ca2+ release-activated Ca2+ (CRAC) channel is a plasma membrane Ca2+ entry pathway activated by endoplasmic reticulum (ER) Ca2+ store depletion. STIM1 proteins function as ER Ca2+ sensors and regulate CRAC channel activation. Recent studies have demonstrated that CRAC channels are encoded by the human Orai1 gene and a homologous Drosophila gene. C. elegans intestinal cells express a store-operated Ca2+ channel (SOCC) regulated by STIM-1. We cloned a full-length C. elegans cDNA that encodes a 293 amino acid protein, ORAI-1, homologous to human and Drosophila Orai1 proteins. ORAI-1 GFP reporters are co-expressed with STIM-1 in the gonad and intestine. Inositol 1,4,5-trisphosphate (IP3)-dependent Ca2+ signalling regulates C. elegans gonad function, fertility and rhythmic posterior body wall muscle contraction (pBoc) required for defecation. RNA interference (RNAi) silencing of orai-1 expression phenocopies stim-1 knockdown and causes sterility and prevents intestinal cell SOCC activation, but has no effect on pBoc or intestinal Ca2+ signalling. Orai-1 RNAi suppresses pBoc defects induced by intestinal expression of a STIM-1 Ca2+-binding mutant, indicating that the proteins function in a common pathway. Co-expression of stim-1 and orai-1 cDNAs in HEK293 cells induces large inwardly rectifying cation currents activated by ER Ca2+ depletion. The properties of this current recapitulate those of the native SOCC current. We conclude that C. elegans expresses bona fide CRAC channels that require the function of Orai1- and STIM1-related proteins. CRAC channels thus arose very early in animal evolution. In C. elegans, CRAC channels do not play obligate roles in all IP3-dependent signalling processes and ER Ca2+ homeostasis. Instead, we suggest that CRAC channels carry out highly specialized and cell-specific signalling roles and that they may function as a failsafe mechanism to prevent Ca2+ store depletion under pathophysiological and stress conditions. PMID:17218360

  13. Transcriptome-wide Analysis of Exosome Targets

    PubMed Central

    Schneider, Claudia; Kudla, Grzegorz; Wlotzka, Wiebke; Tuck, Alex; Tollervey, David

    2012-01-01

    Summary The exosome plays major roles in RNA processing and surveillance but the in vivo target range and substrate acquisition mechanisms remain unclear. Here we apply in vivo RNA crosslinking (CRAC) to the nucleases (Rrp44, Rrp6), two structural subunits (Rrp41, Csl4) and a cofactor (Trf4) of the yeast exosome. Analysis of wild-type Rrp44 and catalytic mutants showed that both the CUT and SUT classes of non-coding RNA, snoRNAs and, most prominently, pre-tRNAs and other Pol III transcripts are targeted for oligoadenylation and exosome degradation. Unspliced pre-mRNAs were also identified as targets for Rrp44 and Rrp6. CRAC performed using cleavable proteins (split-CRAC) revealed that Rrp44 endonuclease and exonuclease activities cooperate on most substrates. Mapping oligoadenylated reads suggests that the endonuclease activity may release stalled exosome substrates. Rrp6 was preferentially associated with structured targets, which frequently did not associate with the core exosome indicating that substrates follow multiple pathways to the nucleases. PMID:23000172

  14. Discovery and structural optimization of 1-phenyl-3-(1-phenylethyl)urea derivatives as novel inhibitors of CRAC channel.

    PubMed

    Zhang, Hai-zhen; Xu, Xiao-lan; Chen, Hua-yan; Ali, Sher; Wang, Dan; Yu, Jun-wei; Xu, Tao; Nan, Fa-jun

    2015-09-01

    The Ca(2+)-release-activated Ca(2+) (CRAC) channel, a subfamily of store-operated channels, is formed by calcium release-activated calcium modulator 1 (ORAI1) and gated by stromal interaction molecule 1 (STIM1). The CRAC channel may be a novel target for the treatment of immune disorders and allergy. The aim of this study was to identify novel small-molecule CRAC channel inhibitors. HEK293 cells stably co-expressing both ORAI1 and STIM1 were used for high-throughput screening. A hit, 1-phenyl-3-(1-phenylethyl)urea, was identified that inhibited CRAC channels by targeting ORAI1. Five series of its derivatives were designed and synthesized, and their primary structure-activity relationships (SARs) were analyzed. All derivatives were assessed for their effects on Ca(2+) influx through CRAC channels in HEK293 cells, cytotoxicity in Jurkat cells, and IL-2 production in Jurkat cells expressing ORAI1-SS-eGFP. A total of 19 hits were discovered in libraries containing 32 000 compounds using high-throughput screening. 1-Phenyl-3-(1-phenylethyl)urea inhibited Ca(2+) influx with an IC50 of 3.25±0.17 μmol/L. The SAR study of its derivatives showed that the alkyl substituent on the α-position of the left-side benzylic amine (R1) was essential for Ca(2+) influx inhibition and that the S-configuration was better than the R-configuration. The derivatives in which the right-side R3 was substituted by an electron-donating group showed more potent inhibitory activity than those substituted by electron-withdrawing groups. Furthermore, the free N-H of urea was not necessary to maintain the high potency of Ca(2+) influx inhibition. The N,N'-disubstituted or N'-substituted derivatives showed relatively low cytotoxicity but maintained the ability to inhibit IL-2 production. Among them, compound 5b showed an improved inhibition of IL-2 production and low cytotoxicity. 1-Phenyl-3-(1-phenylethyl)urea is a novel CRAC channel inhibitor that specifically targets ORAI1. This study provides a new chemical scaffold for the design and development of CRAC channel inhibitors with improved Ca(2+) influx inhibition, immune inhibition and low cytotoxicity.

  15. Accelerated Activation of SOCE Current in Myotubes from Two Mouse Models of Anesthetic- and Heat-Induced Sudden Death

    PubMed Central

    Yarotskyy, Viktor; Protasi, Feliciano; Dirksen, Robert T.

    2013-01-01

    Store-operated calcium entry (SOCE) channels play an important role in Ca2+ signaling. Recently, excessive SOCE was proposed to play a central role in the pathogenesis of malignant hyperthermia (MH), a pharmacogenic disorder of skeletal muscle. We tested this hypothesis by characterizing SOCE current (ISkCRAC) magnitude, voltage dependence, and rate of activation in myotubes derived from two mouse models of anesthetic- and heat-induced sudden death: 1) type 1 ryanodine receptor (RyR1) knock-in mice (Y524S/+) and 2) calsequestrin 1 and 2 double knock-out (dCasq-null) mice. ISkCRAC voltage dependence and magnitude at -80 mV were not significantly different in myotubes derived from wild type (WT), Y524S/+ and dCasq-null mice. However, the rate of ISkCRAC activation upon repetitive depolarization was significantly faster at room temperature in myotubes from Y524S/+ and dCasq-null mice. In addition, the maximum rate of ISkCRAC activation in dCasq-null myotubes was also faster than WT at more physiological temperatures (35-37°C). Azumolene (50 µM), a more water-soluble analog of dantrolene that is used to reverse MH crises, failed to alter ISkCRAC density or rate of activation. Together, these results indicate that an increased rate of ISkCRAC activation is a common characteristic of myotubes derived from Y524S/+ and dCasq-null mice, and that the protective effects of azumolene are not due to a direct inhibition of SOCE channels. PMID:24143248

  16. A mirror code for protein-cholesterol interactions in the two leaflets of biological membranes

    NASA Astrophysics Data System (ADS)

    Fantini, Jacques; di Scala, Coralie; Evans, Luke S.; Williamson, Philip T. F.; Barrantes, Francisco J.

    2016-02-01

    Cholesterol controls the activity of a wide range of membrane receptors through specific interactions and identifying cholesterol recognition motifs is therefore critical for understanding signaling receptor function. The membrane-spanning domains of the paradigm neurotransmitter receptor for acetylcholine (AChR) display a series of cholesterol consensus domains (referred to as “CARC”). Here we use a combination of molecular modeling, lipid monolayer/mutational approaches and NMR spectroscopy to study the binding of cholesterol to a synthetic CARC peptide. The CARC-cholesterol interaction is of high affinity, lipid-specific, concentration-dependent, and sensitive to single-point mutations. The CARC motif is generally located in the outer membrane leaflet and its reverse sequence CRAC in the inner one. Their simultaneous presence within the same transmembrane domain obeys a “mirror code” controlling protein-cholesterol interactions in the outer and inner membrane leaflets. Deciphering this code enabled us to elaborate guidelines for the detection of cholesterol-binding motifs in any membrane protein. Several representative examples of neurotransmitter receptors and ABC transporters with the dual CARC/CRAC motifs are presented. The biological significance and potential clinical applications of the mirror code are discussed.

  17. An essential and NSF independent role for α-SNAP in store-operated calcium entry.

    PubMed

    Miao, Yong; Miner, Cathrine; Zhang, Lei; Hanson, Phyllis I; Dani, Adish; Vig, Monika

    2013-07-16

    Store-operated calcium entry (SOCE) by calcium release activated calcium (CRAC) channels constitutes a primary route of calcium entry in most cells. Orai1 forms the pore subunit of CRAC channels and Stim1 is the endoplasmic reticulum (ER) resident Ca(2+) sensor. Upon store-depletion, Stim1 translocates to domains of ER adjacent to the plasma membrane where it interacts with and clusters Orai1 hexamers to form the CRAC channel complex. Molecular steps enabling activation of SOCE via CRAC channel clusters remain incompletely defined. Here we identify an essential role of α-SNAP in mediating functional coupling of Stim1 and Orai1 molecules to activate SOCE. This role for α-SNAP is direct and independent of its known activity in NSF dependent SNARE complex disassembly. Importantly, Stim1-Orai1 clustering still occurs in the absence of α-SNAP but its inability to support SOCE reveals that a previously unsuspected molecular re-arrangement within CRAC channel clusters is necessary for SOCE. DOI:http://dx.doi.org/10.7554/eLife.00802.001.

  18. Molecular basis of activation of the arachidonate-regulated Ca2+ (ARC) channel, a store-independent Orai channel, by plasma membrane STIM1

    PubMed Central

    Thompson, Jill L; Shuttleworth, Trevor J

    2013-01-01

    Currently, Orai proteins are known to encode two distinct agonist-activated, highly calcium-selective channels: the store-operated Ca2+ release-activated Ca2+ (CRAC) channels, and the store-independent, arachidonic acid-activated ARC channels. Surprisingly, whilst the trigger for activation of these channels is entirely different, both depend on stromal interacting molecule 1 (STIM1). However, whilst STIM1 in the endoplasmic reticulum membrane is the critical sensor for the depletion of this calcium store that triggers CRAC channel activation, it is the pool of STIM1 constitutively resident in the plasma membrane that is essential for activation of the ARC channels. Here, using a variety of approaches, we show that the key domains within the cytosolic part of STIM1 identified as critical for the activation of CRAC channels are also key for activation of the ARC channels. However, examination of the actual steps involved in such activation reveal marked differences between these two Orai channel types. Specifically, loss of calcium from the EF-hand of STIM1 that forms the key initiation point for activation of the CRAC channels has no effect on ARC channel activity. Secondly, in marked contrast to the dynamic and labile nature of interactions between STIM1 and the CRAC channels, STIM1 in the plasma membrane appears to be constitutively associated with the ARC channels. Finally, specific mutations in STIM1 that induce an extended, constitutively active, conformation for the CRAC channels actually prevent activation of the ARC channels by arachidonic acid. Based on these findings, we propose that the likely role of arachidonic acid lies in inducing the actual gating of the channel. PMID:23690558

  19. The effect of long-term administered CRAC channels blocker on the functions of respiratory epithelium in guinea pig allergic asthma model.

    PubMed

    Sutovska, Martina; Kocmalova, Michaela; Joskova, Marta; Adamkov, Marian; Franova, Sona

    2015-04-01

    Previously, the therapeutic potency of a CRAC channel blocker was demonstrated by a significant decrease in airway smooth muscle hyperreactivity and by antitussive and anti-inflammatory effects. The major role of the respiratory epithelium in asthma pathogenesis was highlighted only recently, and CRAC channels have been proposed as the most significant route of Ca2+ entry into epithelial cells. The aim of the study was to analyse the impact of a long-term administered CRAC channel blocker on the airway epithelium, namely cytokine production and ciliary beat frequency (CBF), using an animal model of allergic asthma. Ovalbumin-induced allergic airway inflammation in guinea pigs was followed by long-term (14-day) therapy with the CRAC blocker 3-fluoropyridine-4-carboxylic acid (FPCA). The influence of long-term therapy was analysed via cytokine levels (IL-4, IL-5 and IL-13) in BALF and in plasma, immunohistochemical staining of pulmonary tissue (c-Fos positivity), and CBF in vitro. Decreases in cytokine levels and in c-Fos positivity confirmed an anti-inflammatory effect of long-term administered FPCA. Cytokine levels in BALF and the distribution of c-Fos positivity suggested that FPCA was a more potent inhibitor of respiratory epithelium secretory functions than budesonide. FPCA and budesonide reduced CBF only insignificantly. All findings supported CRAC channels as a promising target in new antiasthmatic treatment strategies.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Meichun (Department of Physiology, Hubei University of Medicine, Shiyan); Li, Jianjie

    Mast cells play a key role in the pathogenesis of asthma and are a promising target for therapeutic intervention in asthma. This study investigated the effects of polydatin (PD), a resveratrol glucoside, on mast cell degranulation upon cross-linking of the high-affinity IgE receptors (FcεRI), as well as the anti-allergic activity of PD in vivo. Herein, we demonstrated that PD treatment for 30 min suppressed FcεRI-mediated mast cell degranulation in a dose-dependent manner. Concomitantly, PD significantly decreased the FcεRI-mediated Ca2+ increase in mast cells. The suppressive effects of PD on the FcεRI-mediated Ca2+ increase were largely inhibited by using LaCl3 to block the Ca2+ release-activated Ca2+ channels (CRACs). Furthermore, PD significantly inhibited Ca2+ entry through CRACs evoked by thapsigargin (TG). Knocking down protein expression of Orai1, the pore-forming subunit of CRACs, significantly decreased PD suppression of FcεRI-induced intracellular Ca2+ influx and mast cell degranulation. In a mouse model of mast cell-dependent passive cutaneous anaphylaxis (PCA), in vivo PD administration suppressed mast cell degranulation and inhibited anaphylaxis. Taken together, our data indicate that PD stabilizes mast cells by suppressing FcεRI-induced Ca2+ mobilization, mainly through inhibiting Ca2+ entry via CRACs, thus exerting a protective effect against PCA. Highlights: Polydatin can prevent the pathogenesis of passive cutaneous anaphylaxis in mice. Polydatin stabilizes mast cells by decreasing FcεRI-mediated degranulation. Polydatin suppresses Ca2+ entry through CRAC channels in mast cells.

  1. Mutations of the central tyrosines of putative cholesterol recognition amino acid consensus (CRAC) sequences modify folding, activity, and sterol-sensing of the human ABCG2 multidrug transporter.

    PubMed

    Gál, Zita; Hegedüs, Csilla; Szakács, Gergely; Váradi, András; Sarkadi, Balázs; Özvegy-Laczka, Csilla

    2015-02-01

    Human ABCG2 is a plasma membrane glycoprotein causing multidrug resistance in cancer. Membrane cholesterol and bile acids are efficient regulators of ABCG2 function, while the molecular nature of the sterol-sensing sites has not been elucidated. The cholesterol recognition amino acid consensus (CRAC, L/V-(X)(1-5)-Y-(X)(1-5)-R/K) sequence is one of the conserved motifs involved in cholesterol binding in several proteins. We have identified five potential CRAC motifs in the transmembrane domain of the human ABCG2 protein. In order to define their roles in sterol-sensing, the central tyrosines of these CRACs (Y413, 459, 469, 570 and 645) were mutated to S or F and the mutants were expressed both in insect and mammalian cells. We found that mutation in Y459 prevented protein expression; the Y469S and Y645S mutants lost their activity; while the Y570S, Y469F, and Y645F mutants retained function as well as cholesterol and bile acid sensitivity. We found that in the case of the Y413S mutant, drug transport was efficient, while modulation of the ATPase activity by cholesterol and bile acids was significantly altered. We suggest that the Y413 residue within a putative CRAC motif has a role in sterol-sensing and the ATPase/drug transport coupling in the ABCG2 multidrug transporter. Copyright © 2014. Published by Elsevier B.V.

  2. The Long and Arduous Road to CRAC

    PubMed Central

    Vig, Monika; Kinet, Jean-Pierre

    2007-01-01

    Store-operated calcium (SOC) entry is the major route of calcium influx in non-excitable cells, especially immune cells. The best-characterized store-operated current, ICRAC, is carried by calcium release-activated calcium (CRAC) channels. The existence of the phenomenon of store-operated calcium influx was proposed almost two decades ago. However, in spite of rigorous research by many laboratories, the identity of the key molecules participating in the process has remained a mystery. In all these years, multiple different approaches have been adopted by countless researchers to identify the molecular players in this fundamental process. Along the way many crucial discoveries have been made, some of which have been summarized here. The last couple of years have seen significant breakthroughs in the field: identification of STIM1 as the store Ca2+ sensor and CRACM1 (Orai1) as the pore-forming subunit of the CRAC channel. The field is now actively engaged in deciphering the gating mechanism of CRAC channels. We summarize here the latest progress in this direction. PMID:17517435

  3. Store-operated Ca2+ entry regulates Ca2+-activated chloride channels and eccrine sweat gland function.

    PubMed

    Concepcion, Axel R; Vaeth, Martin; Wagner, Larry E; Eckstein, Miriam; Hecht, Lee; Yang, Jun; Crottes, David; Seidl, Maximilian; Shin, Hyosup P; Weidinger, Carl; Cameron, Scott; Turvey, Stuart E; Issekutz, Thomas; Meyts, Isabelle; Lacruz, Rodrigo S; Cuk, Mario; Yule, David I; Feske, Stefan

    2016-11-01

    Eccrine sweat glands are essential for sweating and thermoregulation in humans. Loss-of-function mutations in the Ca2+ release-activated Ca2+ (CRAC) channel genes ORAI1 and STIM1 abolish store-operated Ca2+ entry (SOCE), and patients with these CRAC channel mutations suffer from anhidrosis and hyperthermia at high ambient temperatures. Here we have shown that CRAC channel-deficient patients and mice with ectodermal tissue-specific deletion of Orai1 (Orai1K14Cre) or Stim1 and Stim2 (Stim1/2K14Cre) failed to sweat despite normal sweat gland development. SOCE was absent in agonist-stimulated sweat glands from Orai1K14Cre and Stim1/2K14Cre mice and human sweat gland cells lacking ORAI1 or STIM1 expression. In Orai1K14Cre mice, abolishment of SOCE was associated with impaired chloride secretion by primary murine sweat glands. In human sweat gland cells, SOCE mediated by ORAI1 was necessary for agonist-induced chloride secretion and activation of the Ca2+-activated chloride channel (CaCC) anoctamin 1 (ANO1, also known as TMEM16A). By contrast, expression of TMEM16A, the water channel aquaporin 5 (AQP5), and other regulators of sweat gland function was normal in the absence of SOCE. Our findings demonstrate that Ca2+ influx via store-operated CRAC channels is essential for CaCC activation, chloride secretion, and sweat production in humans and mice.

  4. The Orai-1 and STIM-1 Complex Controls Human Dendritic Cell Maturation

    PubMed Central

    Félix, Romain; Crottès, David; Delalande, Anthony; Fauconnier, Jérémy; Lebranchu, Yvon; Le Guennec, Jean-Yves; Velge-Roussel, Florence

    2013-01-01

    Ca2+ signaling plays an important role in the function of dendritic cells (DC), the professional antigen-presenting cells. Here, we describe the role of Ca2+ release-activated Ca2+ (CRAC) channels in the maturation and cytokine secretion of human DC. Recent work identified STIM1 and Orai1 in human T lymphocytes as essential for CRAC channel activation. We investigated Ca2+ signaling during human DC maturation by imaging intracellular calcium signaling and using pharmacological inhibitors. The DC response to inflammatory mediators or PAMPs (pathogen-associated molecular patterns) is due to a depletion of intracellular Ca2+ stores that results in store-operated Ca2+ entry (SOCE). This Ca2+ influx was inhibited by 2-APB and exhibited a Ca2+ permeability similar to that of the CRAC (Ca2+ release-activated Ca2+) current found in T lymphocytes. Depending on the PAMPs used, SOCE profiles and amplitudes differed, suggesting the involvement of different CRAC channels. Using siRNA, we identified the STIM1 and Orai1 protein complex as one of the main pathways of Ca2+ entry for LPS- and TNF-α-induced maturation in DC. Cytokine secretion also appeared to be SOCE-dependent, with profile differences depending on the maturating agents: IL-12 and IL-10 secretion were highly sensitive to 2-APB, whereas IFN-γ was less affected. Altogether, these results clearly demonstrate that human DC maturation and cytokine secretion depend on SOCE signaling involving the STIM1 and Orai1 proteins. PMID:23700407

  5. PAR-CLIP data indicate that Nrd1-Nab3-dependent transcription termination regulates expression of hundreds of protein coding genes in yeast

    PubMed Central

    2014-01-01

    Background: Nrd1 and Nab3 are essential sequence-specific yeast RNA binding proteins that function as a heterodimer in the processing and degradation of diverse classes of RNAs. These proteins also regulate several mRNA coding genes; however, it remains unclear exactly what percentage of the mRNA component of the transcriptome these proteins control. To address this question, we used the pyCRAC software package developed in our laboratory to analyze CRAC and PAR-CLIP data for Nrd1-Nab3-RNA interactions. Results: We generated high-resolution maps of Nrd1-Nab3-RNA interactions, from which we have uncovered hundreds of new Nrd1-Nab3 mRNA targets, representing between 20 and 30% of protein-coding transcripts. Although Nrd1 and Nab3 showed a preference for binding near 5′ ends of relatively short transcripts, they bound transcripts throughout coding sequences and 3′ UTRs. Moreover, our data for Nrd1-Nab3 binding to 3′ UTRs was consistent with a role for these proteins in the termination of transcription. Our data also support a tight integration of Nrd1-Nab3 with the nutrient response pathway. Finally, we provide experimental evidence for some of our predictions, using northern blot and RT-PCR assays. Conclusions: Collectively, our data support the notion that Nrd1 and Nab3 function is tightly integrated with the nutrient response and indicate a role for these proteins in the regulation of many mRNA coding genes. Further, we provide evidence to support the hypothesis that Nrd1-Nab3 represents a failsafe termination mechanism in instances of readthrough transcription. PMID:24393166

  6. Use of recycled chunk rubber asphalt concrete (CRAC) on low volume roads and use of recycled crumb rubber modifier in asphalt pavements. Final report, June 1993-June 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, M.; Funk, L.P.; Sadeq, M.A.

    1995-06-01

    The major objective of this project was to formulate a Chunk Rubber Asphalt Concrete (CRAC) mix for use on low volume roads. CRAC is a rubber-modified asphalt concrete product produced by the "dry process," where rubber chunks of 1/2 inch size are used as aggregate in a cold mix with a type C fly ash. The second objective of this project was to develop guidelines concerning the use of rubber-modified asphalt concrete hot mix, to include: (1) Design methods for use of asphalt-rubber mix for new construction and overlay, (2) Mix design method for asphalt-rubber, and (3) Test method for determining the amount of rubber in an asphalt-rubber concrete for quality control purposes.

  7. Store-operated Ca2+ entry regulates Ca2+-activated chloride channels and eccrine sweat gland function

    PubMed Central

    Concepcion, Axel R.; Vaeth, Martin; Wagner, Larry E.; Eckstein, Miriam; Hecht, Lee; Yang, Jun; Crottes, David; Seidl, Maximilian; Shin, Hyosup P.; Weidinger, Carl; Cameron, Scott; Turvey, Stuart E.; Issekutz, Thomas; Meyts, Isabelle; Lacruz, Rodrigo S.; Cuk, Mario; Yule, David I.

    2016-01-01

    Eccrine sweat glands are essential for sweating and thermoregulation in humans. Loss-of-function mutations in the Ca2+ release–activated Ca2+ (CRAC) channel genes ORAI1 and STIM1 abolish store-operated Ca2+ entry (SOCE), and patients with these CRAC channel mutations suffer from anhidrosis and hyperthermia at high ambient temperatures. Here we have shown that CRAC channel–deficient patients and mice with ectodermal tissue–specific deletion of Orai1 (Orai1K14Cre) or Stim1 and Stim2 (Stim1/2K14Cre) failed to sweat despite normal sweat gland development. SOCE was absent in agonist-stimulated sweat glands from Orai1K14Cre and Stim1/2K14Cre mice and human sweat gland cells lacking ORAI1 or STIM1 expression. In Orai1K14Cre mice, abolishment of SOCE was associated with impaired chloride secretion by primary murine sweat glands. In human sweat gland cells, SOCE mediated by ORAI1 was necessary for agonist-induced chloride secretion and activation of the Ca2+-activated chloride channel (CaCC) anoctamin 1 (ANO1, also known as TMEM16A). By contrast, expression of TMEM16A, the water channel aquaporin 5 (AQP5), and other regulators of sweat gland function was normal in the absence of SOCE. Our findings demonstrate that Ca2+ influx via store-operated CRAC channels is essential for CaCC activation, chloride secretion, and sweat production in humans and mice. PMID:27721237

  8. Relevance of CARC and CRAC Cholesterol-Recognition Motifs in the Nicotinic Acetylcholine Receptor and Other Membrane-Bound Receptors.

    PubMed

    Di Scala, Coralie; Baier, Carlos J; Evans, Luke S; Williamson, Philip T F; Fantini, Jacques; Barrantes, Francisco J

    2017-01-01

    Cholesterol is a ubiquitous neutral lipid, which finely tunes the activity of a wide range of membrane proteins, including neurotransmitter and hormone receptors and ion channels. Given the scarcity of available X-ray crystallographic structures and the even fewer in which cholesterol sites have been directly visualized, application of in silico computational methods remains a valid alternative for the detection and thermodynamic characterization of cholesterol-specific sites in functionally important membrane proteins. The membrane-embedded segments of the paradigm neurotransmitter receptor for acetylcholine display a series of cholesterol consensus domains (which we have coined "CARC"). The CARC motif exhibits a preference for the outer membrane leaflet and its mirror motif, CRAC, for the inner one. Some membrane proteins possess the double CARC-CRAC sequences within the same transmembrane domain. In addition to in silico molecular modeling, the affinity, concentration dependence, and specificity of the cholesterol-recognition motif-protein interaction have recently found experimental validation in other biophysical approaches like monolayer techniques and nuclear magnetic resonance spectroscopy. From the combined studies, it becomes apparent that the CARC motif is now more firmly established as a high-affinity cholesterol-binding domain for membrane-bound receptors and remarkably conserved along phylogenetic evolution. © 2017 Elsevier Inc. All rights reserved.

  9. Environmental impact of highway construction and repair materials on surface and ground waters. Case study: crumb rubber asphalt concrete.

    PubMed

    Azizian, Mohammad F; Nelson, Peter O; Thayumanavan, Pugazhendhi; Williamson, Kenneth J

    2003-01-01

    The practice of incorporating certain waste products into highway construction and repair materials (CRMs) has become more popular. These practices have prompted the National Academy of Sciences' National Cooperative Highway Research Program (NCHRP) to research the possible impacts of these CRMs on the quality of surface and ground waters. State departments of transportation (DOTs) are currently experimenting with the use of ground tire rubber (crumb rubber) in bituminous construction and as a crack sealer. Crumb rubber asphalt concrete (CR-AC) leachates contain a mixture of organic and metallic contaminants. Benzothiazole and 2(3H)-benzothiazolone (organic compounds used in tire rubber manufacturing) and the metals mercury and aluminum were leached in potentially harmful concentrations (exceeding toxic concentrations in aquatic toxicity tests). CR-AC leachate exhibited moderate to high toxicity for algae (Selenastrum capricornutum) and moderate toxicity for water fleas (Daphnia magna). Benzothiazole was readily removed from CR-AC leachate by the environmental processes of soil sorption, volatilization, and biodegradation. Metals, which do not volatilize or photochemically or biologically degrade, were removed from the leachate by soil sorption. Contaminants from CR-AC leachates are thus degraded or retarded in their transport through nearby soils and ground waters.

  10. Phosphorylated Protein Kinase C (Zeta/Lambda) Expression in Colorectal Adenocarcinoma and Its Correlation with Clinicopathologic Characteristics and Prognosis.

    PubMed

    Yeo, Min-Kyung; Kim, Ji Yeon; Seong, In-Ock; Kim, Jin-Man; Kim, Kyung-Hee

    2017-01-01

    Background: Protein kinase C zeta/lambda (PKCζ/λ) is a family of protein kinase enzymes that contributes to cell proliferation and regulation, which are important for cancer development. PKCζ/λ has been shown to be an important regulator of tumorigenesis in intestinal cancer. The phosphorylated form of PKCζ/λ, p-PKCζ/λ, is considered the active form of PKCζ/λ. However, p-PKCζ/λ expression and its clinicopathologic implications in colorectal adenocarcinoma (CRAC) are unclear. Methods: Seven whole-tissue sections of malignant polyps containing both non-neoplastic and neoplastic mucosa, 11 adenomas with low-grade dysplasia, and 173 CRACs were examined by immunohistochemistry and western blot assay for p-PKCζ/λ protein expression. The association of p-PKCζ/λ expression with clinicopathologic factors including patient survival was studied. Results: In non-neoplastic epithelia, p-PKCζ/λ showed weak cytoplasmic immunostaining. Adenomas and CRACs demonstrated up-regulated p-PKCζ/λ detection. Cytoplasmic p-PKCζ/λ expression was higher in CRAC than in adenoma. In CRACs, p-PKCζ/λ expression was inversely correlated with pathologic TNM stage (I-II versus III-IV) and poor differentiation. Statistically significant correlations between low p-PKCζ/λ expression and shortened overall survival and disease-free survival were observed (p=0.004 and p=0.034, respectively). Conclusions: p-PKCζ/λ overexpression is implicated in tumorigenesis, but its down-regulation was a poor prognostic factor in CRAC.

  11. Synthesis of activated carbon-based amino phosphonic acid chelating resin and its adsorption properties for Ce(III) removal.

    PubMed

    Chen, Tao; Yan, Chunjie; Wang, Yixia; Tang, Conghai; Zhou, Sen; Zhao, Yuan; Ma, Rui; Duan, Ping

    2015-01-01

    This work investigates the adsorption of Ce(III) onto a chelating resin based on activated carbon (CRAC). The CRAC adsorbent was prepared from activated carbon (AC) by oxidation, silane coupling, ammoniation and phosphorylation, and characterized by Fourier transform infrared spectrometry, nitrogen adsorption measurements and scanning electron microscopy. The effects of solution pH, adsorbent dosage and contact time were studied by a batch technique. Langmuir and Freundlich isotherms were used to describe the adsorption behaviour of Ce(III) on CRAC, and the results showed that the adsorption was well fitted by the Langmuir model. The maximum uptake capacity (qmax) calculated from the Langmuir equation for cerium ions was 94.34 mg/g. Among the kinetic models compared, the experimental data were best fitted by the type 1 pseudo-second-order model. The calculated thermodynamic parameters (ΔG°, ΔH° and ΔS°) showed that the adsorption of Ce(III) was feasible, spontaneous and exothermic at 25-45 °C. The CRAC showed excellent adsorptive selectivity towards Ce(III). Moreover, more than 82% of the Ce(III) adsorbed onto CRAC could be desorbed with HCl, and the resin could be reused several times.
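
    For reference, the standard forms of the two models named in this abstract are given below in conventional notation; the symbols are generic textbook ones and are not reproduced from the paper itself.

    ```latex
    % Standard model forms (conventional notation, not taken from the paper):
    \begin{align*}
    \text{Langmuir isotherm:}\quad & q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
        \quad\Longleftrightarrow\quad \frac{C_e}{q_e} = \frac{1}{q_{\max} K_L} + \frac{C_e}{q_{\max}}\\
    \text{Pseudo-second-order (type 1):}\quad & \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
    \end{align*}
    % q_e, q_t: amounts adsorbed at equilibrium and at time t (mg/g); C_e: equilibrium
    % concentration (mg/L); K_L: Langmuir constant; k_2: rate constant. Fitting the
    % linearized Langmuir form is how a q_max such as 94.34 mg/g is obtained.
    ```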

  12. STIM and Orai proteins and the non-capacitative ARC channels

    PubMed Central

    Shuttleworth, Trevor J.

    2012-01-01

    The ARC channel is a small-conductance, highly Ca2+-selective ion channel whose activation is specifically dependent on low concentrations of arachidonic acid acting at an intracellular site. ARC channels are widely distributed in diverse cell types, where they provide an alternative, store-independent pathway for agonist-activated Ca2+ entry. Although biophysically similar to the store-operated CRAC channels, these two conductances function under distinct conditions of agonist stimulation, with the ARC channels providing the predominant route of Ca2+ entry during the oscillatory signals generated at low agonist concentrations. Despite these differences in function, like the CRAC channel, activation of the ARC channels is dependent on STIM1, but it is the pool of STIM1 that constitutively resides in the plasma membrane that is responsible. Similarly, both channels are formed by Orai proteins but, whilst the CRAC channel pore is a tetrameric assembly of Orai1 subunits, the ARC channel pore is formed by a heteropentameric assembly of three Orai1 subunits and two Orai3 subunits. There is increasing evidence that the activity of these channels plays a critical role in a variety of different cellular activities. PMID:22201777

  13. Balloon test project: Cosmic Ray Antimatter Calorimeter (CRAC)

    NASA Technical Reports Server (NTRS)

    Christy, J. C.; Dhenain, G.; Goret, P.; Jorand, J.; Masse, P.; Mestreau, P.; Petrou, N.; Robin, A.

    1984-01-01

    Cosmic ray observations from balloon flights are discussed. The cosmic ray antimatter calorimeter (CRAC) experiment attempts to measure the flux of antimatter in the 200-600 MeV/n energy range and the isotopes of light elements between 600 and 1,000 MeV/n.

  14. Characterization of selective Calcium-Release Activated Calcium channel blockers in mast cells and T-cells from human, rat, mouse and guinea-pig preparations.

    PubMed

    Rice, Louise V; Bax, Heather J; Russell, Linda J; Barrett, Victoria J; Walton, Sarah E; Deakin, Angela M; Thomson, Sally A; Lucas, Fiona; Solari, Roberto; House, David; Begg, Malcolm

    2013-03-15

    Loss-of-function mutations in the two key proteins which constitute Calcium-Release Activated Calcium (CRAC) channels demonstrate the critical role of this ion channel in immune cell function. The aim of this study was to demonstrate that inhibition of immune cell activation could be achieved with highly selective inhibitors of CRAC channels in vitro using cell preparations from human, rat, mouse and guinea-pig. Two selective small-molecule blockers of CRAC channels, GSK-5498A and GSK-7975A, were tested to demonstrate their ability to inhibit mediator release from mast cells, and pro-inflammatory cytokine release from T-cells, in a variety of species. Both GSK-5498A and GSK-7975A completely inhibited calcium influx through CRAC channels. This led to inhibition of the release of mast cell mediators and T-cell cytokines from multiple human and rat preparations. Mast cells from guinea-pig and mouse preparations were not inhibited by GSK-5498A or GSK-7975A; however, cytokine release was fully blocked from T-cells in a mouse preparation. GSK-5498A and GSK-7975A confirm the critical role of CRAC channels in human mast cell and T-cell function, and that inhibition can be achieved in vitro. The rat displays a similar pharmacology to human, promoting this species for future in vivo research with this series of molecules. Together these observations provide a critical forward step in the identification of CRAC blockers suitable for clinical development in the treatment of inflammatory disorders. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Modified use of a dynamic bite opener--treatment and prevention of trismus in a child with head and neck cancer: a case report.

    PubMed

    Dijkstra, P U; Kropmans, T J; Tamminga, R Y

    1992-10-01

    Trismus may be a complication arising during or after treatment of patients with head and neck cancer. Treatment of trismus is difficult, making prevention very important. To prevent and treat trismus in a patient with a nasopharyngeal tumor, the Contract-Relax-Antagonist-Contract (CRAC) technique was applied, with the aid of a custom-made dynamic bite opener (DBO). The CRAC technique in combination with the DBO, as a therapy/prevention program for trismus, is not referred to in the literature. The combination of CRAC and DBO appeared to be a gentle and effective method well tolerated by the patient.

  16. Career Guidance for the Third Age. Report on a NICEC/CRAC Invitational Policy Consultation (Birmingham, England, October 30-31, 1996). CRAC/NICEC Conference Briefing.

    ERIC Educational Resources Information Center

    National Inst. for Careers Education and Counselling, Cambridge (England).

    This document synthesizes the findings of an invitational policy consultation at which 27 invited participants from England, Wales, and Scotland examined the special career guidance needs of third-age adults (adults age 45 or older) and strategies for meeting those needs. First, the special career- and employment-related problems faced by…

  17. Glu¹⁰⁶ in the Orai1 pore contributes to fast Ca²⁺-dependent inactivation and pH dependence of Ca²⁺ release-activated Ca²⁺ (CRAC) current.

    PubMed

    Scrimgeour, Nathan R; Wilson, David P; Rychkov, Grigori Y

    2012-01-15

    FCDI (fast Ca²⁺-dependent inactivation) is a mechanism that limits Ca²⁺ entry through Ca²⁺ channels, including CRAC (Ca²⁺ release-activated Ca²⁺) channels. This phenomenon occurs when the Ca²⁺ concentration rises beyond a certain level in the vicinity of the intracellular mouth of the channel pore. In CRAC channels, several regions of the pore-forming protein Orai1, and STIM1 (stromal interaction molecule 1), the sarcoplasmic/endoplasmic reticulum Ca²⁺ sensor that communicates the Ca²⁺ load of the intracellular stores to Orai1, have been shown to regulate fast Ca²⁺-dependent inactivation. Although significant advances in unravelling the mechanisms of CRAC channel gating have occurred, the mechanisms regulating fast Ca²⁺-dependent inactivation in this channel are not well understood. We have identified that a pore mutation, E106D Orai1, changes the kinetics and voltage dependence of the ICRAC (CRAC current), and the selectivity of the Ca²⁺-binding site that regulates fast Ca²⁺-dependent inactivation, whereas the V102I and E190Q mutants when expressed at appropriate ratios with STIM1 have fast Ca²⁺-dependent inactivation similar to that of WT (wild-type) Orai1. Unexpectedly, the E106D mutation also changes the pH dependence of ICRAC. Unlike WT ICRAC, E106D-mediated current is not inhibited at low pH, but instead the block of Na⁺ permeation through the E106D Orai1 pore by Ca²⁺ is diminished. These results suggest that Glu¹⁰⁶ inside the CRAC channel pore is involved in co-ordinating the Ca²⁺-binding site that mediates fast Ca²⁺-dependent inactivation.

  18. The CRAC cohort model: A computerized low cost registry of interventional cardiology with daily update and long-term follow-up.

    PubMed

    Rangé, G; Chassaing, S; Marcollet, P; Saint-Étienne, C; Dequenne, P; Goralski, M; Bardiére, P; Beverilli, F; Godillon, L; Sabine, B; Laure, C; Gautier, S; Hakim, R; Albert, F; Angoulvant, D; Grammatico-Guillon, L

    2018-05-01

    To assess the reliability and low cost of a computerized interventional cardiology (IC) registry designed to prospectively and systematically collect high-quality data for all consecutive coronary patients referred for coronary angiogram and/or coronary angioplasty. Rigorous clinical practice assessment is a key factor in improving prognosis in IC. A prospective and permanent registry could achieve this goal but, presumably, at high cost and with a low level of data quality. One multicentric IC registry (the CRAC registry), fully integrated into the usual coronary activity report software, started in the Centre-Val de Loire (CVL) French region in 2014. Quality assessment of the CRAC registry was conducted in five IC cath labs of the CVL region, from January 1st to December 31st, 2014. The quality of the collected data was evaluated by measuring procedure completeness (by comparison with data from the hospital information system), data completeness (quality controls) and data consistency (by checking complete medical charts as the gold standard). The cost per procedure (global registry operating cost divided by the number of collected procedures) was also estimated. The CRAC model provided a high quality level, with 98.2% procedure completeness, 99.6% data completeness and 89% data consistency. The operating cost per procedure was €14.70 ($16.51) for data collection and quality control, including ST-segment elevation myocardial infarction (STEMI) preadmission information and one-year follow-up after angioplasty. This integrated computerized IC registry led to the construction of an exhaustive, reliable and low-cost database including all coronary patients entering participating IC centers in the CVL region. This solution will be deployed in other French regions, setting up a national IC database for coronary patients in 2020: France PCI. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
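
    The quality and cost figures above reduce to simple ratios (e.g. cost per procedure = global registry operating cost divided by the number of collected procedures). A minimal sketch of those calculations follows; all input counts and the cost figure are hypothetical placeholders, not CRAC registry data.

```python
# Hedged sketch: illustrative quality/cost metrics for a procedure registry.
# All input numbers below are hypothetical placeholders, not CRAC registry data.

def ratio(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against division by zero."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Hypothetical counts for one year of activity.
procedures_in_registry = 4910         # procedures captured by the registry
procedures_in_hospital_system = 5000  # reference count from the hospital information system
fields_filled = 99600                 # completed data fields across all records
fields_expected = 100000              # fields expected if every record were complete
records_consistent = 890              # audited records matching the medical chart
records_audited = 1000                # records checked against the chart (gold standard)
annual_operating_cost_eur = 72_177.0  # hypothetical global registry operating cost

print(f"procedure completeness: {ratio(procedures_in_registry, procedures_in_hospital_system):.1f}%")
print(f"data completeness:      {ratio(fields_filled, fields_expected):.1f}%")
print(f"data consistency:       {ratio(records_consistent, records_audited):.1f}%")
print(f"cost per procedure:     EUR {annual_operating_cost_eur / procedures_in_registry:.2f}")
```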

  19. Structure-activity relationship (SAR) analysis of a family of steroids acutely controlling steroidogenesis.

    PubMed

    Midzak, Andrew; Rammouz, Georges; Papadopoulos, Vassilios

    2012-11-01

    Steroids metabolically derive from the lipid cholesterol, and vertebrate steroids additionally derive from the steroid pregnenolone. Pregnenolone is derived from cholesterol by cleavage of the aliphatic tail by the mitochondrial cytochrome P450 enzyme CYP11A1, located in the inner mitochondrial membrane. Delivery of cholesterol to CYP11A1 comprises the principal control step of steroidogenesis, and requires a series of proteins spanning the mitochondrial double membranes. A critical member of this cholesterol translocation machinery is the integral outer mitochondrial membrane translocator protein (18 kDa, TSPO), a high-affinity drug- and cholesterol-binding protein. The cholesterol-binding site of TSPO consists of a phylogenetically conserved cholesterol recognition/interaction amino acid consensus (CRAC) motif. Previous studies from our group identified 5-androsten-3β,17,19-triol (19-Atriol) as a drug ligand for the TSPO CRAC motif that inhibits cholesterol binding to the CRAC domain and steroidogenesis. To further understand 19-Atriol's mechanism of action as well as the molecular recognition by the TSPO CRAC motif, we undertook a structure-activity relationship (SAR) analysis of the 19-Atriol molecule with a variety of substituted steroids oxygenated at positions around the steroid backbone. We found that in addition to steroids hydroxylated at carbon C19, hydroxylations at C4, C7, and C11 contributed to inhibition of cAMP-mediated steroidogenesis in a minimal steroidogenic cell model. However, only substituted steroids with C19 hydroxylations exhibited specificity to TSPO, its CRAC motif, and mitochondrial cholesterol transport, as the C4, C7, and C11 hydroxylated steroids inhibited the metabolic transformation of cholesterol by CYP11A1. We thus provide new insights into structure-activity relationships of steroids inhibiting mitochondrial cholesterol transport and steroidogenic cholesterol metabolic enzymes. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Store-operated Ca2+ Entry Modulates the Expression of Enamel Genes.

    PubMed

    Nurbaeva, M K; Eckstein, M; Snead, M L; Feske, S; Lacruz, R S

    2015-10-01

    Dental enamel formation is an intricate process tightly regulated by ameloblast cells. The correct spatiotemporal patterning of enamel matrix protein (EMP) expression is fundamental to orchestrating the formation of enamel crystals, which depend on a robust supply of Ca2+. In the extracellular milieu, Ca2+-EMP interactions occur at different levels. Despite its recognized role in enamel development, the molecular machinery involved in Ca2+ homeostasis in ameloblasts remains poorly understood. A common mechanism for Ca2+ influx is store-operated Ca2+ entry (SOCE). We evaluated the possibility that Ca2+ influx in enamel cells might be mediated by SOCE and the Ca2+ release-activated Ca2+ (CRAC) channel, the prototypical SOCE channel. Using ameloblast-like LS8 cells, we demonstrate that these cells express Ca2+-handling molecules and mediate Ca2+ influx through SOCE. As a rise in the cytosolic Ca2+ concentration is a versatile signal that can modulate gene expression, we assessed whether SOCE in enamel cells had any effect on the expression of EMPs. Our results demonstrate that stimulating LS8 cells or murine primary enamel organ cells with thapsigargin to activate SOCE leads to increased expression of Amelx, Ambn, Enam, and Mmp20. This effect is reversed when cells are treated with a CRAC channel inhibitor. These data indicate that Ca2+ influx in LS8 cells and enamel organ cells is mediated by CRAC channels and that Ca2+ signals enhance the expression of EMPs. Ca2+ plays an important role not only in mineralizing dental enamel but also in regulating the expression of EMPs. © International & American Associations for Dental Research 2015.

  1. Ca(2+) signals mediated by bradykinin type 2 receptors in normal pancreatic stellate cells can be inhibited by specific Ca(2+) channel blockade.

    PubMed

    Gryshchenko, Oleksiy; Gerasimenko, Julia V; Gerasimenko, Oleg V; Petersen, Ole H

    2016-01-15

    Bradykinin may play a role in the autodigestive disease acute pancreatitis, but little is known about its pancreatic actions. In this study, we have investigated bradykinin-elicited Ca(2+) signal generation in normal mouse pancreatic lobules. We found complete separation of Ca(2+) signalling between pancreatic acinar (PACs) and stellate cells (PSCs). Pathophysiologically relevant bradykinin concentrations consistently evoked Ca(2+) signals, via B2 receptors, in PSCs but never in neighbouring PACs, whereas cholecystokinin, consistently evoking Ca(2+) signals in PACs, never elicited Ca(2+) signals in PSCs. The bradykinin-elicited Ca(2+) signals were due to initial Ca(2+) release from inositol trisphosphate-sensitive stores followed by Ca(2+) entry through Ca(2+) release-activated channels (CRACs). The Ca(2+) entry phase was effectively inhibited by a CRAC blocker. B2 receptor blockade reduced the extent of PAC necrosis evoked by pancreatitis-promoting agents and we therefore conclude that bradykinin plays a role in acute pancreatitis via specific actions on PSCs. Normal pancreatic stellate cells (PSCs) are regarded as quiescent, only to become activated in chronic pancreatitis and pancreatic cancer. However, we now report that these cells in their normal microenvironment are far from quiescent, but are capable of generating substantial Ca(2+) signals. We have compared Ca(2+) signalling in PSCs and their better studied neighbouring acinar cells (PACs) and found complete separation of Ca(2+) signalling in even closely neighbouring PACs and PSCs. Bradykinin (BK), at concentrations corresponding to the slightly elevated plasma BK levels that have been shown to occur in the auto-digestive disease acute pancreatitis in vivo, consistently elicited substantial Ca(2+) signals in PSCs, but never in neighbouring PACs, whereas the physiological PAC stimulant cholecystokinin failed to evoke Ca(2+) signals in PSCs. The BK-induced Ca(2+) signals were mediated by B2 receptors and B2 receptor blockade protected against PAC necrosis evoked by agents causing acute pancreatitis. The initial Ca(2+) rise in PSCs was due to inositol trisphosphate receptor-mediated release from internal stores, whereas the sustained phase depended on external Ca(2+) entry through Ca(2+) release-activated Ca(2+) (CRAC) channels. CRAC channel inhibitors, which have been shown to protect PACs against damage caused by agents inducing pancreatitis, therefore also inhibit Ca(2+) signal generation in PSCs and this may be helpful in treating acute pancreatitis. © 2015 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  2. Data Center Energy Efficiency Measurement Assessment Kit Guide and Specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-10-26

    A portable and temporary wireless mesh assessment kit can be used to speed up and reduce the costs of a data center energy use assessment and to overcome the issues associated with shutdowns. The assessment kit comprises temperature, relative humidity, and pressure sensors. Also included are power meters that can be installed on computer room air conditioners (CRACs) without intrusive interruption of data center operations. The assessment kit produces the data required for a detailed energy assessment of the data center.

  3. Interaction of mammalian seminal plasma protein PDC-109 with cholesterol: implications for a putative CRAC domain.

    PubMed

    Scolari, Silvia; Müller, Karin; Bittman, Robert; Herrmann, Andreas; Müller, Peter

    2010-10-26

    Seminal plasma proteins of the fibronectin type II (Fn2) family modulate mammalian spermatogenesis by triggering the release of the lipids phosphatidylcholine and cholesterol from sperm cells. Whereas the specific interaction of these proteins with phosphatidylcholine is well-understood, their selectivity for cholesterol is unknown. To characterize the interaction between the bovine Fn2 protein PDC-109 and cholesterol, we have investigated the effect of PDC-109 on the dynamics of fluorescent cholesterol analogues in lipid vesicles by time-resolved fluorescence anisotropy. The data show that PDC-109 decreases the rotational mobility of cholesterol within the membrane and that the extent of this impact depends on the cholesterol structure, indicating a specific influence of PDC-109 on cholesterol. We propose that the cholesterol recognition/interaction amino acid consensus (CRAC) regions of PDC-109 are involved in the interaction with cholesterol.

  4. Near-infrared photoactivatable control of Ca2+ signaling and optogenetic immunomodulation

    PubMed Central

    He, Lian; Zhang, Yuanwei; Ma, Guolin; Tan, Peng; Li, Zhanjun; Zang, Shengbing; Wu, Xiang; Jing, Ji; Fang, Shaohai; Zhou, Lijuan; Wang, Youjun; Huang, Yun; Hogan, Patrick G; Han, Gang; Zhou, Yubin

    2015-01-01

    The application of current channelrhodopsin-based optogenetic tools is limited by the lack of strict ion selectivity and the inability to extend the spectral sensitivity into the near-infrared (NIR) tissue-transmissible range. Here we present an NIR-stimulable optogenetic platform (termed 'Opto-CRAC') that selectively and remotely controls Ca2+ oscillations and Ca2+-responsive gene expression to regulate the function of non-excitable cells, including T lymphocytes, macrophages and dendritic cells. When coupled to upconversion nanoparticles, the optogenetic operation window is shifted from the visible range to NIR wavelengths to enable wireless photoactivation of Ca2+-dependent signaling and optogenetic modulation of immunoinflammatory responses. In a mouse model of melanoma using ovalbumin as a surrogate tumor antigen, Opto-CRAC has been shown to act as a genetically encoded 'photoactivatable adjuvant' that improves antigen-specific immune responses to specifically destroy tumor cells. Our study represents a solid step forward towards the goal of achieving remote and wireless control of Ca2+-modulated activities with tailored function. DOI: http://dx.doi.org/10.7554/eLife.10024.001 PMID:26646180

  5. Complex role of STIM1 in the activation of store-independent Orai1/3 channels

    PubMed Central

    Zhang, Wei; González-Cobos, José C.; Jardin, Isaac; Romanin, Christoph; Matrougui, Khalid

    2014-01-01

    Orai proteins contribute to Ca2+ entry into cells through both store-dependent, Ca2+ release–activated Ca2+ (CRAC) channels (Orai1) and store-independent, arachidonic acid (AA)-regulated Ca2+ (ARC) and leukotriene C4 (LTC4)-regulated Ca2+ (LRC) channels (Orai1/3 heteromultimers). Although activated by fundamentally different mechanisms, CRAC channels, like ARC and LRC channels, require stromal interacting molecule 1 (STIM1). The role of endoplasmic reticulum–resident STIM1 (ER-STIM1) in CRAC channel activation is widely accepted. Although ER-STIM1 is necessary and sufficient for LRC channel activation in vascular smooth muscle cells (VSMCs), the minor pool of STIM1 located at the plasma membrane (PM-STIM1) is necessary for ARC channel activation in HEK293 cells. To determine whether ARC and LRC conductances are mediated by the same or different populations of STIM1, Orai1, and Orai3 proteins, we used whole-cell and perforated patch-clamp recording to compare AA- and LTC4-activated currents in VSMCs and HEK293 cells. We found that both cell types show indistinguishable nonadditive LTC4- and AA-activated currents that require both Orai1 and Orai3, suggesting that both conductances are mediated by the same channel. Experiments using a nonmetabolizable form of AA or an inhibitor of 5-lipooxygenase suggested that ARC and LRC currents in both cell types could be activated by either LTC4 or AA, with LTC4 being more potent. Although PM-STIM1 was required for current activation by LTC4 and AA under whole-cell patch-clamp recordings in both cell types, ER-STIM1 was sufficient with perforated patch recordings. These results demonstrate that ARC and LRC currents are mediated by the same cellular populations of STIM1, Orai1, and Orai3, and suggest a complex role for both ER-STIM1 and PM-STIM1 in regulating these store-independent Orai1/3 channels. PMID:24567509

  6. The WAVE2 complex regulates actin cytoskeletal reorganization and CRAC-mediated calcium entry during T cell activation.

    PubMed

    Nolz, Jeffrey C; Gomez, Timothy S; Zhu, Peimin; Li, Shuixing; Medeiros, Ricardo B; Shimizu, Yoji; Burkhardt, Janis K; Freedman, Bruce D; Billadeau, Daniel D

    2006-01-10

    The engagement of the T cell receptor results in actin cytoskeletal reorganization at the immune synapse (IS) and the triggering of biochemical signaling cascades leading to gene regulation and, ultimately, cellular activation. Recent studies have identified the WAVE family of proteins as critical mediators of Rac1-induced actin reorganization in other cell types. However, whether these proteins participate in actin reorganization at the IS or signaling pathways in T cells has not been investigated. By using a combination of biochemical, genetic, and cell biology approaches, we provide evidence that WAVE2 is recruited to the IS, is biochemically modified, and is required for actin reorganization and beta-integrin-mediated adhesion after TCR crosslinking. Moreover, we show that WAVE2 regulates calcium entry at a point distal to PLCgamma1 activation and IP(3)-mediated store release. These data reveal a role for WAVE2 in regulating multiple pathways leading to T cell activation. In particular, this work shows that WAVE2 is a key component of the actin regulatory machinery in T cells and that it also participates in linking intracellular calcium store depletion to calcium release-activated calcium (CRAC) channel activation.

  7. Molecular mechanisms of protein-cholesterol interactions in plasma membranes: Functional distinction between topological (tilted) and consensus (CARC/CRAC) domains.

    PubMed

    Fantini, Jacques; Di Scala, Coralie; Baier, Carlos J; Barrantes, Francisco J

    2016-09-01

    The molecular mechanisms that control the multiple possible modes of protein association with membrane cholesterol are remarkably convergent. These mechanisms, which include hydrogen bonding, CH-π stacking and dispersion forces, are used by a wide variety of extracellular proteins (e.g. microbial or amyloid) and membrane receptors. Virus fusion peptides penetrate the membrane of host cells with a tilted orientation that is compatible with a transient interaction with cholesterol; this tilted orientation is also characteristic of the process of insertion of amyloid proteins that subsequently form oligomeric pores in the plasma membrane of brain cells. Membrane receptors that are associated with cholesterol generally display linear consensus binding motifs (CARC and CRAC) characterized by a triad of basic (Lys/Arg), aromatic (Tyr/Phe) and aliphatic (Leu/Val) amino acid residues. In some cases, the presence of both CARC and CRAC within the same membrane-spanning domain allows the simultaneous binding of two cholesterol molecules, one in each membrane leaflet. In this review the molecular basis and the functional significance of the different modes of protein-cholesterol interactions in plasma membranes are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. A major defect in mast cell effector functions in CRACM1-/- mice

    PubMed Central

    Vig, Monika; Dehaven, Wayne I; Bird, Gary S; Billingsley, James M; Wang, Huiyun; Rao, Patricia E; Hutchings, Amy B; Jouvin, Marie-Hélène; Putney, James W; Kinet, Jean-Pierre

    2008-01-01

    CRACM1 (Orai1) constitutes the pore subunit of CRAC channels that are crucial for many physiological processes. A point mutation in CRACM1 has been associated with SCID disease in humans. We have generated CRACM1-deficient mice using gene trap, where β-galactosidase (LacZ) activity identifies CRACM1 expression in tissues. We show here that the homozygous CRACM1-deficient mice are considerably smaller in size and are grossly defective in mast cell degranulation and cytokine secretion. FcεRI-mediated in vivo allergic reactions were also inhibited in CRACM1-/- mice. Other tissues expressing the truncated CRACM1-LacZ fusion protein include skeletal muscles, kidney and regions of the brain and heart. Surprisingly, no CRACM1 expression was seen in the lymphoid regions of the thymus. Accordingly, we found no defect in T cell development. Thus, our data reveal novel crucial roles for CRAC channels, including a putative role in excitable cells. PMID:18059270

  9. Customized rating assessment of climate suitability (CRACS): climate satisfaction evaluation based on subjective perception.

    PubMed

    Lin, Tzu-Ping; Yang, Shing-Ru; Matzarakis, Andreas

    2015-12-01

    Climate not only influences the behavior of people in urban environments but also affects people's schedules and travel plans. Providing people with appropriate long-term climate evaluation information is therefore crucial. To this end, we developed an innovative climate assessment system based on field investigations conducted in three cities located in Northern, Central, and Southern Taiwan. The field investigations included questionnaire surveys and climate data collection. We first analyzed the relationship between the participants and climate parameters comprising physiologically equivalent temperature, air temperature, humidity, wind speed, solar radiation, cloud cover, and precipitation. Second, we established the neutral value, comfort range, and dissatisfied range of each parameter. Third, after verifying that the subjects' perception of the climate parameters varies with individual preferences, we developed the customized rating assessment of climate suitability (CRACS) approach, which offers both personalized and default climate suitability information for users with varying demands. Finally, we performed calculations using the climate conditions of two cities during the past 10 years to demonstrate the performance of the CRACS approach. The results can be used as a reference when planning activities in the city or when organizing future travel plans. The flexibility of the assessment system enables it to be adjusted for varying regions and usage characteristics.

  10. Customized rating assessment of climate suitability (CRACS): climate satisfaction evaluation based on subjective perception

    NASA Astrophysics Data System (ADS)

    Lin, Tzu-Ping; Yang, Shing-Ru; Matzarakis, Andreas

    2015-12-01

    Climate not only influences the behavior of people in urban environments but also affects people's schedules and travel plans. Providing people with appropriate long-term climate evaluation information is therefore crucial. To this end, we developed an innovative climate assessment system based on field investigations conducted in three cities located in Northern, Central, and Southern Taiwan. The field investigations included questionnaire surveys and climate data collection. We first analyzed the relationship between the participants and climate parameters comprising physiologically equivalent temperature, air temperature, humidity, wind speed, solar radiation, cloud cover, and precipitation. Second, we established the neutral value, comfort range, and dissatisfied range of each parameter. Third, after verifying that the subjects' perception of the climate parameters varies with individual preferences, we developed the customized rating assessment of climate suitability (CRACS) approach, which offers both personalized and default climate suitability information for users with varying demands. Finally, we performed calculations using the climate conditions of two cities during the past 10 years to demonstrate the performance of the CRACS approach. The results can be used as a reference when planning activities in the city or when organizing future travel plans. The flexibility of the assessment system enables it to be adjusted for varying regions and usage characteristics.
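
    As a rough illustration of the rating logic described above (a neutral value, a comfort range and a dissatisfied range per parameter, with personalized or default bounds), here is a minimal sketch; the parameter chosen, the thresholds and the three-level rating rule are assumptions for illustration, not the published CRACS model.

```python
# Hedged sketch of a comfort-range rating: every threshold below is an assumed
# placeholder, not a value from the CRACS study.
from dataclasses import dataclass

@dataclass
class ParameterRange:
    comfort_low: float        # lower bound of the comfort range
    comfort_high: float       # upper bound of the comfort range
    dissatisfied_low: float   # below this the parameter is rated "dissatisfied"
    dissatisfied_high: float  # above this the parameter is rated "dissatisfied"

def rate(value: float, r: ParameterRange) -> str:
    """Classify one climate parameter value against a user's (or default) ranges."""
    if r.comfort_low <= value <= r.comfort_high:
        return "comfortable"
    if value < r.dissatisfied_low or value > r.dissatisfied_high:
        return "dissatisfied"
    return "acceptable"

# Assumed default ranges for air temperature (deg C); a personalized profile
# would simply substitute different bounds.
default_temperature = ParameterRange(22.0, 28.0, 18.0, 32.0)
for t in (20.0, 25.0, 34.0):
    print(t, rate(t, default_temperature))
```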

  11. The WAVE2 Complex Regulates Actin Cytoskeletal Reorganization and CRAC-Mediated Calcium Entry during T Cell Activation

    PubMed Central

    Nolz, Jeffrey C.; Gomez, Timothy S.; Zhu, Peimin; Li, Shuixing; Medeiros, Ricardo B.; Shimizu, Yoji; Burkhardt, Janis K.; Freedman, Bruce D.; Billadeau, Daniel D.

    2007-01-01

    Background: The engagement of the T cell receptor results in actin cytoskeletal reorganization at the immune synapse (IS) and the triggering of biochemical signaling cascades leading to gene regulation and, ultimately, cellular activation. Recent studies have identified the WAVE family of proteins as critical mediators of Rac1-induced actin reorganization in other cell types. However, whether these proteins participate in actin reorganization at the IS or signaling pathways in T cells has not been investigated. Results: By using a combination of biochemical, genetic, and cell biology approaches, we provide evidence that WAVE2 is recruited to the IS, is biochemically modified, and is required for actin reorganization and β-integrin-mediated adhesion after TCR crosslinking. Moreover, we show that WAVE2 regulates calcium entry at a point distal to PLCγ1 activation and IP3-mediated store release. Conclusions: These data reveal a role for WAVE2 in regulating multiple pathways leading to T cell activation. In particular, this work shows that WAVE2 is a key component of the actin regulatory machinery in T cells and that it also participates in linking intracellular calcium store depletion to calcium release-activated calcium (CRAC) channel activation. PMID:16401421

  12. Store-operated Ca2+ entry controls ameloblast cell function and enamel development

    PubMed Central

    Eckstein, Miriam; Vaeth, Martin; Fornai, Cinzia; Vinu, Manikandan; Bromage, Timothy G.; Nurbaeva, Meerim K.; Sorge, Jessica L.; Coelho, Paulo G.; Idaghdour, Youssef; Feske, Stefan; Lacruz, Rodrigo S.

    2017-01-01

    Loss-of-function mutations in stromal interaction molecule 1 (STIM1) impair the activation of Ca2+ release–activated Ca2+ (CRAC) channels and store-operated Ca2+ entry (SOCE), resulting in a disease syndrome called CRAC channelopathy that is characterized by severe dental enamel defects. The cause of these enamel defects has remained unclear given a lack of animal models. We generated Stim1/2K14cre mice to delete STIM1 and its homolog STIM2 in enamel cells. These mice showed impaired SOCE in enamel cells. Enamel in Stim1/2K14cre mice was hypomineralized with decreased Ca content, mechanically weak, and thinner. The morphology of SOCE-deficient ameloblasts was altered, showing loss of the typical ruffled border, resulting in mislocalized mitochondria. Global gene expression analysis of SOCE-deficient ameloblasts revealed strong dysregulation of several pathways. ER stress genes associated with the unfolded protein response were increased in Stim1/2-deficient cells, whereas the expression of components of the glutathione system were decreased. Consistent with increased oxidative stress, we found increased ROS production, decreased mitochondrial function, and abnormal mitochondrial morphology in ameloblasts of Stim1/2K14cre mice. Collectively, these data show that loss of SOCE in enamel cells has substantial detrimental effects on gene expression, cell function, and the mineralization of dental enamel. PMID:28352661

  13. Profiling calcium signals of in vitro polarized human effector CD4+ T cells.

    PubMed

    Kircher, Sarah; Merino-Wong, Maylin; Niemeyer, Barbara A; Alansary, Dalia

    2018-06-01

    Differentiation of naïve CD4+ T cells into effector subtypes with distinct cytokine profiles and physiological roles is a tightly regulated process, the imbalance of which can lead to an inadequate immune response or autoimmune disease. The crucial role of Ca2+ signals, mainly mediated by store-operated Ca2+ entry (SOCE), in shaping the immune response is well described. However, it is unclear if human effector CD4+ T cell subsets show differential Ca2+ signatures in response to different stimulation methods. Herein, we provide optimized in vitro culture conditions for polarization of human CD4+ effector T cells and characterize their SOCE following both pharmacological store depletion and direct T-cell receptor (TCR) activation. Moreover, we measured whole-cell Ca2+ release-activated Ca2+ currents (ICRAC) and investigated whether the observed differences correlate to the expression of CRAC genes. Our results show that Ca2+ profiles of helper CD4+ Th1, Th2 and Th17 cells are distinct and in part shaped by the intensity of stimulation. Regulatory T cells (Treg) are unique, being the subtype with the most prominent SOCE response. Analysis of in vivo differentiated Treg unraveled the role of differential expression of ORAI2 in fine-tuning signals in Treg vs. conventional CD4+ T cells. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.

  14. Molecular Dynamics Simulations of Orai Reveal How the Third Transmembrane Segment Contributes to Hydration and Ca2+ Selectivity in Calcium Release-Activated Calcium Channels.

    PubMed

    Alavizargar, Azadeh; Berti, Claudio; Ejtehadi, Mohammad Reza; Furini, Simone

    2018-04-26

    Calcium release-activated calcium (CRAC) channels open upon depletion of Ca2+ from the endoplasmic reticulum, and when open, they are permeable to a selective flux of calcium ions. The atomic structure of Orai, the pore domain of CRAC channels, from Drosophila melanogaster has revealed many details about conduction and selectivity in this family of ion channels. However, it is still unclear how residues on the third transmembrane helix can affect the conduction properties of the channel. Here, molecular dynamics and Brownian dynamics simulations were employed to analyze how a conserved glutamate residue on the third transmembrane helix (E262) contributes to selectivity. The comparison between the wild-type and mutated channels revealed a severe impact of the mutation on the hydration pattern of the pore domain and on the dynamics of residues K270, and Brownian dynamics simulations proved that the altered configuration of residues K270 in the mutated channel impairs selectivity to Ca2+ over Na+. The crevices of water molecules, revealed by molecular dynamics simulations, are perfectly located to contribute to the dynamics of the hydrophobic gate and the basic gate, suggesting a possible role in channel opening and in selectivity function.

  15. Uncontrolled concrete bridge parapet cracking.

    DOT National Transportation Integrated Search

    2013-06-01

    The Ohio Department of Transportation has recently identified the problem of wide-spread premature cracking of concrete bridge : parapets throughout its District 12 region (Northeast Ohio). Many of the bridge decks that contain these prematurely crac...

  16. Identification of key amino acid residues responsible for internal and external pH sensitivity of Orai1/STIM1 channels.

    PubMed

    Tsujikawa, Hiroto; Yu, Albert S; Xie, Jia; Yue, Zhichao; Yang, Wenzhong; He, Yanlin; Yue, Lixia

    2015-11-18

    Changes of intracellular and extracellular pH are involved in a variety of physiological and pathological processes, in which regulation of the Ca(2+) release-activated Ca(2+) channel (ICRAC) by pH has been implicated. Ca(2+) entry mediated by ICRAC has been shown to be regulated by acidic or alkaline pH. Whereas several amino acid residues have been shown to contribute to extracellular pH (pHo) sensitivity, the molecular mechanism for intracellular pH (pHi) sensitivity of Orai1/STIM1 is not fully understood. By investigating a series of mutations, we find that the previously identified residue E106 is responsible for pHo sensitivity when Ca(2+) is the charge carrier. Unexpectedly, we identify that the residue E190 is responsible for pHo sensitivity when Na(+) is the charge carrier. Furthermore, the intracellular mutant H155F markedly diminishes the response to acidic and alkaline pHi, suggesting that H155 is responsible for pHi sensitivity of Orai1/STIM1. Our results indicate that, whereas H155 is the intracellular pH sensor of Orai1/STIM1, the molecular mechanism of external pH sensitivity varies depending on the permeant cations. As changes of pH are involved in various physiological/pathological functions, Orai/STIM channels may be an important mediator for various physiological and pathological processes associated with acidosis and alkalinization.

  17. An aromatic amino acid in the coiled-coil 1 domain plays a crucial role in the auto-inhibitory mechanism of STIM1.

    PubMed

    Yu, Junwei; Zhang, Haining; Zhang, Mingshu; Deng, Yongqiang; Wang, Huiyu; Lu, Jingze; Xu, Tao; Xu, Pingyong

    2013-09-15

    STIM1 (stromal interaction molecule 1) is one of the key elements that mediate store-operated Ca²⁺ entry via CRAC (Ca²⁺-release-activated Ca²⁺) channels in immune and non-excitable cells. Under physiological conditions, the intramolecular auto-inhibitions in STIM1 C- and STIM1 N-termini play essential roles in keeping STIM1 in an inactive state. However, the auto-inhibitory mechanism of the STIM1 C-terminus is still unclear. In the present study, we first predicted a short inhibitory domain (residues 310-317) in human STIM1 that might determine the different localizations of human STIM1 from Caenorhabditis elegans STIM1 in resting cells. Next, we confirmed the prediction and further identified an aromatic amino acid residue, Tyr³¹⁶, that played a crucial role in maintaining STIM1 in a closed conformation in quiescent cells. Full-length STIM1-Y316A formed constitutive clusters near the plasma membrane and activated the CRAC channel in the resting state when co-expressed with Orai1. The introduction of a Y316A mutation caused the higher-order oligomerization of the in vitro purified STIM1 fragment containing both the auto-inhibitory domain and the CAD (CRAC-activating domain). We propose that the Tyr³¹⁶ residue may be involved in the auto-inhibitory mechanism of the STIM1 C-terminus in the quiescent state. This inhibition could be achieved either by interacting with the CAD using hydrogen and/or hydrophobic bonds, or by an intermolecular interaction using repulsive forces, which maintained a dimeric STIM1.

  18. Integration of pavement cracking prediction model with asset management and vehicle-infrastructure interaction models.

    DOT National Transportation Integrated Search

    2015-01-01

    Not long after the construction of a pavement or a new pavement surface, various : forms of deterioration begin to accumulate due to the harsh effects of traffic loading : combined with weathering action. In a recent NEXTRANS project, a pavement crac...

  19. Users manual and modeling improvements for axial turbine design and performance computer code TD2-2

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.

  20. Calcium-activated K(+) channel (K(Ca)3.1) activity during Ca(2+) store depletion and store-operated Ca(2+) entry in human macrophages.

    PubMed

    Gao, Ya-dong; Hanley, Peter J; Rinné, Susanne; Zuzarte, Marylou; Daut, Jurgen

    2010-07-01

    STIM1 'senses' decreases in endoplasmic reticular (ER) luminal Ca(2+) and induces store-operated Ca(2+) (SOC) entry through plasma membrane Orai channels. The Ca(2+)/calmodulin-activated K(+) channel K(Ca)3.1 (previously known as SK4) has been implicated as an 'amplifier' of the Ca(2+)-release activated Ca(2+) (CRAC) current, especially in T lymphocytes. We have previously shown that human macrophages express K(Ca)3.1, and here we used the whole-cell patch-clamp technique to investigate the activity of these channels during Ca(2+) store depletion and store-operated Ca(2+) influx. Using RT-PCR, we found that macrophages express the elementary CRAC channel components Orai1 and STIM1, as well as Orai2, Orai3 and STIM2, but not the putatively STIM1-activated channels TRPC1, TRPC3-7 or TRPV6. In whole-cell configuration, a robust Ca(2+)-induced outwardly rectifying K(+) current inhibited by clotrimazole and augmented by DC-EBIO could be detected, consistent with K(Ca)3.1 channel current (also known as intermediate-conductance IK1). Introduction of extracellular Ca(2+) following Ca(2+) store depletion via P2Y(2) receptors induced a robust charybdotoxin (CTX)- and 2-APB-sensitive outward K(+) current and hyperpolarization. We also found that SOC entry induced by thapsigargin treatment induced CTX-sensitive K(+) current in HEK293 cells transiently expressing K(Ca)3.1. Our data suggest that SOC and K(Ca)3.1 channels are tightly coupled, such that a small Ca(2+) influx current induces a much larger K(Ca)3.1 channel current and hyperpolarization, providing the necessary electrochemical driving force for prolonged Ca(2+) signaling and store repletion. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. The calcium feedback loop and T cell activation: how cytoskeleton networks control intracellular calcium flux.

    PubMed

    Joseph, Noah; Reicher, Barak; Barda-Saad, Mira

    2014-02-01

    During T cell activation, the engagement of a T cell with an antigen-presenting cell (APC) results in rapid cytoskeletal rearrangements and a dramatic increase of intracellular calcium (Ca(2+)) concentration, downstream to T cell antigen receptor (TCR) ligation. These events facilitate the organization of an immunological synapse (IS), which supports the redistribution of receptors, signaling molecules and organelles towards the T cell-APC interface to induce downstream signaling events, ultimately supporting T cell effector functions. Thus, Ca(2+) signaling and cytoskeleton rearrangements are essential for T cell activation and T cell-dependent immune response. Rapid release of Ca(2+) from intracellular stores, e.g. the endoplasmic reticulum (ER), triggers the opening of Ca(2+) release-activated Ca(2+) (CRAC) channels, residing in the plasma membrane. These channels facilitate a sustained influx of extracellular Ca(2+) across the plasma membrane in a process termed store-operated Ca(2+) entry (SOCE). Because CRAC channels are themselves inhibited by Ca(2+) ions, additional factors are suggested to enable the sustained Ca(2+) influx required for T cell function. Among these factors, we focus here on the contribution of the actin and microtubule cytoskeleton. The TCR-mediated increase in intracellular Ca(2+) evokes a rapid cytoskeleton-dependent polarization, which involves actin cytoskeleton rearrangements and microtubule-organizing center (MTOC) reorientation. Here, we review the molecular mechanisms of Ca(2+) flux and cytoskeletal rearrangements, and further describe the way by which the cytoskeletal networks feedback to Ca(2+) signaling by controlling the spatial and temporal distribution of Ca(2+) sources and sinks, modulating TCR-dependent Ca(2+) signals, which are required for an appropriate T cell response. This article is part of a Special Issue entitled: Reciprocal influences between cell cytoskeleton and membrane channels, receptors and transporters. Guest Editor: Jean Claude Hervé. © 2013.

  2. Haemophilus parasuis encodes two functional cytolethal distending toxins: CdtC contains an atypical cholesterol recognition/interaction region.

    PubMed

    Zhou, Mingguang; Zhang, Qiang; Zhao, Jianping; Jin, Meilin

    2012-01-01

    Haemophilus parasuis is the causative agent of Glässer's disease of pigs, a disease associated with fibrinous polyserositis, polyarthritis and meningitis. We report here that H. parasuis encodes two copies of the cytolethal distending toxin (Cdt), and that the two Cdts showed uniform toxin activity in vitro. We demonstrate that the three Cdt peptides can form an active tripartite holotoxin that exhibits maximum cellular toxicity, and that CdtA and CdtB form a more active toxin than CdtB and CdtC. Moreover, the cellular toxicity is associated with the binding of Cdt subunits to cells. Further analysis indicates that the CdtC subunit contains an atypical cholesterol recognition/interaction amino acid consensus (CRAC) region. Mutation of the CRAC site resulted in decreased cell toxicity. Finally, western blot analysis showed that all 15 H. parasuis reference strains and 109 clinical isolates expressed the CdtB subunit, indicating that Cdt is a conserved putative virulence factor of H. parasuis. This is the first report of the molecular and cellular basis of Cdt-host interactions in H. parasuis.

  3. Computer codes developed and under development at Lewis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1992-01-01

    The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
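
    The charts described above amount to a small structured catalog (acronym, physics descriptors, current enhancements, status) grouped by function. A minimal sketch of such a catalog structure follows; the group labels echo the abstract, but the entries themselves are invented placeholders rather than actual LeRC codes.

```python
# Hedged sketch of a code catalog grouped by function.  The group names follow
# the abstract; the entries themselves are invented placeholders, not LeRC codes.
from dataclasses import dataclass

@dataclass
class CodeEntry:
    acronym: str
    physics: list        # select physics descriptors
    enhancements: list   # current enhancements
    status: str          # availability and documentation status

catalog = {
    "composite mechanics": [
        CodeEntry("EXAMPLE1", ["ply-level stresses"], ["improved failure criteria"],
                  "available, documented"),
    ],
    "probabilistic structural analysis": [
        CodeEntry("EXAMPLE2", ["random loads and material scatter"], ["faster sampling"],
                  "under development"),
    ],
}

for group, entries in catalog.items():
    for entry in entries:
        print(f"{group}: {entry.acronym} -- {entry.status}")
```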

  4. High altitude chemically reacting gas particle mixtures. Volume 3: Computer code user's and applications manual. [rocket nozzle and orbital plume flow fields

    NASA Technical Reports Server (NTRS)

    Smith, S. D.

    1984-01-01

    A users manual for the RAMP2 computer code is provided. The RAMP2 code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. The general structure and operation of RAMP2 are discussed. A user input/output guide for the modified TRAN72 computer code and the RAMP2F code is given. The application and use of the BLIMPJ module are considered. Sample problems involving the space shuttle main engine and motor are included.

  5. Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burk, K.W.; Andrews, G.L.

    1989-02-01

    The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The station is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.

  6. CUP-1 Is a Novel Protein Involved in Dietary Cholesterol Uptake in Caenorhabditis elegans

    PubMed Central

    Valdes, Victor J.; Athie, Alejandro; Salinas, Laura S.; Navarro, Rosa E.; Vaca, Luis

    2012-01-01

    Sterol transport and distribution are essential processes in all multicellular organisms. Survival of the nematode Caenorhabditis elegans depends on dietary absorption of sterols present in the environment. However, the general mechanisms associated with sterol uptake in nematodes are poorly understood. In the present work we provide evidence showing that a previously uncharacterized transmembrane protein, designated Cholesterol Uptake Protein-1 (CUP-1), is involved in dietary cholesterol uptake in C. elegans. Animals lacking CUP-1 showed hypersensitivity to cholesterol limitation and were unable to uptake cholesterol. A CUP-1-GFP fusion protein colocalized with cholesterol-rich vesicles, endosomes and lysosomes as well as the plasma membrane. Additionally, by FRET imaging, a direct interaction was found between the cholesterol analog DHE and the transmembrane "cholesterol recognition/interaction amino acid consensus" (CRAC) motif present in C. elegans CUP-1. In silico analysis identified two mammalian homologues of CUP-1. Most interestingly, CRAC motifs are conserved in the mammalian CUP-1 homologues. Our results suggest a role of CUP-1 in cholesterol uptake in C. elegans and open up the possibility for the existence of a new class of proteins involved in sterol absorption in mammals. PMID:22479487

  7. "Speeding up the road to recovery": The Complex Recovery Assessment and Consultation (CRAC) service.

    PubMed

    Davis Le Brun, Stephanie

    2015-01-01

    The number of bed closures in mental health is on the rise, creating additional pressure on services, including acute mental health services. An efficient way of working is required in order to streamline the acute care pathway and decrease unnecessary delays to length of stay, ensuring all individuals can be offered an inpatient bed when in crisis. The Complex Recovery Assessment and Consultation (CRAC) service was created in order to support acute mental health inpatient clinicians in streamlining hospital stays for service users who present with complex presentations that require lengthier admissions (over 40 days) by offering assessment, advice, and intervention from a rehabilitation perspective. The team was also created to understand why individuals may require a lengthy hospital stay. Preliminary data showed that requiring a placement on discharge proved to be the most significant factor in increased length of stay and so the team took on a new role of discharge coordinator after around a year of operating. This involved assisting in decreasing any delays out of hospital through improved communication and dedicated time to complete tasks, such as completing paperwork for placement referrals and funding panels. Since taking on this role it was found that the time taken for individuals to be discharged to a rehabilitation or specialist placement decreased; a rehabilitation placement by 13.12 days and a specialist placement by 9.22 days. Discharge to a family address also decreased by 2.9 days and a home address by 2.47 days. Those patients with complex presentations benefit from having one dedicated team to coordinate the discharge process. Their lengthier acute inpatient stay is improved through streamlining care pathways, ultimately decreasing delays in discharge.

  8. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  9. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell and burnup computation; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); these computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both being developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) controls movement in a core.

  10. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  11. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  12. Computer Description of the Field Artillery Ammunition Supply Vehicle

    DTIC Science & Technology

    1983-04-01

    Keywords: Combinatorial Geometry (COM-GEOM); GIFT computer code; computer target description. ... input to the GIFT computer code to generate target vulnerability data. ... Combinatorial Geometry (COM-GEOM) description. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description and ...

  13. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
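
    The report gives only the error figures, not the form of the underlying model. As a purely illustrative sketch, a calibrated prediction of CPU time and its percentage error against a measured value might look like the following; the linear form, the calibration constants and the measured value are assumptions, not the SCOPE model or its data.

```python
# Hedged sketch: resource prediction from calibration coefficients and the
# resulting percentage error.  The linear model and every number here are
# illustrative assumptions, not the SCOPE code's actual model or data.

def predict_cpu_seconds(num_equations: int, a: float, b: float) -> float:
    """Assumed calibrated model: CPU time grows linearly with problem size."""
    return a + b * num_equations

def percent_error(predicted: float, measured: float) -> float:
    """Relative error of the prediction, as a percentage of the measured value."""
    return 100.0 * abs(predicted - measured) / measured

# Hypothetical calibration constants for one machine / analysis-code pair.
a_cal, b_cal = 2.0, 0.015          # seconds, seconds per equation
measured_cpu = 95.0                # hypothetical measured CPU seconds
predicted_cpu = predict_cpu_seconds(6000, a_cal, b_cal)

print(f"predicted CPU: {predicted_cpu:.1f} s")
print(f"error vs measured: {percent_error(predicted_cpu, measured_cpu):.1f}%")
```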

  14. Computer Code Aids Design Of Wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1993-01-01

    AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.

  15. User manual for semi-circular compact range reflector code: Version 2

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1987-01-01

    A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.

  16. The role of receptor topology in the vitamin D3 uptake and Ca{sup 2+} response systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrill, Gene A., E-mail: gene.morrill@einstein.yu.edu; Kostellow, Adele B.; Gupta, Raj K.

    The steroid hormone, vitamin D₃, regulates gene transcription via at least two receptors and initiates putative rapid response systems at the plasma membrane. The vitamin D receptor (VDR) binds vitamin D₃, and a second receptor, importin-4, imports the VDR-vitamin D₃ complex into the nucleus via nuclear pores. Here we present evidence that the Homo sapiens VDR homodimer contains two transmembrane (TM) helices (³²⁷E–D³⁴²), two TM "half-helices" (²⁶⁴K–N²⁷⁶), one or more large channels, and 16 cholesterol-binding (CRAC/CARC) domains. The importin-4 monomer exhibits 3 pore-lining regions (²²⁶E–L²⁵¹; ⁷⁶⁸V–G⁷⁸³; ⁸⁷⁶S–A⁸⁹¹) and 16 CRAC/CARC domains. The MEMSAT algorithm indicates that VDR and importin-4 may not be restricted to the cytoplasm and nucleus. The VDR homodimer TM helix topology predicts insertion into the plasma membrane, with two 84-residue C-terminal regions being extracellular. Similarly, MEMSAT predicts importin-4 insertion into the plasma membrane with 226-residue extracellular N-terminal regions and 96-residue C-terminal extracellular loops, with the pore-lining regions contributing gated Ca²⁺ channels. The PoreWalker algorithm indicates that, of the 427 residues in each VDR monomer, 91 line the largest channel, including two vitamin D₃ binding sites and residues from both the TM helix and "half-helix". Cholesterol-binding domains also extend into the channel within the ligand binding region. Programmed changes in bound cholesterol may regulate both membrane Ca²⁺ response systems and vitamin D₃ uptake as well as receptor internalization by the endomembrane system, culminating in uptake of the vitamin D₃-VDR-importin-4 complex into the nucleus.

  17. Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.

    1984-01-01

    A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64³ are contained within core of a 2 million 64-bit word memory. Computer speed timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
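
    A quick back-of-the-envelope check of the in-core claim can be made as follows; the number of 32-bit values stored per grid point is an assumed illustrative figure, not taken from the report.

```python
# Hedged sketch: does a 64^3 mesh fit in a 2-million-word (64-bit) memory when
# stored in 32-bit precision?  The variable count per grid point is an assumed
# illustrative figure, not taken from the report.
mesh_points = 64 ** 3                  # 262,144 grid points
variables_per_point = 12               # assumption: velocity components plus work arrays
values_32bit = mesh_points * variables_per_point
words_64bit_needed = values_32bit / 2  # two 32-bit values pack into one 64-bit word
core_words_64bit = 2_000_000

print(f"grid points:         {mesh_points:,}")
print(f"64-bit words needed: {words_64bit_needed:,.0f}")
print(f"fits in core:        {words_64bit_needed <= core_words_64bit}")
```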

  18. Operations analysis (study 2.1). Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1974-01-01

    A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.

  19. Antenna pattern study, task 2

    NASA Technical Reports Server (NTRS)

    Harper, Warren

    1989-01-01

    Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The tasks were to update the existing codes and certain supplementary software, to install the codes on a computer that will be delivered to the customer, to provide a capability for graphic display of the data computed by the codes, and to assist the customer in solving specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.

  20. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  1. 2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries

    ERIC Educational Resources Information Center

    Colby, Jennifer

    2015-01-01

    This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…

  2. Microgravity computing codes. User's guide

    NASA Astrophysics Data System (ADS)

    1982-01-01

    Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple 2. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any Apple 2-48K standard system with single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.

  3. MIADS2 ... an alphanumeric map information assembly and display system for a large computer

    Treesearch

    Elliot L. Amidon

    1966-01-01

    A major improvement and extension of the Map Information Assembly and Display System (MIADS) developed in 1964 is described. Basic principles remain unchanged, but the computer programs have been expanded and rewritten for a large computer, in Fortran IV and MAP languages. The code system is extended from 99 integers to about 2,200 alphanumeric 2-character codes. Hand-...

  4. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L L; Trent, D S; Budden, M J

    During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.

  5. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.

  6. User's manual for a two-dimensional, ground-water flow code on the Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.

    1978-08-30

    A ground-water hydrology computer code, programmed by R.L. Taylor (in Proc. American Society of Civil Engineers, Journal of Hydraulics Division, 93(HY2), pp. 25-33 (1967)), has been adapted to the Octopus computer system at Lawrence Livermore Laboratory. Using an example problem, this manual details the input, output, and execution options of the code.

  7. Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne

    2003-01-01

    A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data that was collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial as well as yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.

  8. Monte Carlo simulation of Ising models by multispin coding on a vector computer

    NASA Astrophysics Data System (ADS)

    Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus

    1984-11-01

    Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
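
    The essence of multispin coding is to pack one spin per bit of a machine word so that a single logical operation acts on many spins at once. The sketch below illustrates only that word-parallel idea on a 1-D Ising ring in Python; it is not the Cyber 205 update algorithm of the paper, and the spin packing and energy convention are illustrative choices.

      # Word-parallel ("multispin") energy evaluation for a 1-D Ising ring:
      # one spin per bit, bit = 1 meaning s = +1 and bit = 0 meaning s = -1.

      def ising_energy_packed(spins: int, n: int, J: float = 1.0) -> float:
          """Energy -J * sum_i s_i s_(i+1) of n spins packed into one integer."""
          mask = (1 << n) - 1
          # Rotate the word one bit so each spin lines up with its right neighbour.
          neighbours = ((spins >> 1) | (spins << (n - 1))) & mask
          antiparallel = bin((spins ^ neighbours) & mask).count("1")  # unequal bonds
          parallel = n - antiparallel
          return -J * (parallel - antiparallel)

      if __name__ == "__main__":
          n = 8
          print(ising_energy_packed((1 << n) - 1, n))  # all spins up  -> -8.0
          print(ising_energy_packed(0b10101010, n))    # alternating   -> +8.0

    A full multispin update applies the same word-wide logic to the Metropolis accept/reject step, which is what makes the approach well suited to vector machines.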

  9. Beer flavor provokes striatal dopamine release in male drinkers: mediation by family history of alcoholism.

    PubMed

    Oberlin, Brandon G; Dzemidzic, Mario; Tran, Stella M; Soeurt, Christina M; Albrecht, Daniel S; Yoder, Karmen K; Kareken, David A

    2013-08-01

    Striatal dopamine (DA) is increased by virtually all drugs of abuse, including alcohol. However, drug-associated cues are also known to provoke striatal DA transmission, a phenomenon linked to the motivated behaviors associated with addiction. To our knowledge, no one has tested whether alcohol's classically conditioned flavor cues, in the absence of a significant pharmacologic effect, are capable of eliciting striatal DA release in humans. Employing positron emission tomography (PET), we hypothesized that beer's flavor alone can reduce the binding potential (BP) of [(11)C]raclopride (RAC; a reflection of striatal DA release) in the ventral striatum, relative to an appetitive flavor control. Forty-nine men, ranging from social to heavy drinking, mean age 25, with a varied family history of alcoholism underwent two [(11)C]RAC PET scans: one while tasting beer, and one while tasting Gatorade. Relative to the control flavor of Gatorade, beer flavor significantly increased self-reported desire to drink, and reduced [(11)C]RAC BP, indicating that the alcohol-associated flavor cues induced DA release. BP reductions were strongest in subjects with first-degree alcoholic relatives. These results demonstrate that alcohol-conditioned flavor cues can provoke ventral striatal DA release, absent significant pharmacologic effects, and that the response is strongest in subjects with a greater genetic risk for alcoholism. Striatal DA responses to salient alcohol cues may thus be an inherited risk factor for alcoholism.

  10. Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabha, H.; Marleau, G.

    2012-07-01

    For the computation of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed by using this code. For testing, the results are compared, where possible, with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module by using MATLAB; these plots are also presented here. Results are presented with an increasing number of lines to show the convergence of these results. We have numerically computed volumes, surface areas, and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. Accuracy in the computation of fluxes up to the second decimal is achieved with fewer lines. (authors)

  11. Comparison of FDNS liquid rocket engine plume computations with SPF/2

    NASA Technical Reports Server (NTRS)

    Kumar, G. N.; Griffith, D. O., II; Warsi, S. A.; Seaford, C. M.

    1993-01-01

    Prediction of a plume's shape and structure is essential to the evaluation of base region environments. The JANNAF standard plume flowfield analysis code SPF/2 predicts plumes well, but cannot analyze base regions. Full Navier-Stokes CFD codes can calculate both zones; however, before they can be used, they must be validated. The CFD code FDNS3D (Finite Difference Navier-Stokes Solver) was used to analyze the single plume of a Space Transportation Main Engine (STME) and comparisons were made with SPF/2 computations. Both frozen and finite rate chemistry models were employed as well as two turbulence models in SPF/2. The results indicate that FDNS3D plume computations agree well with SPF/2 predictions for liquid rocket engine plumes.

  12. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  13. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  14. Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, A.L.; Wilson, J.H.; Arwood, P.C.

    The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, ''pretest'' calculations (for code-to-code comparisons) and ''posttest'' calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.

  15. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  16. A Guide to Axial-Flow Turbine Off-Design Computer Program AXOD2

    NASA Technical Reports Server (NTRS)

    Chen, Shu-Cheng S.

    2014-01-01

    A User's Guide for the axial-flow turbine off-design computer program AXOD2 is presented in this paper. This User's Guide is supplementary to the original User's Manual of AXOD. Three notable contributions of AXOD2 to its predecessor AXOD, both in the content of the Guide and in the functionality of the code, are described and discussed at length. These are: 1) a rational representation of the mathematical principles applied, with concise descriptions of the formulas implemented in the actual coding; their physical implications are addressed; 2) the creation and documentation of an Addendum Listing of input namelist parameters unique to AXOD2, which differ from or are in addition to the original input namelists given in the Manual of AXOD; their usages are discussed; and 3) the institution of proper stoppages of the code execution, encoding termination messages and error messages of the execution into AXOD2. These measures safeguard the integrity of the code execution, such that a failure mode encountered during a case study does not plunge the code execution into an indefinite loop or cause a blow-out of the program execution. Details on these are discussed and illustrated in this paper. Moreover, this computer program has since been reconstructed substantially. Standard FORTRAN language was instituted, and the code was formatted in double precision (REAL*8). As a result, the code is now suited for use in a local desktop computer environment, is perfectly portable to any operating system, and can be executed by any FORTRAN compiler equivalent to a FORTRAN 90/95 compiler. AXOD2 will be available through the NASA Glenn Research Center (GRC) Software Repository.

  17. Calculation of Water Drop Trajectories to and About Arbitrary Three-Dimensional Bodies in Potential Airflow

    NASA Technical Reports Server (NTRS)

    Norment, H. G.

    1980-01-01

    Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
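
    To make the trajectory step concrete, the sketch below integrates a single drop's equation of motion with Stokes-law drag and gravity in a uniform airflow. The drag law, drop size, and time step are illustrative placeholders only, since the codes described above use experimental water drop drag relations rather than this simple model.

      # Minimal sketch of a water-drop trajectory step under Stokes drag and gravity.
      import numpy as np

      RHO_W, MU_AIR = 1000.0, 1.8e-5             # water density [kg/m^3], air viscosity [Pa s]
      G = np.array([0.0, -9.81])                 # gravitational acceleration [m/s^2]

      def step(x, v, u_air, d, dt):
          """Advance drop position x and velocity v by one forward-Euler step."""
          tau = RHO_W * d**2 / (18.0 * MU_AIR)   # Stokes relaxation time for a drop of diameter d
          a = (u_air - v) / tau + G              # drag toward the local air velocity, plus gravity
          return x + dt * v, v + dt * a

      if __name__ == "__main__":
          x, v = np.zeros(2), np.zeros(2)
          u_air = np.array([10.0, 0.0])          # uniform 10 m/s horizontal airflow
          for _ in range(1000):                  # 0.1 s of flight for a 50-micron drop
              x, v = step(x, v, u_air, d=50e-6, dt=1e-4)
          print("position:", x, "velocity:", v)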

  18. The University for Industry and Local Information, Advice and Guidance Partnerships. Report on a NICEC/CRAC Policy Consultation Held in Association with the National Advisory Council for Careers and Educational Guidance (Cambridge, England, February 24-25, 1999). Conference Briefing.

    ERIC Educational Resources Information Center

    Watts, Tony

    The University for Industry (UFI) and local information, advice, and guidance (IAG) partnerships are two key aspects of the British Government's lifelong learning strategy. UFI's key role is to expand the demand for and supply of learning and to exploit the learning potential of information and communication technologies. The main UFI activities…

  19. Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen

    NASA Technical Reports Server (NTRS)

    Blackwell, H. E.

    1991-01-01

    An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions, and vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc jet N2 and air spectral data were reviewed and NO emission characteristics were studied. A review of operational procedures of the DSMC code was made. Information on other appropriate codes and modifications, including ionization, was compiled, and the applicability of the reviewed codes to the task requirements was determined. A review was also made of computational procedures used in the CFD codes of Li and other codes on JSC computers. An analysis was made of problems associated with integrating the specific chemical kinetics applicable to the task into CFD codes.

  20. Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.

    1991-01-01

    The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.

  1. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2 to the m power). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
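
    For reference, the "conventional method of computing the syndromes directly" amounts to evaluating the received polynomial at consecutive powers of a primitive field element. The sketch below does this over the small field GF(2^4) with primitive polynomial x^4 + x + 1; the field size, polynomial, and number of syndromes are illustrative choices, not parameters from the paper.

      # Direct syndrome evaluation over GF(2^4), the baseline the transform method improves on.

      def build_gf16():
          """Exponential and log tables for GF(16) with primitive polynomial x^4 + x + 1."""
          exp, log = [0] * 30, [0] * 16
          x = 1
          for i in range(15):
              exp[i] = x
              log[x] = i
              x <<= 1
              if x & 0x10:          # reduce modulo x^4 + x + 1 (0b10011)
                  x ^= 0x13
          for i in range(15, 30):   # duplicated tail so products need no modular reduction
              exp[i] = exp[i - 15]
          return exp, log

      def gf_mul(a, b, exp, log):
          return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

      def syndromes(received, num_syndromes, exp, log):
          """S_j = r(alpha^j), j = 1..num_syndromes, by Horner's rule (highest-order coefficient first)."""
          out = []
          for j in range(1, num_syndromes + 1):
              alpha_j, s = exp[j % 15], 0
              for coeff in received:
                  s = gf_mul(s, alpha_j, exp, log) ^ coeff
              out.append(s)
          return out

      if __name__ == "__main__":
          exp, log = build_gf16()
          rx = [1, 0, 3, 0, 0, 7, 0, 0, 0, 2, 0, 0, 0, 0, 5]   # a toy 15-symbol received word
          print(syndromes(rx, 4, exp, log))                     # nonzero syndromes flag errors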

  2. Raptor: An Enterprise Knowledge Discovery Engine Version 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-08-31

    The Raptor Version 2.0 computer code uses a set of documents as seed documents to recommend documents of interest from a large target set of documents. The computer code provides results that show the recommended documents with the highest similarity to the seed documents. Version 2.0 was specifically developed to work with SharePoint 2007 and MS SQL Server.

  3. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  4. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.

  5. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  6. ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.
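
    As a reminder of what the strain-rosette step involves, the sketch below reduces a standard three-gauge 45-degree rectangular rosette to principal strains and a principal direction. This is only the textbook rosette algebra; ASR4 itself works with four gauges and the Warpinski-Teufel viscoelastic model, which are not reproduced here.

      # Textbook reduction of a 45-degree rectangular strain rosette (gauges at 0, 45, 90 deg).
      import math

      def rosette_principal(e0, e45, e90):
          """Return (eps_max, eps_min, theta_deg) from the three gauge strains."""
          mean = 0.5 * (e0 + e90)
          radius = math.hypot(0.5 * (e0 - e90), 0.5 * (2.0 * e45 - e0 - e90))
          theta = 0.5 * math.degrees(math.atan2(2.0 * e45 - e0 - e90, e0 - e90))
          return mean + radius, mean - radius, theta

      if __name__ == "__main__":
          # Made-up microstrain readings, for illustration only.
          print(rosette_principal(420e-6, 310e-6, 80e-6))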

  7. Coupled 2-dimensional cascade theory for noise an d unsteady aerodynamics of blade row interaction in turbofans. Volume 2: Documentation for computer code CUP2D

    NASA Technical Reports Server (NTRS)

    Hanson, Donald B.

    1994-01-01

    A two dimensional linear aeroacoustic theory for rotor/stator interaction with unsteady coupling was derived and explored in Volume 1 of this report. Computer program CUP2D has been written in FORTRAN embodying the theoretical equations. This volume (Volume 2) describes the structure of the code, installation and running, preparation of the input file, and interpretation of the output. A sample case is provided with printouts of the input and output. The source code is included with comments linking it closely to the theoretical equations in Volume 1.

  8. Instructions for the use of the CIVM-Jet 4C finite-strain computer code to calculate the transient structural responses of partial and/or complete arbitrarily-curved rings subjected to fragment impact

    NASA Technical Reports Server (NTRS)

    Rodal, J. J. A.; French, S. E.; Witmer, E. A.; Stagliano, T. R.

    1979-01-01

    The CIVM-JET 4C computer program for the 'finite strain' analysis of 2-d transient structural responses of complete or partial rings and beams subjected to fragment impact is stored on tape as a series of individual files. The subroutines found in each of these files are described in detail. All references to the CIVM-JET 4C program are made assuming that the user has a copy of NASA CR-134907 (ASRL TR 154-9), which serves as a user's guide to (1) the CIVM-JET 4B computer code and (2) the CIVM-JET 4C computer code 'with the use of the modified input instructions' attached hereto.

  9. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward facing step, with comparisons made against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  10. A supersonic three-dimensional code for flow over blunt bodies: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Chaussee, D. S.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of steady, supersonic, three dimensional, inviscid flow over blunt bodies is illustrated. Input and output are given and explained for two cases: a pointed cone of 20 deg half angle at 15 deg angle of attack in a free stream with M sub infinity = 7, and a cone-ogive-cylinder at 10 deg angle of attack with M sub infinity = 2.86. A source listing of the computer code is provided.

  11. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
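
    The phrase "fully 4th order accurate conservative finite differencing" refers to the order of the spatial stencils used in phase space. As a point of reference only, the sketch below shows the standard 5-point, 4th-order centred first-derivative stencil on a periodic grid and checks its accuracy on a sine wave; it is not the LOKI conservative scheme itself, and the grid size is an arbitrary choice.

      # Standard 4th-order centred first derivative on a periodic grid (NumPy).
      import numpy as np

      def ddx_4th(f, dx):
          """(f[i-2] - 8 f[i-1] + 8 f[i+1] - f[i+2]) / (12 dx), periodic in i."""
          return (np.roll(f, 2) - 8.0 * np.roll(f, 1)
                  + 8.0 * np.roll(f, -1) - np.roll(f, -2)) / (12.0 * dx)

      if __name__ == "__main__":
          n = 64
          x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
          dx = x[1] - x[0]
          err = np.max(np.abs(ddx_4th(np.sin(x), dx) - np.cos(x)))
          print(f"max error on {n} points: {err:.2e}")   # O(dx^4): drops ~16x when n doubles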

  12. Experimental and analytical comparison of flowfields in a 110 N (25 lbf) H2/O2 rocket

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.; Penko, Paul F.; Schneider, Steven J.; Kim, Suk C.

    1991-01-01

    A gaseous hydrogen/gaseous oxygen 110 N (25 lbf) rocket was examined through the RPLUS code using the full Navier-Stokes equations with finite rate chemistry. Performance tests were conducted on the rocket in an altitude test facility. Preliminary parametric analyses were performed for a range of mixture ratios and fuel film cooling percentages. It is shown that the computed values of specific impulse and characteristic exhaust velocity follow the trend of the experimental data. Specific impulse computed by the code is lower than the comparable test values by about two to three percent. The computed characteristic exhaust velocity values are lower than the comparable test values by three to four percent. Thrust coefficients computed by the code are found to be within two percent of the measured values. It is concluded that the discrepancy between computed and experimental performance values could not be attributed to experimental uncertainty.

  13. The EDIT-COMGEOM Code

    DTIC Science & Technology

    1975-09-01

    This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code to the target description data that can be used in the GIFT computer code.

  14. Procedures for the computation of unsteady transonic flows including viscous effects

    NASA Technical Reports Server (NTRS)

    Rizzetta, D. P.

    1982-01-01

    Modifications of the code LTRAN2, developed by Ballhaus and Goorjian, which account for viscous effects in the computation of planar unsteady transonic flows are presented. Two models are considered and their theoretical development and numerical implementation is discussed. Computational examples employing both models are compared with inviscid solutions and with experimental data. Use of the modified code is described.

  15. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation and the sequential computation (original code). Execution efficiency and code scalability were examined through field scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
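
    The domain decomposition and inter-node data exchange mentioned above follow the usual ghost-cell pattern. The mpi4py sketch below shows that generic pattern for a 1-D decomposition; it is a minimal illustration only, not THC-MP's actual data structure, solver coupling, or communication scheme.

      # Generic 1-D domain decomposition with ghost-cell exchange
      # (run with, e.g., mpiexec -n 4 python halo.py).
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 8                                   # interior cells owned by this rank
      u = np.full(n_local + 2, float(rank))         # +2 ghost cells, filled with the rank id

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Send the first interior cell left and receive the right neighbour's value into the
      # right ghost cell, then do the mirror-image exchange for the other side.
      comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
      comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

      print(f"rank {rank}: ghost cells = ({u[0]}, {u[-1]})")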

  16. A selected annotated bibliography of the core biomedical literature pertaining to stroke, cervical spine, manipulation and head/neck movement

    PubMed Central

    Gotlib, Allan C.; Thiel, Haymo

    1985-01-01

    This manuscript’s purpose was to establish a knowledge base of information related to stroke and the cervical spine vascular structures, from both historical and current perspectives. Both indexed (i.e., Index Medicus, CRAC) and non-indexed scientific biomedical literature systems were scanned and the pertinent manuscripts were annotated. Citation is by occurrence in the literature so that historical trends may be viewed more easily. No analysis of the reference material is offered. Suggested, however, is that: 1. complications to cervical spine manipulation are being recognized and reported with increasing frequency, 2. a cause and effect relationship between stroke and cervical spine manipulation has not been established, 3. a screening mechanism that is valid, reliable and reasonable needs to be established.

  17. Sterol Binding by the Tombusviral Replication Proteins Is Essential for Replication in Yeast and Plants.

    PubMed

    Xu, Kai; Nagy, Peter D

    2017-04-01

    Membranous structures derived from various organelles are important for replication of plus-stranded RNA viruses. Although the important roles of co-opted host proteins in RNA virus replication have been appreciated for a decade, the equally important functions of cellular lipids in virus replication have been gaining full attention only recently. Previous work with Tomato bushy stunt tombusvirus (TBSV) in model host yeast has revealed essential roles for phosphatidylethanolamine and sterols in viral replication. To further our understanding of the role of sterols in tombusvirus replication, in this work we showed that the TBSV p33 and p92 replication proteins could bind to sterols in vitro. The sterol binding by p33 is supported by cholesterol recognition/interaction amino acid consensus (CRAC) and CARC-like sequences within the two transmembrane domains of p33. Mutagenesis of the critical Y amino acids within the CRAC and CARC sequences blocked TBSV replication in yeast and plant cells. We also showed the enrichment of sterols in the detergent-resistant membrane (DRM) fractions obtained from yeast and plant cells replicating TBSV. The DRMs could support viral RNA synthesis on both the endogenous and exogenous templates. A lipidomic approach showed the lack of enhancement of sterol levels in yeast and plant cells replicating TBSV. The data support the notion that the TBSV replication proteins are associated with sterol-rich detergent-resistant membranes in yeast and plant cells. Together, the results obtained in this study and the previously published results support the local enrichment of sterols around the viral replication proteins that is critical for TBSV replication. IMPORTANCE One intriguing aspect of viral infections is their dependence on efficient subcellular assembly platforms serving replication, virion assembly, or virus egress via budding out of infected cells. These assembly platforms might involve sterol-rich membrane microdomains, which are heterogeneous and highly dynamic nanoscale structures usurped by various viruses. Here, we demonstrate that TBSV p33 and p92 replication proteins can bind to sterol in vitro. Mutagenesis analysis of p33 within the CRAC and CARC sequences involved in sterol binding shows the important connection between the abilities of p33 to bind to sterol and to support TBSV replication in yeast and plant cells. Together, the results further strengthen the model that cellular sterols are essential as proviral lipids during viral replication. Copyright © 2017 American Society for Microbiology.

  18. Isolation and structure determination of malevamide E, a dolastatin 14 analogue, from the marine cyanobacterium Symploca laete-viridis.

    PubMed

    Adams, Beatrice; Pörzgen, Peter; Pittman, Emily; Yoshida, Wesley Y; Westenburg, Hans E; Horgen, F David

    2008-05-01

    A new depsipeptide, malevamide E (1), was isolated from field-collected colonies of the filamentous cyanobacterium Symploca laete-viridis. The gross structure of 1 was determined by spectroscopic analyses, including one- and two-dimensional NMR and accurately measured MS/MS. Chiral HPLC analyses of an acid hydrolysate of 1 allowed the stereochemical assignments of its amino acid residues, which include N-methyl-L-alanine, alpha-N,gamma-N-dimethyl-L-asparagine, N-methyl-L-phenylalanine, L-proline, D-valine, and N-methyl-L-valine. LC-MS/MS analysis of S. laete-viridis fractions established the co-occurrence of malevamide E (1) and its homologue dolastatin 14 (2), which was previously reported in low yield from the sea hare Dolabella auricularia. Malevamide E (1) demonstrated a dose-dependent (2-45 microM) inhibition of store-operated Ca(2+) entry in thapsigargin-treated human embryonic kidney (HEK) cells, indicating an inhibitory effect on Ca(2+) release-activated Ca(2+) (CRAC) channels.

  19. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  20. A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravenec, R. V.; Chen, Y.; Wan, W.

    2013-10-15

    A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes gyro [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and gs2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code gem [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of gem with gyro and gs2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.

  1. Calculation of water drop trajectories to and about arbitrary three-dimensional lifting and nonlifting bodies in potential airflow

    NASA Technical Reports Server (NTRS)

    Norment, H. G.

    1985-01-01

    Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.

  2. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.

    1987-01-01

    The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube (128 megabytes of memory in a 32-node configuration) was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
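
    Computational speedup of this kind is usually reported as the serial-to-parallel runtime ratio together with a per-node efficiency. The short sketch below shows that bookkeeping; the timings in it are made-up placeholders, not measurements from the task.

      # Speedup and parallel efficiency from (hypothetical) serial and parallel runtimes.
      def speedup_and_efficiency(t_serial, t_parallel, n_nodes):
          s = t_serial / t_parallel
          return s, s / n_nodes

      if __name__ == "__main__":
          s, e = speedup_and_efficiency(t_serial=3600.0, t_parallel=150.0, n_nodes=32)
          print(f"speedup = {s:.1f}x, parallel efficiency = {e:.0%}")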

  3. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  4. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
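
    The Monte Carlo machinery behind such a phase-diagram study is, at its core, Metropolis sampling of a spin model with three-body couplings. The sketch below runs single-spin-flip Metropolis on a 1-D random three-body Ising chain purely as an illustration of that machinery; the lattice, couplings, temperature, and disorder distribution are arbitrary choices and not those of the color-code mapping.

      # Single-spin-flip Metropolis for a 1-D random three-body Ising chain,
      # H = -sum_i J_i s_i s_(i+1) s_(i+2), periodic boundaries.
      import numpy as np

      rng = np.random.default_rng(0)

      def delta_e(s, J, k, n):
          """Energy change from flipping spin k (it enters three consecutive triples)."""
          dE = 0.0
          for start in (k - 2, k - 1, k):
              i = start % n
              dE += 2.0 * J[i] * s[i] * s[(i + 1) % n] * s[(i + 2) % n]
          return dE

      def metropolis(n=64, T=2.0, sweeps=200, p_ferro=0.9):
          J = np.where(rng.random(n) < p_ferro, 1.0, -1.0)   # quenched random couplings
          s = rng.choice([-1.0, 1.0], size=n)
          for _ in range(sweeps):
              for _ in range(n):
                  k = int(rng.integers(n))
                  dE = delta_e(s, J, k, n)
                  if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                      s[k] = -s[k]
          return -np.sum(J * s * np.roll(s, -1) * np.roll(s, -2)) / n   # energy per spin

      if __name__ == "__main__":
          print("energy per spin:", metropolis())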

  5. Photoionization and High Density Gas

    NASA Technical Reports Server (NTRS)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.

  6. Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold

    1997-01-01

    The computer codes AERO2S and WINGDES are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.

  7. Hypersonic simulations using open-source CFD and DSMC solvers

    NASA Astrophysics Data System (ADS)

    Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.

    2016-11-01

    Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have shown to compare reasonably well, thus providing a useful basis for other codes to compare against.

  8. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  9. Critical evaluation of Jet-A spray combustion using propane chemical kinetics in gas turbine combustion simulated by KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. L.; Ying, S.-J.

    1990-01-01

    Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as the first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. The researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three different detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, which is developed through the study of the chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, these detailed reactions occupy too much computer memory and CPU time for the computation. Therefore, it serves only as a benchmark case by which to evaluate other simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 for the same conditions as used with the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program code of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.
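
    The "Arrhenius kinetic-controlled" rates referred to above are built from rate coefficients of the modified Arrhenius form k(T) = A T^b exp(-E_a / (R_u T)). The sketch below simply evaluates such a coefficient over a temperature range; the constants are illustrative numbers, not the KIVA-2 or propane-mechanism values.

      # Evaluate a modified Arrhenius rate coefficient k(T) = A * T**b * exp(-Ea / (Ru * T)).
      import math

      R_U = 8.314  # universal gas constant, J/(mol K)

      def arrhenius(T, A, b, Ea):
          """Rate coefficient for pre-exponential A, temperature exponent b, activation energy Ea [J/mol]."""
          return A * T**b * math.exp(-Ea / (R_U * T))

      if __name__ == "__main__":
          for T in (800.0, 1200.0, 1600.0, 2000.0):                # flame-relevant temperatures [K]
              print(T, arrhenius(T, A=4.8e9, b=0.0, Ea=1.256e5))   # purely illustrative constants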

  10. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  11. An efficient method for computing unsteady transonic aerodynamics of swept wings with control surfaces

    NASA Technical Reports Server (NTRS)

    Liu, D. D.; Kao, Y. F.; Fung, K. Y.

    1989-01-01

    A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2; namely, the chordwise mean flow correction and the spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data. Otherwise, it does not require airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including those with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established with those obtained by XTRAN3S codes, Isogai's full potential code and measured data by NLR and RAE. In comparison with these methods, the TES has achieved considerable saving in computer time and reasonable accuracy which suggests immediate industrial applications.

  12. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  13. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message-passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  14. Variation in clinical coding lists in UK general practice: a barrier to consistent data entry?

    PubMed

    Tai, Tracy Waize; Anandarajah, Sobanna; Dhoul, Neil; de Lusignan, Simon

    2007-01-01

    Routinely collected general practice computer data are used for quality improvement; poor data quality including inconsistent coding can reduce their usefulness. To document the diversity of data entry systems currently in use in UK general practice and highlight possible implications for data quality. General practice volunteers provided screen shots of the clinical coding screen they would use to code a diagnosis or problem title in the clinical consultation. The six clinical conditions examined were: depression, cystitis, type 2 diabetes mellitus, sore throat, tired all the time, and myocardial infarction. We looked at the picking lists generated for these problem titles in EMIS, IPS, GPASS and iSOFT general practice clinical computer systems, using the Triset browser as a gold standard for comparison. A mean of 19.3 codes is offered in the picking list after entering a diagnosis or problem title. EMIS produced the longest picking lists and GPASS the shortest, with a mean number of choices of 35.2 and 12.7, respectively. Approximately three-quarters (73.5%) of codes are diagnoses, one-eighth (12.5%) symptom codes, and the remainder come from a range of Read chapters. There was no readily detectable consistent order in which codes were displayed. Velocity coding, whereby commonly-used codes are placed higher in the picking list, results in variation between practices even where they have the same brand of computer system. Current systems for clinical coding promote diversity rather than consistency of clinical coding. As the UK moves towards an integrated health IT system consistency of coding will become more important. A standardised, limited list of codes for primary care might help address this need.

  15. Getting Started in Classroom Computing.

    ERIC Educational Resources Information Center

    Ahl, David H.

    Written for secondary students, this booklet provides an introduction to several computer-related concepts through a set of six classroom games, most of which can be played with little more than a sheet of paper and a pencil. The games are: 1) SECRET CODES--introduction to binary coding, punched cards, and paper tape; 2) GUESS--efficient methods…

  16. A Computational Model for Observation in Quantum Mechanics.

    DTIC Science & Technology

    1987-03-16

    Contents include: Interferometer experiment; The EPR Paradox experiment; The Computational Model, an Overview; Implementation; Code for the EPR paradox experiment; Code for the double slit interferometer experiment; Conclusions. ... particle run counter to fact. The EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden ...

  17. A Combinatorial Geometry Computer Description of the XR311 Vehicle

    DTIC Science & Technology

    1978-04-01

    ... cards or magnetic tape. The shot line output of the GRID subroutine of the GIFT code is also stored on magnetic tape for future vulnerability ... descriptions as processed by the Geometric Information For Targets (GIFT) computer code. This report documents the COM-GEOM target description for all ... 72, March 1974. L.W. Bains and M.J. Reisinger, "The GIFT Code User Manual, Vol. 1, Introduction and Input Requirements," Ballistic Research ...

  18. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  19. Experimental aerothermodynamic research of hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.

    1987-01-01

    The 2-D and 3-D advanced computer codes being developed for use in the design of such hypersonic aircraft as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide the hypersonic experimental data base required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics underlying these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.

  20. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
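    As a concrete illustration of the first acceleration technique listed above, the sketch below contrasts a linear scan with a binary search for locating the bin of a sorted (for example, energy) grid; the grid values and function names are hypothetical and are not taken from the ITS source.

```python
# A minimal sketch: replacing a linear scan over a sorted grid with a binary search.
import bisect

energy_grid = [0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0]  # MeV, sorted ascending (illustrative)

def find_bin_linear(grid, energy):
    """O(n) scan: return the index of the bin containing `energy`."""
    for i in range(len(grid) - 1):
        if grid[i] <= energy < grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def find_bin_binary(grid, energy):
    """O(log n) lookup using bisect; same result for in-range energies."""
    if not grid[0] <= energy < grid[-1]:
        raise ValueError("energy outside grid")
    return bisect.bisect_right(grid, energy) - 1

assert find_bin_linear(energy_grid, 0.7) == find_bin_binary(energy_grid, 0.7)
```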

  1. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.

  2. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  3. Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model

    DTIC Science & Technology

    2014-09-19

    ... the numerical code OpenFOAM®. Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in... performed using the OpenFOAM-2.2.2 computational code utilizing a cell-center-based finite volume method on an unstructured numerical grid. The ... the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM, to ...

  4. Application of CARS to scramjet combustion

    NASA Technical Reports Server (NTRS)

    Antcliff, R. R.

    1987-01-01

    A coherent anti-Stokes Raman spectroscopic (CARS) instrument has been developed for measuring simultaneously temperature and N2 - O2 species concentration in hostile flame environments. A folded BOXCARS arrangement was employed to obtain high spatial resolution. Polarization discrimination against the nonresonant background decreased the lower limits of O2 detectivity. The instrument has been primarily employed for validation of computational fluid-dynamics computer-model codes. Comparisons have been made to both the CHARNAL and TEACH codes on a hydrogen diffusion flame with good results.

  5. Laser Signature Prediction Using The VALUE Computer Program

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander; Hoffman, George A.; Patton, Ronald

    1989-09-01

    A variety of enhancements are being made to the 1976-vintage LASERX computer code. These include: - Surface characterization with BRDF tabular data - Specular reflection from transparent surfaces - Generation of glint direction maps - Generation of relative range imagery - Interface to the LOWTRAN atmospheric transmission code - Interface to the LEOPS laser sensor code - User-friendly menu prompting for easy setup. Versions of VALUE have been written for both VAX/VMS and PC/DOS computer environments. Outputs have also been revised to be user friendly and include tables, plots, and images for (1) intensity, (2) cross section, (3) reflectance, (4) relative range, (5) region type, and (6) silhouette.

  6. Calculation of two-dimensional inlet flow fields in a supersonic free stream: Program documentation and test cases

    NASA Technical Reports Server (NTRS)

    Biringen, S. H.; Mcmillan, O. J.

    1980-01-01

    The use of a computer code for the calculation of two-dimensional inlet flow fields in a supersonic free stream and a nonorthogonal mesh-generation code are illustrated by specific examples. Input, output, and program operation and use are given and explained for the case of supercritical inlet operation at a subdesign Mach number (free-stream Mach number M = 2.09) for an isentropic-compression, drooped-cowl inlet. Source listings of the computer codes are also provided.

  7. A Combinatorial Geometry Target Description of the High Mobility Multipurpose Wheeled Vehicle (HMMWV)

    DTIC Science & Technology

    1985-10-01

    Keywords: Regions, COM-GEOM Region Identification, GIFT, Material ... technique of combinatorial geometry (Com-Geom). The Com-Geom data is used as input to the Geometric Information for Targets (GIFT) computer code to ... This report documents the combinatorial geometry (Com-Geom) target description data which is the input data for the GIFT code.

  8. A return mapping algorithm for isotropic and anisotropic plasticity models using a line search method

    DOE PAGES

    Scherzinger, William M.

    2016-05-01

    The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Here, most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
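    A minimal sketch of the radial return algorithm for a J2 (von Mises) material with linear isotropic hardening is given below for illustration; it shows the elastic-predictor/plastic-corrector structure the abstract refers to and is not the implementation from any particular production code. The material constants in the example are generic, metal-like placeholder values.

```python
# A minimal radial return sketch for J2 plasticity with linear isotropic hardening.
import numpy as np

def radial_return(strain_inc, stress_n, alpha_n, G, K, sigma_y, H):
    """One elastic-predictor / plastic-corrector step.
    strain_inc : 3x3 strain increment tensor
    stress_n   : 3x3 stress at the start of the step
    alpha_n    : equivalent plastic strain at the start of the step
    """
    I = np.eye(3)
    # Elastic trial stress
    de_vol = np.trace(strain_inc)
    de_dev = strain_inc - de_vol / 3.0 * I
    stress_trial = stress_n + K * de_vol * I + 2.0 * G * de_dev
    s_trial = stress_trial - np.trace(stress_trial) / 3.0 * I   # deviatoric part
    norm_s = np.linalg.norm(s_trial)
    # Yield check against the current yield surface
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_n)
    if f_trial <= 0.0:
        return stress_trial, alpha_n                            # elastic step
    # Plastic correction: return radially to the updated yield surface
    dgamma = f_trial / (2.0 * G + 2.0 / 3.0 * H)
    n = s_trial / norm_s
    stress_new = stress_trial - 2.0 * G * dgamma * n
    alpha_new = alpha_n + np.sqrt(2.0 / 3.0) * dgamma
    return stress_new, alpha_new

# Example: a shear strain increment large enough to drive the material past yield.
G, K, sigma_y, H = 80.0e3, 160.0e3, 250.0, 1.0e3   # MPa, illustrative values
d_eps = np.zeros((3, 3)); d_eps[0, 1] = d_eps[1, 0] = 2.0e-3
stress, alpha = radial_return(d_eps, np.zeros((3, 3)), 0.0, G, K, sigma_y, H)
print(stress, alpha)
```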

  9. Development and application of GASP 2.0

    NASA Technical Reports Server (NTRS)

    Mcgrory, W. D.; Huebner, L. D.; Slack, D. C.; Walters, R. W.

    1992-01-01

    GASP 2.0 represents a major new release of the computational fluid dynamics code in wide use by the aerospace community. The authors have spent the last two years analyzing the strengths and weaknesses of the previous version of the finite-rate chemistry, Navier-Stokes solution algorithm. What has resulted is a completely redesigned computer code that offers two to four times the performance of previous versions while requiring as little as one quarter of the memory. In addition to the improvements in efficiency over the original code, Version 2.0 contains many new features. A brief discussion of the improvements made to GASP, and an application using GASP 2.0 which demonstrates some of the new features, are presented.

  10. cloudPEST - A python module for cloud-computing deployment of PEST, a program for parameter estimation

    USGS Publications Warehouse

    Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.

    2011-01-01

    This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST in a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2™) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows® operating system but are extensible to Linux®.

  11. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 2: FEMNAS user guide

    NASA Technical Reports Server (NTRS)

    Manhardt, Paul D.; Orzechowski, J. A.; Baker, A. J.

    1992-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  12. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet

    2003-01-01

    In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, are described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.

  13. Development and application of the GIM code for the Cyber 203 computer

    NASA Technical Reports Server (NTRS)

    Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.

    1982-01-01

    The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding, and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.

  14. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  15. Numerical, Analytical, Experimental Study of Fluid Dynamic Forces in Seals Volume 6: Description of Scientific CFD Code SCISEAL

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh; Przekwas, Andrzej

    2004-01-01

    The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows complex flow domains to be divided so as to make optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based, running under the OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have proven to be a valuable resource for seal development for future air-breathing and space propulsion engines.

  16. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The Code has options for performance estimation with (1) gas mixtures and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the Code. The Code follows closely the methodology and architecture of the NASA-STGSTK Code for the estimation of axial-flow compressor performance with air flow.

  17. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed solely for passive THz devices: it can be applied to any such device and to active THz imaging systems as well. We applied our code to the computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise after computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
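    The multi-filter processing described above can be sketched generically as applying several alternative spatial filters to one captured frame; the filters, parameters, and the simulated image below are common generic choices, not the authors' original filters or data.

```python
# A minimal sketch of applying alternative spatial filters to a simulated noisy frame,
# in the spirit of producing several filtered versions of a single captured THz image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.normal(loc=100.0, scale=20.0, size=(128, 128))   # noisy stand-in for a THz frame
image[40:60, 40:60] += 50.0                                   # a hidden "object"

filtered = {
    "gaussian": ndimage.gaussian_filter(image, sigma=2.0),
    "median":   ndimage.median_filter(image, size=5),
    "uniform":  ndimage.uniform_filter(image, size=5),
}
# Each filter yields a separate processed image from the same input frame.
for name, out in filtered.items():
    print(name, float(out.mean()), float(out.std()))
```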

  18. Decoding the "CoDe": A Framework for Conceptualizing and Designing Help Options in Computer-Based Second Language Listening

    ERIC Educational Resources Information Center

    Cardenas-Claros, Monica Stella; Gruba, Paul A.

    2013-01-01

    This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…

  19. Reeds computer code

    NASA Technical Reports Server (NTRS)

    Bjork, C.

    1981-01-01

    The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.

  20. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
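    As an illustration of the alternating direction implicit (ADI) idea mentioned above, the sketch below applies a Peaceman-Rachford ADI step to a simple 2-D diffusion problem; it is a minimal model problem, not the PROTEUS algorithm, and the grid, time step, and boundary conditions are illustrative assumptions.

```python
# A minimal Peaceman-Rachford ADI sketch for u_t = alpha*(u_xx + u_yy) on a square grid
# with homogeneous Dirichlet boundaries; each half-step solves tridiagonal systems.
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, alpha, dt, h):
    """One ADI step; u includes the (fixed, zero) boundary ring."""
    n = u.shape[0] - 2                       # interior points per direction
    r = alpha * dt / (2.0 * h * h)
    # Tridiagonal matrix for the implicit half-steps: (1+2r) on the diagonal, -r off it
    ab = np.zeros((3, n))
    ab[0, 1:] = -r
    ab[1, :] = 1.0 + 2.0 * r
    ab[2, :-1] = -r

    u_half = u.copy()
    # Half-step 1: implicit in x, explicit in y (one tridiagonal solve per row)
    for j in range(1, n + 1):
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2.0 * u[1:-1, j] + u[1:-1, j + 1])
        u_half[1:-1, j] = solve_banded((1, 1), ab, rhs)
    # Half-step 2: implicit in y, explicit in x (one tridiagonal solve per column)
    u_new = u.copy()
    for i in range(1, n + 1):
        rhs = u_half[i, 1:-1] + r * (u_half[i - 1, 1:-1] - 2.0 * u_half[i, 1:-1] + u_half[i + 1, 1:-1])
        u_new[i, 1:-1] = solve_banded((1, 1), ab, rhs)
    return u_new

# Usage: diffuse an initial hot spot for a few steps.
n, alpha, dt = 64, 1.0, 1.0e-3
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))
u[n // 2, n // 2] = 1.0
for _ in range(10):
    u = adi_step(u, alpha, dt, h)
```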

  1. Moment method analysis of linearly tapered slot antennas: Low loss components for switched beam radiometers

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank

    1992-01-01

    A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.

  2. A Computer Program for Flow-Log Analysis of Single Holes (FLASH)

    USGS Publications Warehouse

    Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.

    2011-01-01

    A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
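    The analytical relation underlying this kind of flow-log analysis can be sketched as a Thiem-type steady radial-flow balance per layer; the snippet below (a Python illustration, not the Excel/VBA FLASH code) backs out one layer's transmissivity and far-field head from ambient and stressed inflows, using hypothetical numbers.

```python
# A minimal sketch of the Thiem-type relation Q = 2*pi*T*(h_far - h_well)/ln(r_outer/r_well)
# applied to one layer under ambient and stressed conditions; values are hypothetical.
import math

def layer_properties(q_ambient, q_stressed, hw_ambient, hw_stressed,
                     r_outer=100.0, r_well=0.1):
    """Return the layer transmissivity T and far-field head h_far given the inflow to
    the borehole from that layer under ambient and stressed (pumped) conditions."""
    geom = math.log(r_outer / r_well) / (2.0 * math.pi)
    # Subtracting the two conditions eliminates the unknown far-field head:
    T = (q_stressed - q_ambient) * geom / (hw_ambient - hw_stressed)
    h_far = hw_ambient + q_ambient * geom / T
    return T, h_far

# Example: ambient inflow 1 m3/d, pumped inflow 20 m3/d, well head drawn down 10.0 m -> 8.0 m.
T, h_far = layer_properties(q_ambient=1.0, q_stressed=20.0,
                            hw_ambient=10.0, hw_stressed=8.0)
print(f"T = {T:.2f} m2/d, far-field head = {h_far:.2f} m")
```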

  3. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  4. Evolvix BEST Names for semantic reproducibility across code2brain interfaces

    PubMed Central

    Scheuer, Katherine S.; Keel, Seth A.; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C.; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G.; Moog, Cecilia L.; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist‐Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda‐Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L.; Freiberg, Erika; Waters, Noah P.; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M.; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2016-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general‐purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long‐term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder‐brains to reader‐brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. PMID:27918836

  5. Turbofan noise generation. Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-01-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  6. Turbofan noise generation. Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-07-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  7. Towards Reproducibility in Computational Hydrology

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, to evolve (or reject) hypotheses and models of how environmental systems function, and to move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even when they are, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al. (2016) [1], we argue that a cultural change is required in the computational hydrology community in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as the example application area, we believe that our conclusions are of value to the wider environmental and geoscience community as far as the use of code and models for scientific advancement is concerned. References: [1] Hutton, C., T. Wagener, J. Freer, D. Han, C. Duffy, and B. Arheimer (2016), Most computational hydrology is not reproducible, so is it really science?, Water Resour. Res., 52, 7548-7555, doi:10.1002/2016WR019285. [2] Ceola, S., et al. (2015), Virtual laboratories: New opportunities for collaborative water science, Hydrol. Earth Syst. Sci. Discuss., 11(12), 13443-13478, doi:10.5194/hessd-11-13443-2014.

  8. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights into potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  9. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
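    The optimization idea can be sketched as steepest descent on a weight objective with an exponential dose-thickness model and a penalty for violating the dose limit; the layer properties, step size, and penalty weight below are illustrative placeholders, and the crude fixed-step update is a sketch of the method class rather than the DOPEX procedure itself.

```python
# A minimal constrained steepest-descent sketch: minimize laminated-shield weight
# subject to an exponential dose-thickness model staying below a dose limit.
import numpy as np

rho = np.array([7.8, 0.93])        # layer densities (illustrative)
lam = np.array([3.0, 12.0])        # dose attenuation lengths per layer (illustrative)
D0, D_limit = 1.0e4, 1.0           # unshielded dose and allowed dose (illustrative)

def dose(t):
    return D0 * np.exp(-np.sum(t / lam))

def penalized_weight(t, mu=1.0e3):
    violation = max(0.0, dose(t) - D_limit)
    return np.dot(rho, t) + mu * violation**2

def grad(f, t, eps=1.0e-6):
    """Forward-difference gradient (good enough for a sketch)."""
    g = np.zeros_like(t)
    for k in range(t.size):
        dt = np.zeros_like(t); dt[k] = eps
        g[k] = (f(t + dt) - f(t)) / eps
    return g

t = np.array([20.0, 20.0])         # initial thicknesses (illustrative)
for _ in range(2000):               # crude fixed-step descent, clipped to non-negative thicknesses
    t = np.clip(t - 0.01 * grad(penalized_weight, t), 0.0, None)
print("thicknesses:", t, "dose:", dose(t), "weight:", np.dot(rho, t))
```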

  10. A Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  11. Turbofan forced mixer-nozzle internal flowfield. Volume 2: Computational fluid dynamic predictions

    NASA Technical Reports Server (NTRS)

    Werle, M. J.; Vasta, V. N.

    1982-01-01

    A general program was conducted to develop and assess a computational method for predicting the flow properties in a turbofan forced mixer duct. The detailed assessment of the resulting computer code is presented. It was found that the code provided excellent predictions of the kinematics of the mixing process throughout the entire length of the mixer nozzle. The thermal mixing process between the hot core and cold fan flows was found to be well represented in the low speed portion of the flowfield.

  12. Assessment of the impact of the change from manual to automated coding on mortality statistics in Australia.

    PubMed

    McKenzie, Kirsten; Walker, Sue; Tong, Shilu

    It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded, in ICD-9 manually, and by using an automated computer coding program. The diseases most affected by the change from manual to ACS were senile/presenile dementia, and pneumonia. The most common disease to which a manually assigned underlying cause of senile dementia was coded with ACS was unspecified psychoses (37.2%). Only 12.5% of codes assigned by ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related to both the different interpretation of ICD coding rules between manual and automated coding, and different co-morbidities or co-existing conditions among demographic groups.

  13. Transonic Navier-Stokes wing solutions using a zonal approach. Part 2: High angle-of-attack simulation

    NASA Technical Reports Server (NTRS)

    Chaderjian, N. M.

    1986-01-01

    A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.

  14. Parallel community climate model: Description and user's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
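    The patch-decomposition strategy described above can be sketched with a few lines of mpi4py: each rank owns a set of latitude rows, performs its local "physics" independently, and joins a global reduction only for diagnostics. The grid sizes and the per-point work below are hypothetical stand-ins, not CCM2 physics.

```python
# A minimal sketch of geographic patch decomposition with message passing (mpi4py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlat, nlon = 64, 128                                       # illustrative global grid
lat_rows = np.array_split(np.arange(nlat), size)[rank]     # this rank's patch of latitude rows

# "Physics" on local points only: column-local work needs no communication.
local_field = np.random.default_rng(rank).random((lat_rows.size, nlon))
local_sum = local_field.sum()

# A global diagnostic requires a reduction across all patches.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print("global mean of field:", global_sum / (nlat * nlon))
```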

  15. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  16. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 1: Theory and validations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.

    1993-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  17. Roles of Ca(v) channels and AHNAK1 in T cells: the beauty and the beast.

    PubMed

    Matza, Didi; Flavell, Richard A

    2009-09-01

    T lymphocytes require Ca2+ entry though the plasma membrane for their activation and function. Recently, several routes for Ca2+ entry through the T-cell plasma membrane after activation have been described. These include calcium release-activated channels (CRAC), transient receptor potential (TRP) channels, and inositol-1,4,5-trisphosphate receptors (IP3Rs). Herein we review the emergence of a fourth new route for Ca2+ entry, composed of Ca(v) channels (also known as L-type voltage-gated calcium channels) and the scaffold protein AHNAK1 (AHNAK/desmoyokin). Both helper (CD4+) and killer (CD8+) T cells express high levels of Ca(v)1 alpha1 subunits (alpha1S, alpha1C, alpha1D, and alpha1F) and AHNAK1 after their differentiation and require these molecules for Ca2+ entry during an immune response. In this article, we describe the observations and open questions that ultimately suggest the involvement of multiple consecutive routes for Ca2+ entry into lymphocytes, one of which may be mediated by Ca(v) channels and AHNAK1.

  18. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3, Part 4.

    DTIC Science & Technology

    1983-09-01

    General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3), the BDM Corporation ... Final Technical Report, February 81 - July 83 ... the t1 and t2 directions on the source patch. 3. METHOD: The electric field at a segment observation point due to the source patch j is given by ...

  19. FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations

    NASA Astrophysics Data System (ADS)

    Ding, Jianmin; Lyczkowski, R. W.; Burge, S. W.

    1993-02-01

    A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B&W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinate computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

  20. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment, were considered. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
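
    The abstract does not give the model's functional form; as a hedged illustration only, execution-time models for message-passing flow solvers of this kind typically combine a per-processor compute term with communication terms. A minimal sketch follows (all names and constants are placeholders, not values fitted to FLO52 or to the iPSC/2 and NCUBE/ten machines):

```python
def run_time(n_grid, p, t_flop, t_msg, t_byte,
             flops_per_point=500, bytes_per_point=16):
    """Toy execution-time model for one iteration of a domain-decomposed
    explicit flow solver: a per-processor compute term plus a halo-exchange
    communication term (2-D block decomposition of a 2-D grid)."""
    compute = flops_per_point * (n_grid / p) * t_flop
    halo_points = 4 * (n_grid / p) ** 0.5      # perimeter of each subdomain
    communicate = 4 * t_msg + halo_points * bytes_per_point * t_byte
    return compute + communicate

# Predicted time per iteration on 16 versus 512 processors (placeholder constants)
for p in (16, 512):
    print(p, run_time(256 * 64, p, t_flop=1e-7, t_msg=5e-4, t_byte=1e-6))
```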

  1. An evaluation of three two-dimensional computational fluid dynamics codes including low Reynolds numbers and transonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Hicks, Raymond M.; Cliff, Susan E.

    1991-01-01

    Full-potential, Euler, and Navier-Stokes computational fluid dynamics (CFD) codes were evaluated for use in analyzing the flow field about airfoil sections operating at Mach numbers from 0.20 to 0.60 and Reynolds numbers from 500,000 to 2,000,000. The potential code (LBAUER) includes weakly coupled integral boundary layer equations for laminar and turbulent flow with simple transition and separation models. The Navier-Stokes code (ARC2D) uses the thin-layer formulation of the Reynolds-averaged equations with an algebraic turbulence model. The Euler code (ISES) includes strongly coupled integral boundary layer equations and advanced transition and separation calculations with the capability to model laminar separation bubbles and limited zones of turbulent separation. The best experiment/CFD correlation was obtained with the Euler code because its boundary layer equations model the physics of the flow better than the other two codes. An unusual reversal of boundary layer separation with increasing angle of attack, following initial shock formation on the upper surface of the airfoil, was found in the experimental data. This phenomenon was not predicted by the CFD codes evaluated.

  2. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes using classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  3. Computational study of duct and pipe flows using the method of pseudocompressibility

    NASA Technical Reports Server (NTRS)

    Williams, Robert W.

    1991-01-01

    A viscous, three-dimensional, incompressible Navier-Stokes computational fluid dynamics code employing pseudocompressibility is used for the prediction of laminar primary and secondary flows in two 90-degree bends of constant cross section. Under study are a square cross section duct bend with 2.3 radius ratio and a round cross section pipe bend with 2.8 radius ratio. Sensitivity of predicted primary and secondary flow to inlet boundary conditions, grid resolution, and code convergence is investigated. Contour plots and plots of velocity versus spanwise coordinate, comparing predicted flow components with experimental data, are shown at several streamwise stations before, within, and after the duct and pipe bends. Discussion includes secondary flow physics, computational method, computational requirements, grid dependence, and convergence rates.
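
    For reference, the pseudocompressibility (artificial compressibility) approach mentioned above replaces the incompressible continuity equation with a pressure equation in pseudo-time; a generic form of the formulation (beta is the artificial compressibility parameter, not a value taken from the code used in this study) is:

```latex
% Generic artificial-compressibility (pseudocompressibility) formulation:
% \tau is pseudo-time, \beta > 0 the artificial compressibility parameter.
\frac{\partial p}{\partial \tau} + \beta\,\nabla\cdot\mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial \tau}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla p + \nu\,\nabla^{2}\mathbf{u}
```

    At pseudo-time convergence the pressure derivative vanishes, recovering the divergence-free constraint on the velocity field.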

  4. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computing (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
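
    As a rough illustration of the direct-summation PP kernel mentioned above (a generic sketch, not the PHoToNs implementation; the cutoff radius, softening length, and unit choices are arbitrary placeholders):

```python
import numpy as np

def pp_accelerations(pos, mass, r_cut=0.1, soft=1e-3, G=1.0):
    """Direct-summation (particle-particle) gravitational accelerations,
    restricted to pairs closer than r_cut -- the very-short-range part of a
    PM/Tree/PP hybrid scheme.  pos: (N, 3) array, mass: (N,) array."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                         # separation vectors r_j - r_i
        r2 = np.sum(d * d, axis=1) + soft**2     # softened squared distances
        near = r2 < r_cut**2
        near[i] = False                          # skip self-interaction
        acc[i] = G * np.sum((mass[near] / r2[near]**1.5)[:, None] * d[near],
                            axis=0)
    return acc

# Example: 100 random, equal-mass particles in a unit box
rng = np.random.default_rng(0)
a = pp_accelerations(rng.random((100, 3)), np.ones(100))
```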

  5. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of the HPF compilers.

  6. Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

    This Quarterly Report deals with the deployment maneuver of a single-axis, vertical constellation with three masses. A new, easy-to-handle computer code that simulates the two-dimensional dynamics of the constellation has been implemented. This computer code is used for designing control laws for the deployment maneuver that minimize the acceleration level of the low-g platform during the maneuver.

  7. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  8. LTCP 2D Graphical User Interface. Application Description and User's Guide

    NASA Technical Reports Server (NTRS)

    Ball, Robert; Navaz, Homayun K.

    1996-01-01

    A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.

  9. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different from those imposed by a single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  10. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially-marching, parabolized Navier-Stokes, finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
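
    The defect wall-function form due to Barnwell is not reproduced here, but wall-function treatments of this kind build on the classical law of the wall, which supplies the wall shear stress analytically instead of resolving the viscous sublayer on the grid:

```latex
% Law of the wall (log-law region), with friction velocity u_\tau = \sqrt{\tau_w/\rho}:
u^{+} \equiv \frac{u}{u_\tau} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad
y^{+} \equiv \frac{\rho\,u_\tau\,y}{\mu},
\qquad \kappa \approx 0.41,\; B \approx 5.0
```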

  11. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal was demonstrated in research performed under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics in San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  12. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
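
    As an illustration of the multigrid idea discussed above, the sketch below shows a generic two-grid correction cycle for a 1-D Poisson problem (damped-Jacobi smoothing, injection restriction, linear-interpolation prolongation); it is not the scheme implemented in Proteus, and all parameters are arbitrary:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Damped Jacobi smoothing for -u'' = f with u[0] = u[-1] = 0."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict residual, coarse solve,
    prolong the correction, post-smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2].copy()                                  # restriction by injection
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # approximate coarse solve
    u += np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # prolongation
    return jacobi(u, f, h)

n = 65                                   # fine grid points (odd, so the coarse grid nests)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)         # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```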

  13. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  14. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  15. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  16. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  17. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  18. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  19. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950

  20. ACDOS2: an improved neutron-induced dose rate code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagache, J.C.

    1981-06-01

    To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown, a computer code, ACDOS2, was written that utilizes up-to-date libraries of cross sections and radioisotope decay data. ACDOS2 is written in ANSI FORTRAN IV in order to make it readily adaptable elsewhere.

  1. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...

  2. Validation of NASA Thermal Ice Protection Computer Codes Part 2 - LEWICE/Thermal

    DOT National Transportation Integrated Search

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal 1 (electrothermal de-icing and anti-icing), and ANTICE 2 (hot gas and el...

  3. Numerical solution of Space Shuttle Orbiter flow field including real gas effects

    NASA Technical Reports Server (NTRS)

    Prabhu, D. K.; Tannehill, J. C.

    1984-01-01

    The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.

  4. Computer Description of Black Hawk Helicopter

    DTIC Science & Technology

    1979-06-01

    A geometric description of the Black Hawk helicopter was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code. The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents the geometric description. (Keywords: combinatorial geometry models, Black Hawk helicopter, GIFT computer code, geometric description of targets.)

  5. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
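
    For context, the conventional USDQ index mapping that 2SDQ builds on is sketched below; the two-step variant additionally switches between two step sizes depending on coefficient magnitude, which is only gestured at here (the threshold and both step sizes are illustrative placeholders, not values from the paper):

```python
import numpy as np

def usdq_index(x, delta):
    """Uniform scalar deadzone quantization: sign-magnitude index with step
    size delta and a deadzone of width 2*delta around zero."""
    return np.sign(x) * np.floor(np.abs(x) / delta)

def usdq_reconstruct(q, delta):
    """Mid-point reconstruction of a deadzone-quantized coefficient."""
    return np.sign(q) * (np.abs(q) + 0.5) * delta * (q != 0)

def two_step_index(x, delta_small, delta_large, threshold):
    """Illustrative two-step variant: small (dense) coefficients use a finer
    step, large ones a coarser step."""
    return np.where(np.abs(x) < threshold,
                    usdq_index(x, delta_small),
                    usdq_index(x, delta_large))

coeffs = np.array([-7.3, -0.2, 0.0, 1.4, 12.9])
print(usdq_index(coeffs, delta=1.0))
print(two_step_index(coeffs, delta_small=0.5, delta_large=2.0, threshold=4.0))
```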

  6. ITER Simulations Using the PEDESTAL Module in the PTRANSP Code

    NASA Astrophysics Data System (ADS)

    Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.

    2006-10-01

    PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that the fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A.H. Kritz, Plasma Physics and Controlled Fusion, 45, 1939 (2003). [2] T. Onjun, G. Bateman, A.H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).

  7. Study of SOL in DIII-D tokamak with SOLPS suite of codes.

    NASA Astrophysics Data System (ADS)

    Pankin, Alexei; Bateman, Glenn; Brennan, Dylan; Coster, David; Hogan, John; Kritz, Arnold; Kukushkin, Andrey; Schnack, Dalton; Snyder, Phil

    2005-10-01

    The scrape-off layer (SOL) region in the DIII-D tokamak is studied with the SOLPS integrated suite of codes. The SOLPS package includes the 3D multi-species Monte Carlo neutral code EIRENE and the 2D multi-fluid code B2. The EIRENE and B2 codes are cross-coupled through the B2-EIRENE interface. The results of SOLPS simulations are used in the integrated modeling of the plasma edge in the DIII-D tokamak with the ASTRA transport code. Parameterized dependences for neutral particle fluxes that are computed with the SOLPS code are implemented in a model for the H-mode pedestal and ELMs [1] in the ASTRA code. The effects of neutrals on the H-mode pedestal and ELMs are studied in this report. [1] A. Y. Pankin, I. Voitsekhovitch, G. Bateman, et al., Plasma Phys. Control. Fusion 47, 483 (2005).

  8. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
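
    As a toy version of approach (a) above, symbolic differentiation with SymPy followed by generation of a plain numerical function, the sketch below uses a much simpler collective variable (the distance between two atoms) than the radius-of-curvature example in the paper:

```python
import sympy as sp

# Collective variable: distance between two atoms at (x1,y1,z1) and (x2,y2,z2)
x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2', real=True)
cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

coords = (x1, y1, z1, x2, y2, z2)
grads = [sp.simplify(sp.diff(cv, c)) for c in coords]   # analytical derivatives

# Generate a numerical function returning the CV value and its six derivatives
cv_and_grad = sp.lambdify(coords, [cv] + grads, 'math')

print(cv_and_grad(0.0, 0.0, 0.0, 1.0, 2.0, 2.0))
# -> [3.0, -0.333..., -0.667..., -0.667..., 0.333..., 0.667..., 0.667...]
```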

  9. Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacon, L.; Cappello, S.

    2008-11-01

    Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite volume code PIXIE3D [1,2], which is characterized by many attractive features, notably the generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility to include in the computation the energy transport equation and the extended MHD version of Ohm's law. In addition, the parallel version of the code features excellent scalability properties. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially, and spectral formulation in the other coordinates [3]. Both single and multi-mode simulations are benchmarked, regarding both reversed field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) & references therein.

  10. Computations of the Magnus effect for slender bodies in supersonic flow

    NASA Technical Reports Server (NTRS)

    Sturek, W. B.; Schiff, L. B.

    1980-01-01

    A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.

  11. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
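
    A minimal, serial sketch of the leap-frog particle push described above (1-D, periodic, electrostatic only, with the field gathered to particles by linear interpolation) is shown below; it is generic PIC bookkeeping, not the GCPIC decomposition used on the hypercube, and a full cycle would also scatter charge to the grid and re-solve Poisson's equation each step:

```python
import numpy as np

def leapfrog_push(x, v, E_grid, dx, dt, qm, L):
    """Advance particle positions/velocities one leap-frog step; velocities
    are staggered half a step behind positions.  E_grid: field at grid nodes."""
    g = x / dx
    i = np.floor(g).astype(int)
    w = g - i
    nx = len(E_grid)
    E_part = (1.0 - w) * E_grid[i % nx] + w * E_grid[(i + 1) % nx]  # gather
    v_new = v + qm * E_part * dt          # kick
    x_new = (x + v_new * dt) % L          # drift with periodic boundaries
    return x_new, v_new

# Example: 10,000 electrons in a fixed sinusoidal field on a 64-cell grid
L, nx = 1.0, 64
dx = L / nx
E_grid = 0.01 * np.sin(2 * np.pi * np.arange(nx) / nx)
rng = np.random.default_rng(1)
x, v = rng.random(10_000) * L, np.zeros(10_000)
for _ in range(100):
    x, v = leapfrog_push(x, v, E_grid, dx, dt=0.05, qm=-1.0, L=L)
```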

  12. Numerical simulation of turbulent jet noise, part 2

    NASA Technical Reports Server (NTRS)

    Metcalfe, R. W.; Orszag, S. A.

    1976-01-01

    Results on the numerical simulation of jet flow fields were used to study the radiated sound field, and in addition, to extend and test the capabilities of the turbulent jet simulation codes. The principal result of the investigation was the computation of the radiated sound field from a turbulent jet. In addition, the computer codes were extended to account for the effects of compressibility and eddy viscosity, and the treatment of the nonlinear terms of the Navier-Stokes equations was modified so that they can be computed in a semi-implicit way. A summary of the flow model and a description of the numerical methods used for its solution are presented. Calculations of the radiated sound field are reported. In addition, the extensions that were made to the fundamental dynamical codes are described. Finally, the current state-of-the-art for computer simulation of turbulent jet noise is summarized.

  13. Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor

    NASA Technical Reports Server (NTRS)

    Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.

    1993-01-01

    A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.

  14. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  15. Apocalypse Soon? The Bug.

    ERIC Educational Resources Information Center

    Clyde, Anne

    1999-01-01

    Discussion of the Year 2000 (Y2K) problem, the computer-code problem that affects computer programs or computer chips, focuses on the impact on teacher-librarians. Topics include automated library systems, access to online information services, library computers and software, and other electronic equipment such as photocopiers and fax machines.…

  16. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
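
    Steps (3) through (6) above amount to sampling the input-parameter distributions and propagating each sample through the deterministic model; a generic sketch of that pattern follows (the distributions and the toy dose model are placeholders, not RESRAD parameters or pathways):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Assign distributions to uncertain inputs (placeholder choices)
soil_conc   = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n_samples)    # Bq/g
intake_rate = rng.triangular(left=50, mode=100, right=200, size=n_samples)  # g/yr
dose_factor = rng.uniform(1e-4, 3e-4, size=n_samples)                       # mSv/Bq

# Propagate each sampled parameter set through the (here trivial) deterministic model
dose = soil_conc * intake_rate * dose_factor     # mSv/yr per realization

# Summarize the resulting dose distribution
print("mean dose      :", dose.mean())
print("95th percentile:", np.percentile(dose, 95))
```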

  17. Computer program BL2D for solving two-dimensional and axisymmetric boundary layers

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit

    1995-01-01

    This report presents the formulation, validation, and user's manual for the computer program BL2D. The program is a fourth-order-accurate solution scheme for solving two-dimensional or axisymmetric boundary layers in speed regimes that range from low subsonic to hypersonic Mach numbers. A basic implementation of the transition zone and turbulence modeling is also included. The code is a result of many improvements made to the program VGBLP, which is described in NASA TM-83207 (February 1982), and can effectively supersede it. The code BL2D is designed to be modular, user-friendly, and portable to any machine with a standard fortran77 compiler. The report contains the new formulation adopted and the details of its implementation. Five validation cases are presented. A detailed user's manual with the input format description and instructions for running the code is included. Adequate information is presented in the report to enable the user to modify or customize the code for specific applications.

  18. Sensitivity analysis of the Gupta and Park chemical models on the heat flux by DSMC and CFD codes

    NASA Astrophysics Data System (ADS)

    Morsa, Luigi; Festa, Giandomenico; Zuppardi, Gennaro

    2012-11-01

    The present study is the logical continuation of a former paper by the first author in which the influence of the chemical models by Gupta and by Park on the computation of heat flux on the Orion and EXPERT capsules was evaluated. Tests were carried out by the direct simulation Monte Carlo code DS2V and by the computational fluid dynamics (CFD) code H3NS. DS2V implements the Gupta model, while H3NS implements the Park model. In order to compare the effects of the chemical models, the Park model was also implemented in DS2V. The results showed that DS2V and H3NS compute a different composition both in the flow field and on the surface, even when using the same chemical model (Park). Furthermore, DS2V computes, by the two chemical models, different compositions in the flow field but the same composition on the surface, and therefore the same heat flux. In the present study, in order to evaluate the influence of these chemical models also in a CFD code, the Gupta and Park models have been implemented in FLUENT. Tests by DS2V and by FLUENT have been carried out for the EXPERT capsule at an altitude of 70 km and a velocity of 5000 m/s. The capsule experiences a hypersonic, continuum, low-density regime. Because of the energy level of the flow, the vibration equation, lacking in the original version of FLUENT, has been implemented. The results of the heat flux computation verify that FLUENT is quite sensitive to the Gupta and Park chemical models. In fact, at the stagnation point, the percentage difference between the models is about 13%. By contrast, the DS2V results obtained with the two models are practically equivalent.

  19. High altitude chemically reacting gas particle mixtures. Volume 2: Program manual for RAMP2. [rocket nozzle and orbital plume flow fields

    NASA Technical Reports Server (NTRS)

    Smith, S. D.

    1984-01-01

    All of the elements used in the Reacting and Multi-Phase (RAMP2) computer code are described in detail. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields.

  20. Code White: A Signed Code Protection Mechanism for Smartphones

    DTIC Science & Technology

    2010-09-01

    ... analogous to computer security is the use of antivirus (AV) software. AV software is a brute-force approach to security. ... numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not ...

  1. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  2. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.

  3. User's manual for semi-circular compact range reflector code

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1986-01-01

    A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability and to serve as sample input/output sets.

  4. Efficient full wave code for the coupling of large multirow multijunction LH grills

    NASA Astrophysics Data System (ADS)

    Preinhaelter, Josef; Hillairet, Julien; Milanesio, Daniele; Maggiora, Riccardo; Urban, Jakub; Vahala, Linda; Vahala, George

    2017-11-01

    The full wave code OLGA, for determining the coupling of a single-row lower hybrid launcher (waveguide grill) to the plasma, is extended to handle multirow multijunction active-passive structures (like the C3 and C4 launchers on TORE SUPRA) by implementing the scattering matrix formalism. The extended code is still computationally fast because of the use of (i) 2D splines of the plasma surface admittance in the accessibility region of the k-space, (ii) high order Gaussian quadrature rules for the integration of the coupling elements and (iii) the symmetries of the coupling elements in the multiperiodic structures. The extended OLGA code is benchmarked against the ALOHA-1D, ALOHA-2D and TOPLHA codes for the coupling of the C3 and C4 TORE SUPRA launchers for several plasma configurations derived from reflectometry and interferometry. Unlike nearly all codes (except the ALOHA-1D code), OLGA does not require large computational resources and can be used for everyday usage in planning experimental runs. In particular, it is shown that the OLGA code correctly handles the coupling of the C3 and C4 launchers over a very wide range of plasma densities in front of the grill.

  5. ADPAC v1.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.

    1999-01-01

    The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes -Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block and discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.

  6. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
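
    The generalized Reed-Solomon construction used in the paper is beyond a short sketch, but the basic coded-model idea, running the computation on encoded data and then decoding so that a bounded number of faulty outputs is tolerated, can be illustrated with a simple repetition code; the sketch below is a toy stand-in, not the authors' construction:

```python
import numpy as np

def encode(x, copies=5):
    """'Encode' by replication: one copy of the input per (possibly faulty) processor."""
    return [x.copy() for _ in range(copies)]

def faulty_linear_map(A, x, fault_prob, rng):
    """Compute A @ x, but with probability fault_prob corrupt one output entry."""
    y = A @ x
    if rng.random() < fault_prob:
        y[rng.integers(len(y))] += rng.normal(scale=10.0)
    return y

def decode(outputs):
    """Decode by taking the entrywise median over the replicas."""
    return np.median(np.stack(outputs), axis=0)

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4))
x = rng.normal(size=4)
outputs = [faulty_linear_map(A, xi, fault_prob=0.3, rng=rng) for xi in encode(x)]
print("decoded:", decode(outputs))
print("true   :", A @ x)
```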

  7. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
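
    In generic notation (not copied from the report), the sensitivity coefficients described above are the partial derivatives of each dependent variable with respect to an initial value or rate-coefficient parameter, and they satisfy a linear system obtained by differentiating the kinetics equations:

```latex
% Kinetics system dy/dt = f(y;\eta), where \eta_j is an initial value y_j(0)
% or a rate-coefficient parameter; first-order sensitivity coefficients:
S_{ij}(t) = \frac{\partial y_i(t)}{\partial \eta_j},
\qquad
\frac{dS_{ij}}{dt}
  = \sum_k \frac{\partial f_i}{\partial y_k}\,S_{kj}
  + \frac{\partial f_i}{\partial \eta_j}
```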

  8. Analysis of steam generator loss-of-feedwater experiments with APROS and RELAP5/MOD3.1 computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virtanen, E.; Haapalehto, T.; Kouhia, J.

    1995-09-01

    Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary-side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate well the behaviour of the primary side of the steam generator. On the secondary side both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than were measured in the experiments.

  9. Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2000-01-01

    This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement relates to including the probabilistic evaluation of the D-Matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the period June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered with this report. The activities performed were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.

  10. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.

  11. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.

  12. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development of an interaction data base and of a numerical solution to the transport of baryons through arbitrary shield materials, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary with even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).

  13. Joint Services Electronics Program Annual Progress Report.

    DTIC Science & Technology

    1985-11-01

    Experiments with (one-symbol-memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding. Information systems topics covered include Real Time Statistical Data Processing (T. Kailath) and Data Compression for Computer Data Structures (J. Gill).

  14. COM-GEOM Interactive Display Debugger (CIDD)

    DTIC Science & Technology

    1984-08-01

    Keywords: target description; GIFT; interactive computer graphics; solid geometry; combinatorial geometry; COM-GEOM. The program was written to speed up the process of formulating the COM-GEOM data used by the Geometric Information for Targets (GIFT) computer code. Reference: Lawrence W. Bain, Mathew J. Reisinger, "The GIFT Code User Manual; Volume I, Introduction and Input Requirements (U)," BRL Report No. 1802.

  15. Review of numerical models to predict cooling tower performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, B.M.; Nomura, K.K.; Bartz, J.A.

    1987-01-01

    Four state-of-the-art computer models developed to predict the thermal performance of evaporative cooling towers are summarized. The formulation of these models, STAR and TEFERI (developed in Europe) and FACTS and VERA2D (developed in the U.S.), is summarized. A fifth code, based on Merkel analysis, is also discussed. Principal features of the codes, computation time and storage requirements are described. A discussion of model validation is also provided.

  16. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    PubMed Central

    Murdani, Muhammad Harist; Hong, Bonghee

    2018-01-01

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
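
    A minimal sketch of the weighted-sum idea is given below: the proximity of two ZIP codes is a weighted combination of a normalized centroid distance and a term that rewards pairs connected by many shared road segments, and the same score can rank Top-K neighbors. The weight, normalizations, and example data are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative weighted-sum proximity between ZIP codes, combining a normalized
# centroid distance with shared road-network information.  The weight w, the
# normalization constants, and the example data are assumptions for illustration.
from math import hypot

def combined_distance(centroid_a, centroid_b, shared_roads,
                      max_dist_km=50.0, max_roads=20, w=0.7):
    """Smaller value = closer.  centroid_* are (x, y) in km; shared_roads is the
    number of road segments crossing both ZIP code areas."""
    d = hypot(centroid_a[0] - centroid_b[0], centroid_a[1] - centroid_b[1])
    d_norm = min(d / max_dist_km, 1.0)                     # centroid term in [0, 1]
    road_term = 1.0 - min(shared_roads / max_roads, 1.0)   # more shared roads -> closer
    return w * d_norm + (1.0 - w) * road_term

# Ad-Hoc proximity between two ZIP codes:
print(combined_distance((0.0, 0.0), (3.0, 4.0), shared_roads=5))

# Top-K proximity: rank candidate neighbors of a query ZIP code located at (0, 0).
candidates = {"12345": ((3.0, 4.0), 5), "12346": ((1.0, 1.0), 0), "12347": ((10.0, 2.0), 12)}
top_2 = sorted(candidates, key=lambda z: combined_distance((0.0, 0.0), *candidates[z]))[:2]
print(top_2)
```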

  17. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    PubMed

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.

  18. Navier-Stokes analysis of cold scramjet-afterbody flows

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.

    1989-01-01

    The progress of two efforts in coding solutions of Navier-Stokes equations is summarized. The first effort concerns a 3-D space marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, and computing the flow inside the nozzle for both a N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high order numerical integration schemes such as the 4th order MacCormack, and the Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.

  19. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
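
    The selection criteria mentioned above can be illustrated on the underlying two-dimensional signal set alone. The brute-force sketch below computes the minimum squared Euclidean distance and the average nearest-neighbor count for a conventional 16-QAM constellation; it does not reproduce the paper's trellis-code search.

```python
# Minimum squared Euclidean distance and average nearest-neighbor count for a
# plain 16-QAM signal set (illustrates the selection metrics only; the paper's
# search is over trellis codes built on such sets).
import itertools

levels = [-3, -1, 1, 3]
points = [complex(i, q) for i in levels for q in levels]   # 16-QAM, spacing 2

d2_min = min(abs(a - b) ** 2 for a, b in itertools.combinations(points, 2))
neighbors = [sum(1 for q in points if q != p and abs(p - q) ** 2 == d2_min)
             for p in points]
print(f"minimum squared distance: {d2_min}")                         # 4.0
print(f"average nearest neighbors: {sum(neighbors) / len(points)}")  # 3.0
```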

  20. A CFD/CSD Interaction Methodology for Aircraft Wings

    NASA Technical Reports Server (NTRS)

    Bhardwaj, Manoj K.

    1997-01-01

    With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).

  1. HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization

    NASA Astrophysics Data System (ADS)

    LAVIE, Baptiste

    2015-12-01

    Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam's razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test for our code we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).

  2. Evaluation of three coding schemes designed for improved data communication

    NASA Technical Reports Server (NTRS)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.

  3. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  4. Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the User's Guide, and describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.

  5. "Hour of Code": Can It Change Students' Attitudes toward Programming?

    ERIC Educational Resources Information Center

    Du, Jie; Wimmer, Hayden; Rada, Roy

    2016-01-01

    The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…

  6. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology. However, recent gains come largely from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.

  7. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 2: Code description

    NASA Technical Reports Server (NTRS)

    Marconi, F.; Yaeger, L.

    1976-01-01

    A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  8. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  9. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  10. 26 CFR 1.441-1 - Period for computation of taxable income.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...

  11. Influence of temperature fluctuations on infrared limb radiance: a new simulation code

    NASA Astrophysics Data System (ADS)

    Rialland, Valérie; Chervet, Patrick

    2006-08-01

    Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performance is limited by the inhomogeneous background in the sensor field of view, which strongly impacts target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density, or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line of sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines of sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of the small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
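
    The stated working principle, weighting a stochastic three-dimensional temperature-fluctuation grid by per-range transfer functions and summing along each line of sight, can be sketched as follows. The grid sizes, fluctuation statistics, and the decaying transfer-function profile are placeholders, not SAMM-2 output or the BRUTE3D implementation.

```python
# Sketch of synthesizing a radiance-fluctuation image from a 3-D grid of
# temperature fluctuations and per-line-of-sight transfer functions, in the
# spirit of the approach described above.  Grid sizes, statistics, and the
# transfer-function profile are placeholder assumptions, not SAMM-2 output.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 64, 48, 200                    # image pixels (nx, ny), range samples nz
dT = rng.normal(0.0, 1.0, (nx, ny, nz))     # normalized temperature fluctuations

s = np.arange(nz)
transfer = np.exp(-s / 80.0)                # radiance response per unit dT at each range gate

# Radiance fluctuation image: weighted sum along each line of sight.
image = np.tensordot(dT, transfer, axes=([2], [0]))
print(image.shape, float(image.std()))
```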

  12. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  13. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  14. co2amp: A software program for modeling the dynamics of ultrashort pulses in optical systems with CO2 amplifiers

    DOE PAGES

    Polyanskiy, Mikhail N.

    2015-01-01

    We describe a computer code for simulating the amplification of ultrashort mid-infrared laser pulses in CO2 amplifiers and their propagation through arbitrary optical systems. This code is based on a comprehensive model that includes an accurate consideration of the CO2 active medium and a physical optics propagation algorithm, and takes into account the interaction of the laser pulse with the material of the optical elements. Finally, the application of the code for optimizing an isotopic regenerative amplifier is described.

  15. CFD Simulation on the J-2X Engine Exhaust in the Center-Body Diffuser and Spray Chamber at the B-2 Facility

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Wey, Thomas; Buehrle, Robert

    2009-01-01

    A computational fluid dynamic (CFD) code is used to simulate the J-2X engine exhaust in the center-body diffuser and spray chamber at the Spacecraft Propulsion Facility (B-2). The CFD code is the space-time conservation element and solution element (CESE) Euler solver, which is very robust at shock capturing. The CESE results are compared with independent analysis results obtained by using the National Combustion Code (NCC) and show excellent agreement.

  16. Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan

    2013-01-01

    Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…

  17. Computational fluid dynamics analysis of space shuttle main propulsion feed line 17-inch disconnect valves

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Pearce, Daniel

    1989-01-01

    A steady incompressible three-dimensional (3-D) viscous flow analysis was conducted for the Space Shuttle Main Propulsion External Tank (ET)/Orbiter (ORB) propellant feed line quick separable 17-inch disconnect flapper valves for liquid oxygen (LO2) and liquid hydrogen (LH2). The main objectives of the analysis were to predict and correlate the hydrodynamic stability of the flappers and pressure drop with available water test data. Computational Fluid Dynamics (CFD) computer codes were procured at no cost from the public domain, and were modified and extended to carry out the disconnect flow analysis. The grid generator codes SVTGD3D and INGRID were obtained. NASA Ames Research Center supplied the flow solution code INS3D, and the color graphics code PLOT3D. A driver routine was developed to automate the grid generation process. Components such as pipes, elbows, and flappers can be generated with simple commands, and flapper angles can be varied easily. The flow solver INS3D code was modified to treat interior flappers, and other interfacing routines were developed, which include a turbulence model, a force/moment routine, a time-step routine, and initial and boundary conditions. In particular, an under-relaxation scheme was implemented to enhance the solution stability. Major physical assumptions and simplifications made in the analysis include the neglect of linkages, slightly reduced flapper diameter, and smooth solid surfaces. A grid size of 54 x 21 x 25 was employed for both the LO2 and LH2 units. Mixing length theory applied to turbulent shear flow in pipes formed the basis for the simple turbulence model. Results of the analysis are presented for LO2 and LH2 disconnects.

  18. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
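
    Of the communication strategies listed, the manager-worker pattern is the simplest to sketch: a manager hands out blocks of the domain to worker processes and gathers partial results. The sketch below uses Python's multiprocessing and a trivial per-block task purely as an illustration of the pattern; it is not the TEAM/PVM implementation.

```python
# Minimal manager-worker sketch (illustration of the pattern only, not the
# TEAM/PVM solver): the manager distributes grid blocks to worker processes
# and gathers the per-block results.
from multiprocessing import Pool

def solve_block(block):
    """Stand-in for the per-block flow computation; returns (block id, 'residual')."""
    block_id, cells = block
    residual = sum(c * c for c in cells) ** 0.5
    return block_id, residual

if __name__ == "__main__":
    blocks = [(i, list(range(i, i + 100))) for i in range(8)]   # fake grid blocks
    with Pool(processes=4) as pool:                             # the workers
        results = pool.map(solve_block, blocks)                 # manager hands out work
    print(dict(results))
```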

  19. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  20. Guidelines for developing vectorizable computer programs

    NASA Technical Reports Server (NTRS)

    Miner, E. W.

    1982-01-01

    Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
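
    A central principle is to write inner loops as independent, element-wise operations with no loop-carried dependence, so that the compiler (or an array library) can map them to vector hardware. The sketch below expresses the idea in NumPy terms rather than FORTRAN; the loop bound and data are arbitrary.

```python
# Illustration of the vectorization principle in NumPy terms (the report's
# context is FORTRAN on array processors): replace an element-by-element loop
# with an equivalent whole-array expression that has no loop-carried dependence.
import numpy as np

n = 100_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar-style loop, one element at a time.
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = 2.0 * a[i] + b[i]

# Vectorizable form: the same computation as a single array operation.
c_vec = 2.0 * a + b

assert np.allclose(c_loop, c_vec)
```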

  1. User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1999-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases due to its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes which have been generated in the NASA Glenn Icing Research Tunnel (IRT) 1. This report will only describe the features of the code related to the use of the program. The report will not describe the inner workings of the code or the physical models used. This information is available in the form of several unpublished documents which will be collectively referred to as a Programmers Manual for LEWICE 2 in this report. These reports are intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  2. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high or low frequency model, and displaying the results.

  3. Multi-dimensional computer simulation of MHD combustor hydrodynamics

    NASA Astrophysics Data System (ADS)

    Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.

    1991-04-01

    Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.

  4. Enhanced fault-tolerant quantum computing in d-level systems.

    PubMed

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  5. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
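
    The multigrid concept behind the study can be sketched with a textbook two-grid correction cycle for a 1-D Poisson problem: smooth on the fine grid, restrict the residual, solve the coarse error equation, prolongate the correction, and smooth again. The sketch below is generic and is not the Proteus implementation.

```python
# Generic two-grid correction sketch for -u'' = f on [0,1], u(0)=u(1)=0
# (textbook illustration of the multigrid idea, not the Proteus implementation).
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing of the interior unknowns."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(rc, hc):
    """Direct tridiagonal solve of the coarse-grid error equation."""
    m = len(rc) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                                    # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)                       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = coarse_solve(rc, 2.0 * h)
    e = np.zeros_like(u)                                   # linear-interpolation prolongation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])
    return smooth(u + e, f, h)                             # post-smoothing

n, h = 129, 1.0 / 128
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                         # exact solution: sin(pi x)
u = np.zeros(n)
for cycle in range(8):
    u = two_grid_cycle(u, f, h)
    print(cycle, np.abs(residual(u, f, h)).max())          # residual drops each cycle
```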

  6. Nonuniform code concatenation for universal fault-tolerant quantum computing

    NASA Astrophysics Data System (ADS)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  7. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  8. Heat transfer, thermal stress analysis and the dynamic behaviour of high power RF structures. [MARC and SUPERFISH codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, J.; Labrie, J.P.

    1983-08-01

    A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.

  9. Improvements to Busquet's Non LTE algorithm in NRL's Hydro code

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Colombant, D.

    1996-11-01

    Implementation of the Non-LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D Hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and did not always converge. We describe here modifications that address these two shortcomings. This method is quicker and more stable than the original. It also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables - a basis for the computation of radiation effects in the ionization balance in RADIOM - has a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow one to check that the obtained average charge state is the true average. In addition, code optimization resulted in greatly reduced computing time: the ratio of Non-LTE to LTE computing times is now between 1.5 and 2.

  10. Evolvix BEST Names for semantic reproducibility across code2brain interfaces.

    PubMed

    Loewe, Laurence; Scheuer, Katherine S; Keel, Seth A; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G; Moog, Cecilia L; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist-Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda-Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L; Freiberg, Erika; Waters, Noah P; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2017-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general-purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long-term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder-brains to reader-brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. © 2016 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  11. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved to degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
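
    A rough feel for such a computation is given by the sketch below: the probability that an inner block decodes correctly is a binomial tail in the raw bit-error rate, and each failed inner block is treated as one symbol error for the outer code. The block lengths, correction capabilities, and the neglect of erasures and interleaving details are illustrative assumptions, not the report's exact procedure.

```python
# Rough sketch of estimating the probability of correct decoding for a
# concatenated (inner + interleaved outer) scheme.  The parameters and the
# simplifications (inner decoder corrects up to t1 bit errors, every inner
# failure becomes one outer symbol error, erasures ignored) are illustrative
# assumptions, not the report's exact procedure.
from math import comb

def prob_at_most(n, t, p):
    """P(at most t errors among n positions, each independently wrong with prob. p)."""
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(t + 1))

p_bit = 1e-2            # raw channel bit-error rate
n1, t1 = 63, 2          # inner block length (bits) and correction capability (assumed)
n2, t2 = 255, 16        # outer code length (symbols) and correction capability (assumed)

p_inner_fail = 1.0 - prob_at_most(n1, t1, p_bit)    # an inner block is not corrected
p_correct = prob_at_most(n2, t2, p_inner_fail)      # outer code fixes residual symbol errors
print(f"inner-block failure probability: {p_inner_fail:.3e}")
print(f"approximate probability of correct decoding: {p_correct:.6f}")
```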

  12. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler, and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
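
    The HLL Riemann solver mentioned above combines left and right states and fluxes with two wave-speed estimates into a single interface flux. A minimal sketch for the 1-D Euler equations is shown below; the MUSCL-Hancock reconstruction, the Hall term, and the GPU kernel are omitted, and the wave-speed estimates are simple assumptions.

```python
# Minimal HLL flux sketch for the 1-D Euler equations (illustrates the Riemann
# solver ingredient only; the MUSCL-Hancock reconstruction, Hall term, and GPU
# kernel are not shown, and the wave-speed estimates are simple assumptions).
import numpy as np

GAMMA = 5.0 / 3.0

def euler_flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL numerical flux between left/right conserved states [rho, rho*u, E]."""
    rhoL, uL = UL[0], UL[1] / UL[0]
    rhoR, uR = UR[0], UR[1] / UR[0]
    pL = (GAMMA - 1.0) * (UL[2] - 0.5 * rhoL * uL * uL)
    pR = (GAMMA - 1.0) * (UR[2] - 0.5 * rhoR * uR * uR)
    cL, cR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
    SL = min(uL - cL, uR - cR)              # left/right wave-speed estimates
    SR = max(uL + cL, uR + cR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Sod-like interface states: [rho, rho*u, E], with E = p/(gamma - 1) + 0.5*rho*u^2
UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
print(hll_flux(UL, UR))
```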

  14. STGSTK: A computer code for predicting multistage axial flow compressor performance by a meanline stage stacking method

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1982-01-01

    A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.
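
    The stage-stacking idea can be sketched as a march through the machine, applying each stage's (pressure ratio, efficiency) characteristic and passing the outlet state to the next stage. The fixed characteristics and inlet conditions below are placeholders; STGSTK derives stage characteristics from meanline velocity diagrams and adjusts them for off-design conditions.

```python
# Toy stage-stacking sketch: march through the compressor, applying each
# stage's (pressure ratio, isentropic efficiency) and passing the outlet state
# to the next stage.  The fixed stage characteristics and inlet conditions are
# placeholders; STGSTK derives them from meanline velocity diagrams and
# off-design correlations.
GAMMA = 1.4

def stack_stages(stages, T_in=288.15, P_in=101325.0):
    T, P = T_in, P_in
    for pr, eta in stages:
        dT_ideal = T * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)
        T += dT_ideal / eta              # actual stage temperature rise
        P *= pr
    return T, P, P / P_in

stages = [(1.35, 0.88), (1.32, 0.87), (1.30, 0.86), (1.28, 0.85)]
T_out, P_out, overall_pr = stack_stages(stages)
print(f"overall pressure ratio: {overall_pr:.2f}, exit temperature: {T_out:.1f} K")
```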

  15. Higher order turbulence closure models

    NASA Technical Reports Server (NTRS)

    Amano, Ryoichi S.; Chai, John C.; Chen, Jau-Der

    1988-01-01

    Theoretical models are developed and numerical studies conducted on various types of flows, including both elliptic and parabolic. The purpose of this study is to find better higher order closure models for the computation of complex flows. This report summarizes three new achievements: (1) completion of the Reynolds-stress closure by developing a new pressure-strain correlation; (2) development of a parabolic code to compute jets and wakes; and (3) application to a flow through a 180 deg turnaround duct by adopting a boundary fitted coordinate system. In the above-mentioned models, near-wall models are developed for the pressure-strain correlation and third-moment, and incorporated into the transport equations. This addition improved the results considerably and is recommended for future computations. A new parabolic code to solve shear flows without coordinate transformations is developed and incorporated in this study. This code uses the structure of the finite volume method to solve the governing equations implicitly. The code was validated with the experimental results available in the literature.

  16. VORCOR: A computer program for calculating characteristics of wings with edge vortex separation by using a vortex-filament and-core model

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Mehrotra, S. C.; Lan, C. E.

    1982-01-01

    A computer code based on an improved vortex filament/vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading edge vortex flaps at subsonic speeds. The prediction of lifting pressure distribution and the computing time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) arbitrary camber shape may be defined and an option for exactly defining leading edge flap geometry is also provided; (2) the side edge vortex system is incorporated.

  17. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  18. Validation of High-Speed Turbulent Boundary Layer and Shock-Boundary Layer Interaction Computations with the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.

    2006-01-01

    The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions to experimental laser doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.

  19. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, W.N.

    1998-03-01

    LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

  20. Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project

    ERIC Educational Resources Information Center

    Bolstad, Rachel

    2016-01-01

    This report evaluates a game coding workshop offered to young people and adults in seven public libraries around New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…

  1. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
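
    The decision logic described above can be sketched in a few lines. The Python fragment below is only an illustration of that if/else structure; the point representation, the constraint test, and the command mnemonics are hypothetical stand-ins, not the patented implementation.

      def generate_stitch_code(points, constraints):
          # points      : ordered list of (x, y) stitching locations
          # constraints : set of index pairs (i, i + 1) where the head must change
          #               a condition (e.g., direction) before the next point
          program = []
          for i in range(len(points) - 1):
              nxt = points[i + 1]
              if (i, i + 1) in constraints:
                  # Constraint found: emit code that changes the stitching-head condition.
                  program.append("TURN_HEAD X%.3f Y%.3f" % nxt)
              else:
                  # No constraint: emit code that makes a stitch at the next point.
                  program.append("STITCH X%.3f Y%.3f" % nxt)
          return program

      # Example: three points with a constraint between the second and third.
      print("\n".join(generate_stitch_code([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)], {(1, 2)})))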

  2. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(1) droplet sizes in these reactors.

  3. Development of structured ICD-10 and its application to computer-assisted ICD coding.

    PubMed

    Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko

    2010-01-01

    This paper presents: (1) a framework for the formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using the formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model used to describe the formal representation was refined repeatedly. The resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.
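
    As a rough illustration of how a structured representation can drive code assignment, the toy sketch below stores two categories as attribute-value pairs (standing in for semantic links) and assigns a code when all links match. The link names, the matcher, and the fallback behaviour are our own illustrative assumptions, not the S-ICD10 model or its 74 link types.

      # Illustrative sketch only, not the S-ICD10 information model.
      structured_categories = {
          "J15.1": {"disease": "pneumonia", "causative_agent": "Pseudomonas"},
          "J15.2": {"disease": "pneumonia", "causative_agent": "Staphylococcus"},
      }

      def assign_code(structured_name):
          # Return the ICD code whose semantic links all match the structured disease name.
          for code, links in structured_categories.items():
              if all(structured_name.get(k) == v for k, v in links.items()):
                  return code
          return None  # no category matched; a real system would fall back to manual coding

      print(assign_code({"disease": "pneumonia", "causative_agent": "Pseudomonas"}))  # J15.1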

  4. Cloud Fingerprinting: Using Clock Skews To Determine Co Location Of Virtual Machines

    DTIC Science & Technology

    2016-09-01

    Cloud computing has quickly revolutionized the computing practices of organizations, to include the Department of Defense. However, security concerns...

  5. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  6. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  7. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPs or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  8. Numerical, analytical, experimental study of fluid dynamic forces in seals

    NASA Technical Reports Server (NTRS)

    Shapiro, William; Artiles, Antonio; Aggarwal, Bharat; Walowit, Jed; Athavale, Mahesh M.; Preskwas, Andrzej J.

    1992-01-01

    NASA/Lewis Research Center is sponsoring a program for providing computer codes for analyzing and designing turbomachinery seals for future aerospace and engine systems. The program is made up of three principal components: (1) the development of advanced three dimensional (3-D) computational fluid dynamics codes, (2) the production of simpler two dimensional (2-D) industrial codes, and (3) the development of a knowledge based system (KBS) that contains an expert system to assist in seal selection and design. The first task has been to concentrate on cylindrical geometries with straight, tapered, and stepped bores. Improvements have been made by adoption of a colocated grid formulation, incorporation of higher order, time accurate schemes for transient analysis, and high order discretization schemes for spatial derivatives. This report describes the mathematical formulations and presents a variety of 2-D results, including labyrinth and brush seal flows. Extension to 3-D is presently in progress.

  9. VizieR Online Data Catalog: Habitable zone code (Valle+, 2014)

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2014-06-01

    A C computation code that provides as output the distance dm (in AU) for which the duration of habitability is longest, the corresponding duration tm (in Gyr), the width W (in AU) of the zone for which the habitability lasts tm/2, and the inner (Ri) and outer (Ro) boundaries of the 4 Gyr continuously habitable zone. The code reads the input file HZ-input.dat, containing in each row the mass of the host star (range: 0.70-1.10M⊙), its metallicity (either Z (range: 0.005-0.004) or [Fe/H]), the helium-to-metal enrichment ratio (range: 1-3, standard value = 2), the equilibrium temperature for the habitable zone outer boundary computation (range: 169-203K), and the planet Bond albedo (range: 0.0-1.0, Earth = 0.3). The output is printed on-screen. Compilation: just use your favorite C compiler: gcc hz.c -lm -o HZ (2 data files).
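
    A minimal way to drive the code, assuming whitespace-separated columns in the row order listed above; the column order and file name come from the record, while the separator and the example values are assumptions:

      import subprocess

      row = (1.00,   # host-star mass in solar masses (0.70-1.10)
             0.01,   # metallicity Z (or [Fe/H])
             2.0,    # helium-to-metal enrichment ratio (1-3, standard value 2)
             188.0,  # equilibrium temperature for the outer boundary (169-203 K)
             0.3)    # planet Bond albedo (0.0-1.0, Earth = 0.3)

      with open("HZ-input.dat", "w") as f:
          f.write(" ".join(str(v) for v in row) + "\n")

      subprocess.run(["./HZ"])  # compiled with: gcc hz.c -lm -o HZ; results print on screen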

  10. 10 CFR 2.1003 - Availability of material.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...

  11. 10 CFR 2.1003 - Availability of material.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...

  12. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
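
    The transform-coding idea can be pictured on a single block: transform, discard high-frequency coefficients (the step that buys the low bit rate for chrominance), and invert. The sketch below uses an orthonormal 8x8 DCT built from the standard cosine basis and is purely illustrative; it omits the quantizer and entropy coder that determine the actual bits-per-element figures quoted above.

      import numpy as np

      N = 8
      n = np.arange(N)
      C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
      C[0, :] /= np.sqrt(2.0)                      # orthonormal DCT-II basis

      block = np.random.randint(0, 256, (N, N)).astype(float)  # stand-in chrominance block
      coeff = C @ block @ C.T                      # forward 2-D transform
      kept = np.zeros_like(coeff)
      kept[:4, :4] = coeff[:4, :4]                 # retain 16 of 64 coefficients
      approx = C.T @ kept @ C                      # inverse transform
      print(np.abs(block - approx).mean())         # mean reconstruction error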

  13. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  14. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  15. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  16. BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package

    NASA Astrophysics Data System (ADS)

    Mitran, Sorin

    2011-04-01

    The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.

  17. Particle Impact Erosion. Volume 4. User’s Manual Erosion Prediction Procedure for Rocket Nozzle Expansion Region

    DTIC Science & Technology

    1983-05-01

    The erosion prediction procedure uses an empirical erosion model, with use of the debris-layer model optional. ISPP is a collection of computer codes designed to calculate... an expansion with the ODK code, a two-dimensional, two-phase nozzle expansion with the TD2P code, and a turbulent boundary layer solution along the... (The remaining text is table-of-contents and flow-chart residue referencing the SSP namelist: ODE, BAL, ODK, TD2P, TEL, nozzle geometry.)

  18. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

    Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU-PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms have made efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
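
    One ingredient mentioned above, parallel particle sorting, amounts to binning particles by cell so that all particles of a cell are contiguous before deposition. The NumPy sketch below shows that binning step only; it is a serial stand-in for the CUDA kernels, and the grid parameters are arbitrary.

      import numpy as np

      def sort_particles_by_cell(x, y, dx, dy, nx, ny):
          # Flattened cell index of each particle, then a stable sort so that particles
          # belonging to the same cell end up contiguous in memory.
          cell = np.floor(y / dy).astype(int) * nx + np.floor(x / dx).astype(int)
          order = np.argsort(cell, kind="stable")
          counts = np.bincount(cell, minlength=nx * ny)        # particles per cell
          offsets = np.concatenate(([0], np.cumsum(counts)))   # start index of each cell
          return x[order], y[order], offsets

      x = np.random.uniform(0.0, 1.0, 10000)
      y = np.random.uniform(0.0, 1.0, 10000)
      xs, ys, offsets = sort_particles_by_cell(x, y, dx=0.1, dy=0.1, nx=10, ny=10)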

  19. Inclusion of pressure and flow in a new 3D MHD equilibrium code

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel; Fukuyama, Atsushi

    2012-10-01

    Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012). [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys, 81(1):102-136 (1989).

  20. SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, X; Folkerts, M; Shi, F

    Purpose: The Graphics Processing Unit (GPU) has become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, repeated effort is often spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released a SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. The developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functioning. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.

  1. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  2. Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach

    NASA Technical Reports Server (NTRS)

    Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.

    1986-01-01

    A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.

  3. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  4. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  5. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  6. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...

  7. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  8. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors, each running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)

  9. Infrared imaging - A validation technique for computational fluid dynamics codes used in STOVL applications

    NASA Technical Reports Server (NTRS)

    Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.

    1991-01-01

    The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.

  10. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...

  11. ATHENA 3D: A finite element code for ultrasonic wave propagation

    NASA Astrophysics Data System (ADS)

    Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.

    2014-04-01

    The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. A particular feature of the code is that the discretization of the calculation domain uses a regular Cartesian 3D mesh, while a defect of complex geometry can be described on a separate (2D) mesh through the fictitious domains method. This combines the speed of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme, so that only small local linear systems have to be solved. The final reduction in computation time comes from the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local computer and computation cluster use.

  12. Three-dimensional structural analysis using interactive graphics

    NASA Technical Reports Server (NTRS)

    Biffle, J.; Sumlin, H. A.

    1975-01-01

    The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code PLTZ may be used to perform all the needed tasks for output processing from a finite element code. Examples were illustrated.

  13. SINGER: A Computer Code for General Analysis of Two-Dimensional Reinforced Concrete Structures. Volume 1. Solution Process

    DTIC Science & Technology

    1975-05-01

    Report front matter only: AFWL-TR-74-228, Vol. I, "SINGER: A Computer Code for General Analysis of Two-Dimensional Concrete Structures, Volume I," May 1975, distributed by the National Technical Information Service, U.S. Department of Commerce. The fragment also cites the Conference on Earthquake Engineering, Santiago de Chile, 13-18 January 1969, Vol. I, Session B2, Chilean Association on Seismology and Earthquake Engineering.

  14. Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors

    DTIC Science & Technology

    2001-03-01

    Report front matter only: a list-of-tables entry (heats of vaporization parameters for two-liner phase transformation, complete liner sublimation and/or combined liner...), abbreviation definitions (1-D: one-dimensional; 2-D: two-dimensional; ALE3D: Arbitrary-Lagrange-Eulerian (3-D) computer code; ALEGRA: 3-D Arbitrary-Lagrange-Eulerian computer code), and a fragment noting that case-liner bond areas and the grain inner bore are examined to explore the pre-ignition and ignition phases, as well as burning evolution in rocket motor fast cookoff.

  15. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il

    2014-08-15

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.

  16. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    Computational Fluid Dynamics (CFD) has considerably evolved in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missile, and ramjet nozzles. One of the problems of interest to NASA has always been the performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines it is necessary to resolve the flowfield into a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid is associated with a prohibitively increasing computational time that can downgrade the value of the CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF defined combustion chamber wall geometry. These options minimize the learning effort for TDK users, and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single zone formulation of the LTCP has limited the code to be applicable to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
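
    The zone/ghost-cell idea behind the multi-zone decomposition can be shown on a toy 1-D diffusion problem: each zone advances its own interior and then exchanges interface values with its neighbour. This is only a schematic of the decomposition, run here sequentially; in the rewritten LTCP each zone would run on its own processor, and the real equations are the Navier-Stokes system with chemistry, not a scalar diffusion equation.

      import numpy as np

      def step(u, nu=0.25):
          # One explicit diffusion update of the interior points; boundaries stay fixed.
          unew = u.copy()
          unew[1:-1] = u[1:-1] + nu * (u[2:] - 2.0 * u[1:-1] + u[:-2])
          return unew

      u0 = np.zeros(101); u0[50] = 1.0               # initial field with a spike

      ref = u0.copy()                                # reference: single-domain march
      for _ in range(100):
          ref = step(ref)

      # Two zones with a two-point overlap (one owned interface point + one ghost each).
      left, right = u0[:52].copy(), u0[50:].copy()
      for _ in range(100):
          left, right = step(left), step(right)
          left[-1], right[0] = right[1], left[-2]    # exchange interface/ghost values

      # The two-zone march reproduces the single-domain march (difference ~ 0).
      print(np.abs(np.concatenate([left[:-1], right[1:]]) - ref).max())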

  17. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of the squares of the sidelobes in the autocorrelation function.
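
    For reference, the two figures of merit can be computed directly from the periodic (cyclic) autocorrelation. The sketch below does so for a length-7 binary sequence with ideal two-valued periodic autocorrelation; the sequence is only an example, not one of the optimal length 28 to 64 codes reported.

      import numpy as np

      def periodic_autocorrelation(code):
          s = np.asarray(code, dtype=float)           # elements are +1/-1
          return np.array([np.dot(s, np.roll(s, k)) for k in range(len(s))])

      code = [1, 1, 1, -1, -1, 1, -1]                 # periodic autocorrelation 7, -1, ..., -1
      r = periodic_autocorrelation(code)
      sidelobes = r[1:]                               # r[0] is the in-phase peak
      print(int(r[0]), int(np.max(np.abs(sidelobes))), int(np.sum(sidelobes ** 2)))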

  18. CFL3D User's Manual (Version 5.0)

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.

    1998-01-01

    This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.

  19. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...

  20. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    NASA Technical Reports Server (NTRS)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to the PC architecture.
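
    The underlying computation is a forward-difference Jacobian in which each column is an independent function evaluation, which is what makes it embarrassingly parallel. The Python sketch below is a stand-in for the paper's MATLAB spmd approach and dispatches columns to a thread pool; as the paper itself observes, whether this yields real speedup depends on the runtime and hardware.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def numerical_jacobian(f, x, h=1e-6, workers=4):
          fx = f(x)
          def column(j):
              # Each column only needs one extra evaluation of f, independent of the others.
              xp = x.copy()
              xp[j] += h
              return (f(xp) - fx) / h
          with ThreadPoolExecutor(max_workers=workers) as pool:
              cols = list(pool.map(column, range(x.size)))
          return np.column_stack(cols)

      f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])
      print(numerical_jacobian(f, np.array([1.0, 2.0])))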

  1. Workshop report - A validation study of Navier-Stokes codes for transverse injection into a Mach 2 flow

    NASA Technical Reports Server (NTRS)

    Eklund, Dean R.; Northam, G. B.; Mcdaniel, J. C.; Smith, Cliff

    1992-01-01

    A CFD (Computational Fluid Dynamics) competition was held at the Third Scramjet Combustor Modeling Workshop to assess the current state-of-the-art in CFD codes for the analysis of scramjet combustors. Solutions from six three-dimensional Navier-Stokes codes were compared for the case of staged injection of air behind a step into a Mach 2 flow. This case was investigated experimentally at the University of Virginia and extensive in-stream data was obtained. Code-to-code comparisons have been made with regard to both accuracy and efficiency. The turbulence models employed in the solutions are believed to be a major source of discrepancy between the six solutions.

  2. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work has been contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system modeling was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding, with minimal computational complexity.
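
    Walsh-Hadamard spreading codes of length N can be generated by the Sylvester recursion, and counting sign changes per sequence (its sequency) yields one value in 0 to N-1 per code, which is one natural reading of the 'spectral spreading values' noted above. Both that reading and the sketch itself are our own illustration; the WH-ZCZ seeding is not reproduced.

      import numpy as np

      def walsh_hadamard(n):                   # n must be a power of two
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])  # Sylvester construction
          return H

      H = walsh_hadamard(8)
      assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))   # rows are mutually orthogonal
      sequency = [(np.diff(row) != 0).sum() for row in H]        # sign changes per code sequence
      print(sorted(sequency))                                    # 0..7, one value per sequence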

  3. Air Launch Instrumented Vehicles Evaluation (ALIVE).

    DTIC Science & Technology

    1977-02-01

    Only fragments of this report are available for this record: the study addressed aging of two 12-inch-diameter, SRBDM-type motors cast with moderate-burning-rate propellant. The remainder is figure-list residue (stress intensity factor vs. half crack length; stress intensity factor/load vs. half crack length; log stress intensity factor vs. log crack tip velocity for a strip biaxial specimen; log stress intensity factor adjusted for strain).

  4. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
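
    The time-marching kernel named above, a four-stage Runge-Kutta update, is illustrated below with the common Jameson-type stage coefficients (1/4, 1/3, 1/2, 1) applied to a scalar toy ODE. The coefficients and the toy residual are illustrative assumptions; ADPAC's actual residual operator, dissipation, and multigrid acceleration are not reproduced.

      def rk4_stage_march(u, residual, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
          # Each stage restarts from the value at the beginning of the step.
          u0 = u
          for a in alphas:
              u = u0 + a * dt * residual(u)
          return u

      # Example: decay equation du/dt = -u, exact solution exp(-t).
      u, dt = 1.0, 0.1
      for _ in range(10):
          u = rk4_stage_march(u, lambda v: -v, dt)
      print(u)   # close to exp(-1) ~ 0.3679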

  5. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.

    2004-09-14

    This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.

  6. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.

  7. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
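
    The idea of combining individual (unconstrained) sensitivities into a constrained one is an application of the chain rule, and it can be checked numerically on a toy response. In the sketch below the constraint is a fixed total density, so a perturbation of one density is compensated by the other; the response function and numbers are invented for illustration and are not the Perkó et al. formulas.

      def R(n1, n2):                      # hypothetical smooth response function
          return n1 ** 2 * n2 + 3.0 * n2

      n1, n2, h = 2.0, 5.0, 1e-6
      dR_dn1 = (R(n1 + h, n2) - R(n1, n2)) / h          # unconstrained sensitivities
      dR_dn2 = (R(n1, n2 + h) - R(n1, n2)) / h
      constrained = dR_dn1 - dR_dn2                     # chain rule under N1 + N2 = const

      direct = (R(n1 + h, n2 - h) - R(n1, n2)) / h      # direct check of the same derivative
      print(constrained, direct)                        # the two values agree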

  8. Computation of Sound Generated by Flow Over a Circular Cylinder: An Acoustic Analogy Approach

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Cox, Jared S.; Rumsey, Christopher L.; Younis, Bassam A.

    1997-01-01

    The sound generated by viscous flow past a circular cylinder is predicted via the Lighthill acoustic analogy approach. The two-dimensional flow field is predicted using two unsteady Reynolds-averaged Navier-Stokes solvers. Flow field computations are made for laminar flow at three Reynolds numbers (Re = 1000, Re = 10,000, and Re = 90,000) and with two different turbulence models at Re = 90,000. The unsteady surface pressures are utilized by an acoustics code that implements Farassat's formulation 1A to predict the acoustic field. The acoustic code is a 3-D code - 2-D results are found by using a long cylinder length. The 2-D predictions overpredict the acoustic amplitude; however, if correlation lengths in the range of 3 to 10 cylinder diameters are used, the predicted acoustic amplitude agrees well with experiment.

  9. Majorana fermion surface code for universal quantum computation

    DOE PAGES

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-12-10

    In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z_2 topological order with a Z_2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  10. Lewis Structures Technology, 1988. Volume 2: Structural Mechanics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Lewis Structures Div. performs and disseminates results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The engineering community was familiarized with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  11. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  12. Coding efficiency of AVS 2.0 for CBAC and CABAC engines

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik

    2015-12-01

    In this paper we compare the coding efficiency of AVS 2.0[1] for two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] used in AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] used in the HEVC[4]. For a fair comparison, the CABAC is embedded in the reference code RD10.1, complementing our previous work[5] in which the CBAC was embedded in the HEVC. In the RD code the rate estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we also simplify the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation result shows that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
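
    The fractional-bit-depth trade-off can be pictured with a generic fixed-point rate table (not the actual AVS or HEVC tables): estimated bits -log2(p) for a set of symbol probabilities are stored with f fractional bits, and shrinking f from 8 to 2 coarsens the estimates. Everything in the sketch is an illustrative assumption.

      import math

      def rate_table(probs, frac_bits):
          # Store -log2(p) as a fixed-point integer with frac_bits fractional bits.
          scale = 1 << frac_bits
          return [round(-math.log2(p) * scale) for p in probs]

      probs = [0.5, 0.3, 0.1, 0.05, 0.01]
      for f in (8, 2):
          table = rate_table(probs, f)
          err = max(abs(t / (1 << f) + math.log2(p)) for t, p in zip(table, probs))
          print(f, table, round(err, 4))   # coarser table, larger worst-case rounding error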

  13. Extensions and improvements on XTRAN3S

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime has concentrated on the following areas: (1) Maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) Extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) Modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) Extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.

  14. POLYSHIFT Communications Software for the Connection Machine System CM-200

    DOE PAGES

    George, William; Brickner, Ralph G.; Johnsson, S. Lennart

    1994-01-01

    We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations, such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
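
    The two shift flavours are easy to picture on a small array. The NumPy sketch below is only an analogy for the CM Fortran PSHIFT/CSHIFT array operations, showing a wrap-around (circular) shift next to an end-off shift that discards shifted-out elements and fills with a boundary value.

      import numpy as np

      def circular_shift(a, k):
          # Elements shifted off one end reappear at the other (wrap-around).
          return np.roll(a, k)

      def end_off_shift(a, k, fill=0):
          # Elements shifted off the end are discarded; vacated slots get the fill value.
          out = np.full_like(a, fill)
          if k > 0:
              out[k:] = a[:-k]
          elif k < 0:
              out[:k] = a[-k:]
          else:
              out[:] = a
          return out

      a = np.array([1, 2, 3, 4, 5])
      print(circular_shift(a, 1))     # [5 1 2 3 4]
      print(end_off_shift(a, 1))      # [0 1 2 3 4]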

  15. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011, a detailed discussion of this new simulation tool is given; (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code, more information on how these models were developed and benchmarked is given; (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations and a discussion of the significance of this advance is given; (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given; (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP. This is a new initiative; (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability; (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed.
This important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  16. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  17. Annual Report of the ECSU Home-Institution Support Program (1993)

    DTIC Science & Technology

    1993-09-30

    summer of 1992. Stephanie plans to attend graduate school at the University of Alabama at Birmingham. ... Deborah Jones has attended the ISSP program for ... computer equipment; Component #2: a visiting lecturer series; Component #3: student pay and faculty release time; Component #4: student/sponsor travel program ... PART I: A succinct narrative which should

  18. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  19. Computing the cross sections of nuclear reactions with nuclear clusters emission for proton energies between 30 MeV and 2.6 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru; Frolova, T. A.

    2016-12-15

    The cross sections of nuclear reactions involving emission of clusters of light nuclei in proton collisions with a heavy-metal target are computed for incident-proton energies between 30 MeV and 2.6 GeV. The calculation relies on the ALICE/ASH and CASCADE/INPE computer codes. The parameters determining the pre-equilibrium cluster emission are varied in the computation.

  20. A prototype Knowledge-Based System to Aid Space System Restoration Management.

    DTIC Science & Technology

    1986-12-01

    ... Systems ... Appendix B: Computation of Weights With AHP; Appendix C: ART Code; Appendix D: Test Outputs ... 5.1 Earth Coverage With Geosynchronous Satellites; 5.2 Space System Configurations; 5.3 AHP Hierarchy; 5.4 AHP Hierarchy With Weights; 6.1 TALK Schema Structure; 6.2 ART Code for TALK Satellite C

  1. Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, R.; Meyers, C. A.; Stinson, H. C.

    1989-01-01

    Results are presented from the comparison study of two computer codes for crack growth analysis: NASCRAC and NASA/FLAGRO. The two computer codes gave compatible, conservative results when the part-through-crack analysis solutions were compared against experimental test data. Results showed good correlation between the codes for the through-crack-at-a-lug solution, for which NASA/FLAGRO gave the more conservative results.
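
    Neither code's internal solution library is reproduced here, but the fatigue-crack-growth bookkeeping that such a comparison exercises can be sketched with a simple Paris-law integration; the constants below are placeholders, not values from NASCRAC or NASA/FLAGRO.

        import math

        def grow_crack(a0, a_crit, delta_sigma, C, m, Y=1.0, da=1.0e-5):
            """Integrate da/dN = C * (dK)^m from a0 until the crack reaches a_crit.
            dK = Y * delta_sigma * sqrt(pi * a).  Returns the number of load cycles."""
            a, cycles = a0, 0.0
            while a < a_crit:
                dK = Y * delta_sigma * math.sqrt(math.pi * a)   # stress-intensity range
                cycles += da / (C * dK ** m)                    # cycles consumed growing by da
                a += da
            return cycles

        # Placeholder data: 2 mm initial flaw, 25 mm critical size, 100 MPa stress range
        print(grow_crack(a0=0.002, a_crit=0.025, delta_sigma=100.0, C=1.0e-11, m=3.0))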

  2. Computational Predictions of the Performance of Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

    Computational analysis of two 1911 Wright brothers 'Bent End' wooden propeller reproductions has been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.
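
    As a reminder of how the design-code outputs mentioned above relate to one another, the short sketch below converts thrust and power into the standard non-dimensional coefficients and the propulsive efficiency; the numerical inputs are invented, not Wright-propeller data, and the formulas are the usual definitions rather than the code's internals.

        def propeller_coefficients(thrust, power, rho, n, D, V):
            """Standard propeller non-dimensionalisation:
            CT = T/(rho n^2 D^4), CP = P/(rho n^3 D^5), J = V/(n D), eta = J*CT/CP = T*V/P."""
            CT = thrust / (rho * n**2 * D**4)
            CP = power / (rho * n**3 * D**5)
            J = V / (n * D)
            eta = J * CT / CP
            return CT, CP, J, eta

        # Invented example: 350 N thrust, 9 kW shaft power, 5.5 rev/s, 2.6 m diameter, 10 m/s airspeed
        print(propeller_coefficients(350.0, 9000.0, 1.225, 5.5, 2.6, 10.0))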

  3. [The QR code in society, economy and medicine--fields of application, options and chances].

    PubMed

    Flaig, Benno; Parzeller, Markus

    2011-01-01

    2D codes like the QR Code ("Quick Response") are becoming more and more common in society and medicine. The application spectrum and benefits in medicine and other fields are described. 2D codes can be created free of charge on any computer with internet access without any previous knowledge. The codes can be easily used in publications, presentations, on business cards and posters. Editors choose between contact details, text or a hyperlink as information behind the code. At expert conferences, linkage by QR Code allows the audience to download presentations and posters quickly. The documents obtained can then be saved, printed, processed etc. Fast access to stored data in the internet makes it possible to integrate additional and explanatory multilingual videos into medical posters. In this context, a combination of different technologies (printed handout, QR Code and screen) may be reasonable.
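
    As an illustration of how little effort code generation takes, the sketch below uses the third-party Python package qrcode (an assumption about the reader's toolchain, not something prescribed by the article) to embed a hyperlink behind a code suitable for a poster or business card.

        # pip install qrcode[pil]   (third-party package assumed to be installed)
        import qrcode

        # The URL is a placeholder; in practice it would point to the poster, slides, or video.
        img = qrcode.make("https://example.org/conference/poster42")
        img.save("poster_qr.png")   # print the resulting PNG on the poster or handout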

  4. Proceduracy: Computer Code Writing in the Continuum of Literacy

    ERIC Educational Resources Information Center

    Vee, Annette

    2010-01-01

    This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…

  5. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  6. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  7. Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)

    2002-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of the NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
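
    The far-field quantity reported in such validations is the sound pressure level; a minimal sketch of converting an acoustic pressure time history into SPL, using the standard 20 µPa reference, is shown below with synthetic data (it illustrates the definition, not the OVERFLOW/Kirchhoff tool chain).

        import numpy as np

        P_REF = 20.0e-6  # Pa, standard reference pressure for SPL in air

        def sound_pressure_level(p):
            """SPL in dB from a zero-mean acoustic pressure time history (Pa)."""
            p_rms = np.sqrt(np.mean(np.square(p)))
            return 20.0 * np.log10(p_rms / P_REF)

        # Synthetic 1 kHz tone with 2 Pa amplitude sampled at 50 kHz
        t = np.arange(0, 0.1, 1.0 / 50000.0)
        p = 2.0 * np.sin(2.0 * np.pi * 1000.0 * t)
        print(sound_pressure_level(p))   # about 97 dB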

  8. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of the NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  9. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  10. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard; this, however, comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is part of the inter-picture prediction process, typically consumes a large share of the computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow motion information to be processed at the fractional sample level, integer-sample motion search algorithms remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed that allows a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit-rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for Random Access (RA) and Low Delay P (LDP) configurations, respectively. A similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, at a bit-rate overhead of up to 5.2%.
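
    To make the ingredients concrete, here is a compact sketch of a predictive, pattern-based integer-sample search with a SAD cost and a simple early-termination test; it illustrates the general technique, not the specific algorithm derived in the paper or standardised in HM.

        import numpy as np

        def sad(block, ref, x, y):
            h, w = block.shape
            return int(np.abs(ref[y:y + h, x:x + w].astype(np.int32) - block.astype(np.int32)).sum())

        def pattern_search(block, ref, x0, y0, max_step=8, sad_threshold=0):
            """Integer-sample motion search around a predicted position (x0, y0):
            axis-aligned pattern refinement with halving step size and early termination."""
            h, w = block.shape
            best, best_cost = (x0, y0), sad(block, ref, x0, y0)
            step = max_step
            while step >= 1:
                improved = True
                while improved and best_cost > sad_threshold:   # stop early on a 'good enough' match
                    improved = False
                    bx, by = best
                    for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                        x, y = bx + dx, by + dy
                        if 0 <= x <= ref.shape[1] - w and 0 <= y <= ref.shape[0] - h:
                            cost = sad(block, ref, x, y)
                            if cost < best_cost:
                                best, best_cost, improved = (x, y), cost, True
                step //= 2
            return best, best_cost

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        cur_block = ref[20:28, 30:38].copy()
        # A good motion-vector predictor lands on the true match (30, 20), so the
        # early-termination test fires immediately and no refinement steps are spent.
        print(pattern_search(cur_block, ref, x0=30, y0=20))   # ((30, 20), 0)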

  11. Thermodynamic properties of gaseous fluorocarbons and isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon

    NASA Technical Reports Server (NTRS)

    Talcott, N. A., Jr.

    1977-01-01

    Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8), and pressure-enthalpy diagrams are presented for CBrF3.

  12. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  13. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  14. Computational and experimental investigation of two-dimensional scramjet inlets and hypersonic flow over a sharp flat plate

    NASA Astrophysics Data System (ADS)

    Messitt, Donald G.

    1999-11-01

    The WIND code was employed to compute the hypersonic flow in the shock wave boundary layer merged region near the leading edge of a sharp flat plate. Solutions were obtained at Mach numbers from 9.86 to 15.0 and free stream Reynolds numbers of 3,467 to 346,700 in⁻¹ (1.365 × 10⁵ to 1.365 × 10⁷ m⁻¹) for perfect gas conditions. The numerical results indicated a merged shock wave and viscous layer near the leading edge. The merged region grew in size with increasing free stream Mach number, proportional to M∞²/Re∞. Profiles of the static pressure in the merged region indicated a strong normal pressure gradient (∂p/∂y). The normal pressure gradient has been neglected in previous analyses which used the boundary layer equations. The shock wave near the leading edge was thick, as has been experimentally observed. Computed shock wave locations and surface pressures agreed well within experimental error for values of the rarefaction parameter χ/M∞² < 0.3. A preliminary analysis using kinetic theory indicated that rarefied flow effects became important above this value. In particular, the WIND solution agreed well in the transition region between the merged flow, which was predicted well by the theory of Li and Nagamatsu, and the downstream region where the strong interaction theory applied. Additional computations with the NPARC code, WIND's predecessor, demonstrated the ability of the code to compute hypersonic inlet flows at free stream Mach numbers up to 20. Good qualitative agreement with measured pressure data indicated that the code captured the important physical features of the shock wave - boundary layer interactions. The computed surface and pitot pressures fell within the combined experimental and numerical error bounds for most points. The calculations demonstrated the need for extremely fine grids when computing hypersonic interaction flows.

  15. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
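
    The error-detection step common to all three codes is a photon-number parity measurement; the toy sketch below evaluates the parity expectation of a Fock-space state vector and flags a single-photon-loss event. It illustrates the parity principle only, not the χ(2) encodings themselves, and the example state is a generic even-parity superposition chosen for the demonstration.

        import numpy as np

        def parity_expectation(psi):
            """<(-1)^n> for a state vector psi given in the Fock (photon-number) basis."""
            n = np.arange(len(psi))
            probs = np.abs(psi) ** 2
            return float(np.sum(((-1.0) ** n) * probs))

        def lose_one_photon(psi):
            """Apply the annihilation operator a and renormalise (models a single photon loss)."""
            n = np.arange(len(psi))
            lowered = np.zeros_like(psi)
            lowered[:-1] = np.sqrt(n[1:]) * psi[1:]
            return lowered / np.linalg.norm(lowered)

        # Even-parity state (|0> + |4>)/sqrt(2) in a 5-level Fock truncation
        psi = np.zeros(5, dtype=complex)
        psi[0] = psi[4] = 1.0 / np.sqrt(2.0)
        print(parity_expectation(psi))                   # +1: no error detected
        print(parity_expectation(lose_one_photon(psi)))  # -1: photon loss flagged by the parity flip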

  16. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  18. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  19. Mechanistic prediction of fission-gas behavior during in-cell transient heating tests on LWR fuel using the GRASS-SST and FASTGRASS computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J; Gehl, S M

    1979-01-01

    GRASS-SST and FASTGRASS are mechanistic computer codes for predicting fission-gas behavior in UO2-base fuels during steady-state and transient conditions. FASTGRASS was developed in order to satisfy the need for a fast-running alternative to GRASS-SST. Although based on GRASS-SST, FASTGRASS is approximately an order of magnitude quicker in execution. The GRASS-SST transient analysis has evolved through comparisons of code predictions with the fission-gas release and physical phenomena that occur during reactor operation and transient direct-electrical-heating (DEH) testing of irradiated light-water reactor fuel. The FASTGRASS calculational procedure is described in this paper, along with models of key physical processes included in both FASTGRASS and GRASS-SST. Predictions of fission-gas release obtained from GRASS-SST and FASTGRASS analyses are compared with experimental observations from a series of DEH tests. The major conclusion is that the computer codes should include an improved model for the evolution of the grain-edge porosity.

  20. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
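
    Each implicit sweep of an alternating-direction procedure of this kind reduces to solving a tridiagonal system along every grid line; a generic Thomas-algorithm sketch of that per-line kernel is shown below. It illustrates the standard algorithm under that assumption and is not PROTEUS source code.

        import numpy as np

        def thomas(lower, diag, upper, rhs):
            """Solve lower[i]*x[i-1] + diag[i]*x[i] + upper[i]*x[i+1] = rhs[i].
            lower[0] and upper[-1] are ignored.  This is the per-line kernel of an ADI sweep."""
            n = len(diag)
            b = np.array(diag, dtype=float)
            c = np.array(upper, dtype=float)
            d = np.array(rhs, dtype=float)
            for i in range(1, n):                    # forward elimination
                w = lower[i] / b[i - 1]
                b[i] -= w * c[i - 1]
                d[i] -= w * d[i - 1]
            x = np.empty(n)
            x[-1] = d[-1] / b[-1]
            for i in range(n - 2, -1, -1):           # back substitution
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
            return x

        # Small check: -x[i-1] + 4 x[i] - x[i+1] = 1 on five points
        print(thomas([0, -1, -1, -1, -1], [4, 4, 4, 4, 4], [-1, -1, -1, -1, 0], [1, 1, 1, 1, 1]))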

  1. Field estimates of gravity terrain corrections and Y2K-compatible method to convert from gravity readings with multiple base stations to tide- and long-term drift-corrected observations

    USGS Publications Warehouse

    Plouff, Donald

    2000-01-01

    Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first must be converted (compiled) into an executable form on the user's computer. Although program testing was done in a UNIX (tradename of American Telephone and Telegraph Company) computer environment, it is anticipated that only a system-dependent date-and-time function may need to be changed for adaptation to other computer platforms that accept standard Fortran code.
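
    A hypothetical fragment of the drift step is sketched below: repeated readings at a base station bracket a loop, and intermediate stations are corrected by linear interpolation of the apparent drift in time. The station names and numbers are invented; the actual USGS programs are Fortran and also handle tides, meter calibration, and multiple base stations, all omitted here.

        def drift_corrected(readings, base_name="BASE"):
            """readings: list of (station, time_hours, meter_reading_mGal), with the first and
            last entries taken at the same base station.  Returns drift-corrected values."""
            first, last = readings[0], readings[-1]
            assert first[0] == last[0] == base_name, "loop must open and close on the base station"
            drift_rate = (last[2] - first[2]) / (last[1] - first[1])   # mGal per hour of apparent drift
            t0, r0 = first[1], first[2]
            return [(sta, r - r0 - drift_rate * (t - t0)) for sta, t, r in readings]

        loop = [("BASE", 0.0, 2531.420),
                ("G-101", 1.5, 2518.350),
                ("G-102", 2.7, 2540.110),
                ("BASE", 4.0, 2531.470)]   # base re-read 0.05 mGal high, so a linear drift is assumed
        print(drift_corrected(loop))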

  2. Nonoccurrence of Negotiation of Meaning in Task-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Van Der Zwaard, Rose; Bannink, Anne

    2016-01-01

    This empirical study investigated the occurrence of meaning negotiation in an interactive synchronous computer-mediated second language (L2) environment. Sixteen dyads (N = 32) consisting of nonnative speakers (NNSs) and native speakers (NSs) of English performed 2 different tasks using videoconferencing and written chat. The data were coded and…

  3. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  4. PASCO: Structural panel analysis and sizing code: Users manual - Revised

    NASA Technical Reports Server (NTRS)

    Anderson, M. S.; Stroud, W. J.; Durling, B. J.; Hennessy, K. W.

    1981-01-01

    A computer code denoted PASCO is described for analyzing and sizing uniaxially stiffened composite panels. Buckling and vibration analyses are carried out with a linked plate analysis computer code denoted VIPASA, which is included in PASCO. Sizing is based on nonlinear mathematical programming techniques and employs a computer code denoted CONMIN, also included in PASCO. Design requirements considered are initial buckling, material strength, stiffness and vibration frequency. A user's manual for PASCO is presented.

  5. Computation of Reacting Flows in Combustion Processes

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Chen, Kuo-Huey

    1997-01-01

    The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D, a program for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations, and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
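
    The "arbitrary number of reactions" idea amounts to evaluating user-supplied rate expressions; a minimal sketch with modified Arrhenius rate constants is given below. The species names, coefficients, and rate data are invented placeholders, not ALLSPD-3D input.

        import math

        R_UNIV = 8.314  # J/(mol K)

        def arrhenius(A, b, Ea, T):
            """Modified Arrhenius rate constant k = A * T^b * exp(-Ea / (R T))."""
            return A * T**b * math.exp(-Ea / (R_UNIV * T))

        def reaction_rate(k, concentrations, reactant_orders):
            """Rate = k * prod(C_i ** order_i) for one user-defined reaction step."""
            rate = k
            for species, order in reactant_orders.items():
                rate *= concentrations[species] ** order
            return rate

        # Invented single-step fuel + oxidiser reaction evaluated at 1500 K
        k = arrhenius(A=2.0e9, b=0.0, Ea=1.2e5, T=1500.0)
        conc = {"FUEL": 0.8, "O2": 2.1}              # mol/m^3
        print(reaction_rate(k, conc, {"FUEL": 1.0, "O2": 0.5}))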

  6. Computer program optimizes design of nuclear radiation shields

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1971-01-01

    Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.
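
    The dose-thickness relationship such a program manipulates can be pictured with a simple exponential-attenuation sketch that finds the smallest single-layer thickness meeting a dose limit; the attenuation coefficient, density, and dose values below are placeholders, not OPEX 2 data or its optimization method.

        import math

        def thickness_for_dose(dose_unshielded, dose_limit, mu):
            """Smallest thickness t (cm) with dose_unshielded * exp(-mu * t) <= dose_limit."""
            return math.log(dose_unshielded / dose_limit) / mu

        def areal_weight(t_cm, density_g_cm3):
            """Shield weight per unit area, g/cm^2, for a single-material layer."""
            return t_cm * density_g_cm3

        t = thickness_for_dose(dose_unshielded=500.0, dose_limit=5.0, mu=0.25)   # placeholder values
        print(t, areal_weight(t, density_g_cm3=11.3))   # ~18.4 cm of a lead-like material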

  7. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size could be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  8. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  9. Final report for the Tera Computer TTI CRADA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, G.S.; Pavlakos, C.; Silva, C.

    1997-01-01

    Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes, which would be appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has just completed the MTA hardware. All of the CRADA work had to be done on simulators.

  10. An Analysis and Procedure for Determining Space Environmental Sink Temperatures With Selected Computational Results

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of the thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (a radiator or photovoltaic array) in kelvin, the surface absorptivity-to-emissivity ratio α/ε with respect to the solar radiation and (α/ε)₂ with respect to planetary radiation; and (4) the surface view factor to space F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
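
    The balance TSCALC evaluates can be illustrated with a single-node radiative-equilibrium sketch built from the variables listed above. The formula used is the generic equilibrium balance (α/ε) S cosθ = σ T⁴ F with placeholder inputs; it is not the TSCALC source and omits planetary and albedo terms.

        import math

        SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
        S_1AU = 1361.0        # approximate solar constant at 1 AU, W/m^2

        def equilibrium_temperature(au, illum_angle_deg, alpha_over_eps, view_factor):
            """Surface temperature (K) at which absorbed solar flux equals emitted flux:
            (alpha/eps) * S(au) * cos(theta) = sigma * T^4 * F."""
            s = S_1AU / au**2                                   # solar flux at the spacecraft distance
            absorbed = alpha_over_eps * s * math.cos(math.radians(illum_angle_deg))
            return (absorbed / (SIGMA * view_factor)) ** 0.25

        # Placeholder case: radiator at 1 AU, Sun 60 deg off normal, alpha/eps = 0.3, full view of space
        print(equilibrium_temperature(au=1.0, illum_angle_deg=60.0, alpha_over_eps=0.3, view_factor=1.0))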

  11. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.

  12. Acid rock drainage passive remediation using alkaline clay and impacts of vegetation and saturated sand barrier

    NASA Astrophysics Data System (ADS)

    Plaza, F.; Wen, Y.; Liang, X.

    2017-12-01

    Acid rock drainage (ARD) caused by abundance of coal refuse (CR) deposits in mining regions requires adequate treatment to prevent serious water pollution due to its acidity and high concentrations of sulfate and metals/metalloids. Over the past decades, various approaches have been explored and developed to remediate ARD. This study uses laboratory experiments to investigate the effectiveness and impacts of ARD passive remediation using alkaline clay (AC), a by-product of the aluminum refining process. Twelve column kinetic leaching experiments were set up with CR/AC mixing ratios ranging from 1%AC to 10%AC. Samples were collected from these columns to measure the pH, sulfate, metals/metalloids, acidity and alkalinity. Additional tests of XRD and acid base accounting were also conducted to better characterize the mineral phase in terms of the alkalinity and acidity potential. Based on the leachate measurement results, these columns were further classified into two groups of neutral/near neutral pH and acidic pH for further analysis. In addition, impacts of the vegetation and saturated sand layer on the remediation effectiveness were explored. The results of our long-term (more than three years in some cases) laboratory experiments show that AC is an effective ARD remediation material for the neutralization of leachate pH and immobilization of sulfate and metals such as Fe, Mn, Cu, Zn, Ni, Pb, Cd, Co. The CR/AC mixing ratios higher than 3%AC are found to be effective, with 10% close to optimal. Moreover, the results demonstrate the benefits of using vegetation and a saturated sand barrier. Vegetation acted as a phytoaccumulation/phytoextraction agent, causing an additional immobilization of metals. The saturated sand barrier blocked the oxygen and water diffusion downwards, leading to a reduction of the pyrite oxidation rate. Finally, the proposed remediation approach shows that the acidity consumption will likely occur before all the alkalinity is exhausted, guaranteeing an adequate long-term performance of this remediation approach.

  13. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
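
    The coupling step described above passes pore pressure into an effective-stress update; a toy sketch of that step with a Coulomb frictional slip check is given below. The Biot coefficient, friction value, and stresses are illustrative, not values from the TOUGH2-PyLith model or its rate-and-state friction law.

        def coulomb_slip(sigma_n, tau, pressure, biot=1.0, mu_f=0.6, cohesion=0.0):
            """Return (effective normal stress, True if the fault slips) for the Coulomb
            criterion tau >= cohesion + mu_f * (sigma_n - biot * p).  Compression positive, MPa."""
            sigma_eff = sigma_n - biot * pressure
            return sigma_eff, tau >= cohesion + mu_f * sigma_eff

        # Tectonic load alone: stable.  Injection raises pore pressure by 12 MPa: slip is triggered.
        print(coulomb_slip(sigma_n=60.0, tau=20.0, pressure=20.0))   # (40.0, False)
        print(coulomb_slip(sigma_n=60.0, tau=20.0, pressure=32.0))   # (28.0, True)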

  14. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  15. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
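
    The regularised model update at the heart of such inversion modules can be written in a few lines; the sketch below is a generic damped least-squares (Tikhonov) step in Python rather than the AP3DMT MATLAB implementation, applied to an invented linear forward operator purely as a demonstration.

        import numpy as np

        def tikhonov_step(J, residual, lam):
            """Solve (J^T J + lam I) dm = J^T residual for the model update dm."""
            JtJ = J.T @ J
            rhs = J.T @ residual
            return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)

        # Invented linear test problem: recover m_true from noisy data d = J m + noise
        rng = np.random.default_rng(1)
        J = rng.normal(size=(40, 10))
        m_true = rng.normal(size=10)
        d = J @ m_true + 0.01 * rng.normal(size=40)

        m = np.zeros(10)
        for _ in range(5):                      # a few Gauss-Newton style damped updates
            m += tikhonov_step(J, d - J @ m, lam=0.1)
        print(np.linalg.norm(m - m_true))       # misfit should be small, near the noise level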

  16. Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu

    2006-04-17

    TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use nonblocking communication. Through improved code structuring and bug fixing, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is improved significantly in its efficiency. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.

  17. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  18. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  19. Code OK3 - An upgraded version of OK2 with beam wobbling function

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Popov, P. H.

    2010-07-01

    For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling beam illumination. The wobbling beam offers a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in the future. New version program summary. Program title: OK3 Catalogue identifier: ADST_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 221 517 No. of bytes in distributed program, including test data, etc.: 2 471 015 Distribution format: tar.gz Programming language: C++ Computer: PC (Pentium 4, 1 GHz or more recommended) Operating system: Windows or UNIX RAM: 2048 MBytes Classification: 19.7 Catalogue identifier of previous version: ADST_v2_0 Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143 Does the new version supersede the previous version?: Yes Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essential [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2, 3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and sufficient fusion output energy is released. Solution method: The OK3 code builds on OK1 and OK2 [2, 3]. The code simulates multi-beam illumination of a target with arbitrary shape and structure, including the beam wobbling function. Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important of which is the beam wobbling function. Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of very complicated mesh structures with large internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process. 
Modified procedures in OK3 Procedure InitBeam( ): This procedure initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only). It is taken from code OK1 and modified to perform with code OK3. Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a CPU of Pentium 4, 2.4 GHz. References:A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.

  20. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method for computing exact sensitivity derivatives (SD) from existing computer programs, for use in multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
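
    The following toy example illustrates the chain-rule bookkeeping that AD automates, using forward-mode dual numbers in Python; it is a conceptual sketch only, not how ADIFOR (a source-to-source Fortran tool) actually works, and the function f below is an invented example.

        # Hedged sketch: forward-mode automatic differentiation with dual numbers.
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            __rmul__ = __mul__

        def f(x):
            return 3 * x * x + 2 * x + 1   # any code path built from overloaded operations

        x = Dual(2.0, 1.0)                 # seed the derivative dx/dx = 1
        y = f(x)
        print(y.val, y.der)                # value 17.0 and exact derivative 14.0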

  1. Fundamentals, current state of the development of, and prospects for further improvement of the new-generation thermal-hydraulic computational HYDRA-IBRAE/LM code for simulation of fast reactor systems

    NASA Astrophysics Data System (ADS)

    Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.

    2016-02-01

    The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation, during anticipated operational occurrences, and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of developing a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and development of models in order to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and lead coolants, the closing equations for simulation of the heat and mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible in the near future to carry out computations for analyzing the safety of potential NPP projects at a qualitatively higher level.

  2. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  3. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

    1995-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers are documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can be parallelized effectively on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided for estimating computational costs relative to changes in the number of grid points. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
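
    As a small aside, the speedup and parallel-efficiency figures quoted in scalability studies such as this one follow directly from per-run wall-clock times; the sketch below shows the bookkeeping with invented timing numbers.

        # Hedged sketch: computing speedup S(p) = T(1)/T(p) and efficiency S(p)/p
        # from measured wall-clock times.  The timings below are placeholders.
        timings = {1: 1200.0, 8: 170.0, 32: 48.0}     # seconds per run (illustrative)

        t1 = timings[1]
        for p in sorted(timings):
            speedup = t1 / timings[p]
            efficiency = speedup / p
            print(f"{p:3d} procs: speedup {speedup:6.1f}, efficiency {efficiency:4.2f}")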

  4. Profugus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Thomas; Hamilton, Steven; Slattery, Stuart

    Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is a production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.

  5. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  6. Large-Scale Computation of Nuclear Magnetic Resonance Shifts for Paramagnetic Solids Using CP2K.

    PubMed

    Mondal, Arobendo; Gaultois, Michael W; Pell, Andrew J; Iannuzzi, Marcella; Grey, Clare P; Hutter, Jürg; Kaupp, Martin

    2018-01-09

    Large-scale computations of nuclear magnetic resonance (NMR) shifts for extended paramagnetic solids (pNMR) are reported using the highly efficient Gaussian-augmented plane-wave implementation of the CP2K code. Combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals, contact, pseudocontact, and orbital-shift contributions to pNMR shifts are accessible. Due to the efficient and highly parallel performance of CP2K, a wide variety of materials with large unit cells can be studied with extended Gaussian basis sets. Validation of various approaches for the different contributions to pNMR shifts is done first for molecules in a large supercell in comparison with typical quantum-chemical codes. This is then extended to a detailed study of g-tensors for extended solid transition-metal fluorides and for a series of complex lithium vanadium phosphates. Finally, lithium pNMR shifts are computed for Li3V2(PO4)3, for which detailed experimental data are available. This has allowed an in-depth study of different approaches (e.g., full periodic versus incremental cluster computations of g-tensors and different functionals and basis sets for hyperfine computations) as well as a thorough analysis of the different contributions to the pNMR shifts. This study paves the way for a more widespread computational treatment of NMR shifts for paramagnetic materials.

  7. Aerodynamic Interference Due to MSL Reaction Control System

    NASA Technical Reports Server (NTRS)

    Dyakonov, Artem A.; Schoenenberger, Mark; Scallion, William I.; VanNorman, John W.; Novak, Luke A.; Tang, Chun Y.

    2009-01-01

    An investigation of the effectiveness of the reaction control system (RCS) of the Mars Science Laboratory (MSL) entry capsule during atmospheric flight has been conducted. The reason for the investigation is that MSL is designed to fly a lifting, actively guided entry with hypersonic bank maneuvers; therefore, an understanding of RCS effectiveness is required. In the course of the study several jet configurations were evaluated using the Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code, the Data Parallel Line Relaxation (DPLR) code, the Fully Unstructured 3D (FUN3D) code, and the Overset Grid Flowsolver (OVERFLOW) code. Computations indicated that some of the proposed configurations might induce aero-RCS interactions sufficient to impede and even overwhelm the intended control torques. It was found that the maximum potential for aero-RCS interference exists around peak dynamic pressure along the trajectory. The present analysis relies largely on computational methods. Ground testing, flight data, and computational analyses are required to fully understand the problem. At the time of this writing, some experimental work spanning the Mach number range 2.5 through 4.5 has been completed and used to establish preliminary levels of confidence for the computations. As a result of the present work, a final RCS configuration has been designed so as to minimize aero-interference effects, and it is the design baseline for the MSL entry capsule.

  8. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 2. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  9. Fast H.264/AVC FRExt intra coding using belief propagation.

    PubMed

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods in which the complexity depends upon the coded sequence.
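
    A minimal sketch of the general idea, assuming invented mode probabilities and a toy rate-distortion cost: rank the candidate Intra prediction modes by their estimated probability and run the expensive RD evaluation only on the most likely subset. The actual algorithm estimates these probabilities with belief propagation, which is not reproduced here.

        # Hedged sketch: evaluate RD cost only for the most probable prediction modes.
        def reduced_mode_set(mode_probs, mass=0.9):
            """Smallest set of modes whose estimated probability sums to `mass`."""
            ranked = sorted(mode_probs, key=mode_probs.get, reverse=True)
            chosen, total = [], 0.0
            for m in ranked:
                chosen.append(m)
                total += mode_probs[m]
                if total >= mass:
                    break
            return chosen

        def best_mode(mode_probs, rd_cost):
            candidates = reduced_mode_set(mode_probs)
            return min(candidates, key=rd_cost)     # RD optimization on the reduced set only

        probs = {"DC": 0.35, "vertical": 0.30, "horizontal": 0.20,
                 "diag_down_left": 0.10, "diag_down_right": 0.05}   # invented estimates
        print(best_mode(probs, rd_cost=lambda m: len(m)))           # toy cost: shorter name = cheaper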

  10. Computations in turbulent flows and off-design performance predictions for airframe-integrated scramjets

    NASA Technical Reports Server (NTRS)

    Goglia, G. L.; Spiegler, E.

    1977-01-01

    The research activity focused on two main tasks: (1) the further development of the SCRAM program and, in particular, the addition of a procedure for modeling the mechanism of the internal adjustment process of the flow, in response to the imposed thermal load across the combustor and (2) the development of a numerical code for the computation of the variation of concentrations throughout a turbulent field, where finite-rate reactions occur. The code also includes an estimation of the effect of the phenomenon called 'unmixedness'.

  11. Numerical computation of viscous flow around bodies and wings moving at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.

    1984-01-01

    Research in aerodynamics is discussed. The development of equilibrium air curve fits; computation of hypersonic rarefied leading-edge flows; computation of 2-D and 3-D blunt body laminar flows with an impinging shock; development of a two-dimensional or axisymmetric real-gas blunt body code; a study of an over-relaxation procedure for the MacCormack finite-difference scheme; computation of 2-D blunt body turbulent flows with an impinging shock; computation of supersonic viscous flow over delta wings at high angles of attack; and computation of the Space Shuttle Orbiter flowfield are discussed.

  12. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.

  13. User's Manual for FEMOM3DR. Version 1.0

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    FEMoM3DR is a computer code written in FORTRAN 77 to compute the radiation characteristics of antennas on a 3D body using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code is written to handle different feeding structures such as coaxial line, rectangular waveguide, and circular waveguide. This code uses tetrahedral elements with vector edge basis functions for the FEM and triangular elements with roof-top basis functions for the MoM. By virtue of the FEM, this code can handle arbitrarily shaped three-dimensional bodies with inhomogeneous lossy materials, and, due to the MoM, the computational domain can be terminated in any arbitrary shape. The User's Manual is written to make the user acquainted with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.

  14. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGrail, B.P.; Mahoney, L.A.

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.

  15. User's manual for a material transport code on the Octopus Computer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.; Mendez, G.D.

    1978-09-15

    A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.

  16. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.

  17. Three-dimensional computational aerodynamics in the 1980's

    NASA Technical Reports Server (NTRS)

    Lomax, H.

    1978-01-01

    The future requirements for constructing codes that can be used to compute three-dimensional flows about aerodynamic shapes should be assessed in light of the constraints imposed by future computer architectures and the reality of usable algorithms that can provide practical three-dimensional simulations. On the hardware side, vector processing is inevitable in order to meet the CPU speeds required. To cope with three-dimensional geometries, massive data bases with fetch/store conflicts and transposition problems are inevitable. On the software side, codes must be prepared that: (1) can be adapted to complex geometries, (2) can (at the very least) predict the location of laminar and turbulent boundary layer separation, and (3) will converge rapidly to sufficiently accurate solutions.

  18. Applications of potential theory computations to transonic aeroelasticity

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1986-01-01

    Unsteady aerodynamic and aeroelastic stability calculations based upon transonic small disturbance (TSD) potential theory are presented. Results from the two-dimensional XTRAN2L code and the three-dimensional XTRAN3S code are compared with experiment to demonstrate the ability of TSD codes to treat transonic effects. The necessity of nonisentropic corrections to transonic potential theory is demonstrated. Dynamic computational effects resulting from the choice of grid and boundary conditions are illustrated. Unsteady airloads for a number of parameter variations including airfoil shape and thickness, Mach number, frequency, and amplitude are given. Finally, samples of transonic aeroelastic calculations are given. A key observation is the extent to which unsteady transonic airloads calculated by inviscid potential theory may be treated in a locally linear manner.

  19. Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.

    2001-01-01

    An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36-inch-chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.

  20. PURDU-WINCOF: A computer code for establishing the performance of a fan-compressor unit with water ingestion

    NASA Technical Reports Server (NTRS)

    Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.

  1. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration, the working fluid being a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.

  2. Performance analysis of three dimensional integral equation computations on a massively parallel computer. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Logan, Terry G.

    1994-01-01

    The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray-YMP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Some performance results are obtained on the CM-5 with 32, 62, and 128 nodes, along with those on the Cray-YMP with a single processor. The comparison of the performance indicates that the parallel CM-FORTRAN code nearly matches or outperforms the equivalent serial FORTRAN code for some cases.

  3. Computer-Integrated Manufacturing Technology. Tech Prep Competency Profile.

    ERIC Educational Resources Information Center

    Lakeland Tech Prep Consortium, Kirtland, OH.

    This tech prep competency profile covers these occupations: manufacturing technician, computer-assisted design and drafting (CADD) technician, quality technician, and mechanical technician. Section 1 provides occupation definitions. Section 2 lists development committee members. Section 3 provides the leveling codes---abbreviations for grade level…

  4. Computer Description of the M561 Utility Truck

    DTIC Science & Technology

    1984-10-01

    This report documents the combinatorial geometry (Com-Geom) description of the M561 utility truck, which is used as input to the Geometric Information for Targets (GIFT) computer code to generate target vulnerability data for Army Spare Components Requirements for Combat (SPARC) sustainability predictions.

  5. f1: a code to compute Appell's F1 hypergeometric function

    NASA Astrophysics Data System (ADS)

    Colavecchia, F. D.; Gasaneo, G.

    2004-02-01

    In this work we present the FORTRAN code to compute the hypergeometric function F1( α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables { x, y}, and complex values of the parameters { α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summaryTitle of the program: f1 Catalogue identifier: ADSJ Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSJ Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computers: PC compatibles, SGI Origin2∗ Operating system under which the program has been tested: Linux, IRIX Programming language used: Fortran 90 Memory required to execute with typical data: 4 kbytes No. of bits in a word: 32 No. of bytes in distributed program, including test data, etc.: 52 325 Distribution format: tar gzip file External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: Numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for | x|<1 and | y|<1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the precedent cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. Also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables { x, y}. Also, there are some parameter domains that are not covered. These usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s in a Athlon 900 MHz processor.
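
    For the convergence region |x| < 1 and |y| < 1, the double-series definition of F1 can be summed directly; the sketch below does exactly that, and it is only the near-origin branch, without the analytic continuations, special cases, or ODE integration that the published f1 code provides.

        # Hedged sketch: truncated double series for Appell's F1, valid for |x| < 1 and |y| < 1.
        from math import factorial

        def poch(a, k):
            """Rising factorial (Pochhammer symbol) (a)_k = a (a+1) ... (a+k-1)."""
            result = 1.0
            for i in range(k):
                result *= a + i
            return result

        def appell_f1_series(a, b1, b2, c, x, y, terms=60):
            total = 0.0
            for m in range(terms):
                for n in range(terms):
                    total += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                              / (poch(c, m + n) * factorial(m) * factorial(n))
                              * x**m * y**n)
            return total

        # With b2 = 0, F1 reduces to the Gauss function 2F1(a, b1; c; x);
        # for a = b1 = 1, c = 2, x = 0.3 the exact value is -ln(0.7)/0.3 ≈ 1.1889.
        print(appell_f1_series(1.0, 1.0, 0.0, 2.0, 0.3, 0.5))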

  6. Tough2{_}MP: A parallel version of TOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris

    2003-04-09

    TOUGH2{_}MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2{_}MP and discuss its basic features, modules, and their applications.

  7. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C

    2016-06-01

    Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.
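
    A highly simplified sketch of the scoring idea, under assumed weights and scores (the SOC codes, classifier names, weights, and threshold below are invented for illustration; SOCcer itself learns its weights from expert-coded jobs via a logistic model): combine per-classifier scores for each candidate code, assign the top-scoring code, and flag low-scoring assignments for manual review.

        # Hedged sketch: weighted combination of classifier scores per candidate SOC code.
        def assign_soc(candidate_scores, weights, review_threshold=0.5):
            def combined(code):
                parts = candidate_scores[code]
                return sum(weights[name] * parts.get(name, 0.0) for name in weights)
            best = max(candidate_scores, key=combined)
            return best, combined(best), combined(best) < review_threshold

        scores = {
            "47-2111": {"title": 0.9, "task": 0.7, "industry": 0.6},   # illustrative scores
            "49-9051": {"title": 0.4, "task": 0.5, "industry": 0.3},
        }
        weights = {"title": 0.5, "task": 0.3, "industry": 0.2}         # assumed, not learned
        code, score, needs_review = assign_soc(scores, weights)
        print(code, round(score, 2), "review" if needs_review else "auto")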

  8. Multi-zonal Navier-Stokes code with the LU-SGS scheme

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Yoon, S.

    1993-01-01

    The LU-SGS (lower upper symmetric Gauss Seidel) algorithm has been implemented into the Compressible Navier-Stokes, Finite Volume (CNSFV) code and validated with a multizonal Navier-Stokes simulation of a transonic turbulent flow around an Onera M6 transport wing. The convergence rate and robustness of the code have been improved and the computational cost has been reduced by at least a factor of 2 over the diagonal Beam-Warming scheme.

  9. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature.

  10. Adiabatic topological quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cesare, Chris; Landahl, Andrew J.; Bacon, Dave

    Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev’s surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.

  11. Adiabatic topological quantum computing

    DOE PAGES

    Cesare, Chris; Landahl, Andrew J.; Bacon, Dave; ...

    2015-07-31

    Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev’s surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.

  12. A combined Fuzzy and Naive Bayesian strategy can be used to assign event codes to injury narratives.

    PubMed

    Marucci-Wellman, H; Lehto, M; Corns, H

    2011-12-01

    Bayesian methods show promise for classifying injury narratives from large administrative datasets into cause groups. This study examined a combined approach where two Bayesian models (Fuzzy and Naïve) were used to either classify a narrative or select it for manual review. Injury narratives were extracted from claims filed with a worker's compensation insurance provider between January 2002 and December 2004. Narratives were separated into a training set (n=11,000) and prediction set (n=3,000). Expert coders assigned two-digit Bureau of Labor Statistics Occupational Injury and Illness Classification event codes to each narrative. Fuzzy and Naïve Bayesian models were developed using manually classified cases in the training set. Two semi-automatic machine coding strategies were evaluated. The first strategy assigned cases for manual review if the Fuzzy and Naïve models disagreed on the classification. The second strategy selected additional cases for manual review from the Agree dataset using prediction strength to reach a level of 50% computer coding and 50% manual coding. When agreement alone was used as the filtering strategy, the majority were coded by the computer (n=1,928, 64%) leaving 36% for manual review. The overall combined (human plus computer) sensitivity was 0.90 and positive predictive value (PPV) was >0.90 for 11 of 18 2-digit event categories. Implementing the 2nd strategy improved results with an overall sensitivity of 0.95 and PPV >0.90 for 17 of 18 categories. A combined Naïve-Fuzzy Bayesian approach can classify some narratives with high accuracy and identify others most beneficial for manual review, reducing the burden on human coders.
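
    A toy sketch of the first filtering strategy, with two stub classifiers standing in for the Fuzzy and Naïve Bayesian models (the keyword rules and event-code labels are invented): a narrative is machine-coded only when the two models agree, and is otherwise routed to a human coder.

        # Hedged sketch: agreement-based filtering between two classifiers.
        def fuzzy_model(narrative):
            # Stand-in for the Fuzzy Bayesian classifier.
            return "fall to lower level" if "ladder" in narrative else "overexertion"

        def naive_model(narrative):
            # Stand-in for the Naive Bayesian classifier.
            return "fall to lower level" if "fell" in narrative else "overexertion"

        def route(narrative):
            a, b = fuzzy_model(narrative), naive_model(narrative)
            if a == b:
                return ("computer-coded", a)      # models agree: keep the machine code
            return ("manual review", None)        # models disagree: send to a human coder

        print(route("worker fell from ladder while painting"))   # agreement -> machine coded
        print(route("worker fell while lifting boxes"))          # disagreement -> manual review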

  13. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle

    2009-01-01

    JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier-Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.

  14. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
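
    As a rough illustration of the underlying pair counting, the sketch below evaluates a three-dimensional two-point correlation function with the simple natural estimator xi(r) = DD(r)/RR(r) - 1 on toy uniform points; the code described above does this with parallel C extensions and adds jackknife/bootstrap errors and HEALPix-based subsampling, none of which is reproduced here.

        # Hedged sketch: brute-force pair counting for a 3D two-point correlation function.
        import numpy as np

        def pair_counts(points, edges):
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            d = d[np.triu_indices(len(points), k=1)]        # unique pairs only
            counts, _ = np.histogram(d, bins=edges)
            return counts

        rng = np.random.default_rng(0)
        data = rng.uniform(0.0, 1.0, size=(500, 3))         # toy "data" points
        rand = rng.uniform(0.0, 1.0, size=(500, 3))         # toy random catalog
        edges = np.linspace(0.05, 0.5, 10)                  # separation bins

        dd, rr = pair_counts(data, edges), pair_counts(rand, edges)
        xi = dd / np.maximum(rr, 1) - 1.0                   # ~0 everywhere for unclustered data
        print(np.round(xi, 3))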

  15. [Introduction of a bar coding pharmacy stock replenishment system in a prehospital emergency medical unit: economical impact].

    PubMed

    Dupuis, S; Fecci, J-L; Noyer, P; Lecarpentier, E; Chollet-Xémard, C; Margenet, A; Marty, J; Combes, X

    2009-01-01

    To assess the economic impact of introducing a bar-code pharmacy stock replenishment system in a prehospital emergency medical unit. Observational before-and-after study. A computer system using specific software and bar-code technology was introduced in the prehospital emergency medical unit (Smur). Overall activity and pharmacy-related costs were recorded annually during two periods: the 2-year period before the computer system was introduced and the 4 years following its installation. Overall clinical activity increased by 10% between the two periods, whereas pharmacy-related costs decreased continuously after the pharmacy management computer system came into use. Pharmacy stock management was easier after introduction of the new stock replenishment system. The mean pharmacy-related cost per patient was 13 Euros before and 9 Euros after the introduction of the system. The overall cost savings during the study period were calculated to reach 134,000 Euros. The introduction of a specific pharmacy management computer system allowed substantial cost savings in a prehospital emergency medical unit.

  16. Knowledge management: Role of the Radiation Safety Information Computational Center (RSICC)

    NASA Astrophysics Data System (ADS)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  17. ICAM (Conceptual Design for Computer-Integrated Manufacturing. Volume 2. Part 6. Task B - Establishment of the Factory of the Future Conceptual Framework Conceptual Framework Document, (MMR)

    DTIC Science & Technology

    1984-06-29

    effort that requires hard copy documentation. As a result, there are generally numerous delays in providing current quality information. In the FoF...process have had fixed controls or were based on "hard-coded" information. A template, for example, is hard-coded information defining the shape of a...represents soft-coded control information. (Although manual handling of punch tapes still possesses some of the limitations of "hard-coded" controls

  18. For Whom Is a Picture Worth a Thousand Words? Extensions of a Dual-Coding Theory of Multimedia Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Sims, Valerie K.

    1994-01-01

    In 2 experiments, 162 high- and low-spatial ability students viewed a computer-generated animation and heard a concurrent or successive explanation. The concurrent group generated more creative solutions to transfer problems and demonstrated a contiguity effect consistent with dual-coding theory. (SLD)

  19. 17 CFR 232.106 - Prohibition against electronic submissions containing executable code.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Prohibition against electronic submissions containing executable code. 232.106 Section 232.106 Commodity and Securities Exchanges SECURITIES... Filer Manual section also may be a violation of the Computer Fraud and Abuse Act of 1986, as amended...

  20. 17 CFR 232.106 - Prohibition against electronic submissions containing executable code.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Prohibition against electronic submissions containing executable code. 232.106 Section 232.106 Commodity and Securities Exchanges SECURITIES... Filer Manual section also may be a violation of the Computer Fraud and Abuse Act of 1986, as amended...

  1. 17 CFR 232.106 - Prohibition against electronic submissions containing executable code.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Prohibition against electronic submissions containing executable code. 232.106 Section 232.106 Commodity and Securities Exchanges SECURITIES... Filer Manual section also may be a violation of the Computer Fraud and Abuse Act of 1986, as amended...

  2. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by removing the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.

  3. Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method

    NASA Technical Reports Server (NTRS)

    Friedlander, David Joshua; Wang, Xiao-Yen J.

    2014-01-01

    This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, and preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributed to the control surface stencil resolution and choice of stencil end points, and not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed in order to gain insight on whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that, while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.

  4. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
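    The one-way analysis of variance mentioned in the abstract above is a standard procedure; the sketch below shows, under assumed and purely hypothetical issue-time numbers (not the study's data), how such a comparison of three periods could be run with SciPy.

```python
# Minimal sketch of a one-way ANOVA comparing issue times under three conditions:
# manual system, 1 month after bar-code implementation, 4 months after.
# The numbers below are hypothetical placeholders, not data from the study.
from scipy import stats

manual      = [12.1, 10.8, 11.5, 13.0, 12.4]   # minutes per order, manual system
barcode_1mo = [14.2, 15.0, 13.8, 14.9, 15.3]   # one month after implementation
barcode_4mo = [13.5, 13.9, 14.1, 13.2, 14.4]   # four months after implementation

f_stat, p_value = stats.f_oneway(manual, barcode_1mo, barcode_4mo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would indicate a significant difference in mean
# issue time between at least two of the three periods.
```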

  5. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  6. Inlet flowfield investigation. Part 2: Computation of the flow about a supercruise forebody at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Paynter, G. C.; Salemann, V.; Strom, E. E. I.

    1984-01-01

    A numerical procedure which solves the parabolized Navier-Stokes (PNS) equations on a body-fitted mesh was used to compute the flow about the forebody of an advanced tactical supercruise fighter configuration in an effort to explore the use of a PNS method for the design of supersonic cruise forebody geometries. Forebody flow fields were computed at Mach numbers of 1.5, 2.0, and 2.5, and at angles of attack of 0 deg, 4 deg, and 8 deg at each Mach number. Computed results are presented at several body stations and include contour plots of Mach number, total pressure, upwash angle, sidewash angle, and cross-plane velocity. The computational analysis procedure was found reliable for evaluating forebody flow fields of advanced aircraft configurations for flight conditions where the vortex shed from the wing leading edge is not a dominant flow phenomenon. Static pressure distributions and boundary layer profiles on the forebody and wing were surveyed in a wind tunnel test, and the analytical results are compared to the data. The current status of the parabolized flow field code is described along with desirable improvements to the code.

  7. TOPAZ2D heat transfer code users manual and thermal property data base

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.B.; Edwards, A.L.

    1990-05-01

    TOPAZ2D is a two dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with The TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady state or transient temperature field on two dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time and temperature dependent boundary conditions can be specified including temperature, flux, convection, and radiation. Time or temperature dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D, which reads the temperature state data calculated by TOPAZ2D. A three dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.

  8. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
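    The global-then-local coupling described above (a stochastic global search seeding a Levenberg-Marquardt-style refinement) can be illustrated generically. The sketch below is only an illustration of that hybrid strategy on a toy calibration problem, not the MADS implementation; since SciPy does not ship a particle-swarm optimizer, differential evolution stands in for the global step, and all function and parameter names are invented for the example.

```python
# Generic sketch of a global-then-local calibration strategy:
# a stochastic global search seeds a Levenberg-Marquardt-style local refinement.
# Illustrative only; this is not the MADS code.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def residuals(params, t, observed):
    """Misfit between a toy 2-parameter decay model and observations."""
    amplitude, rate = params
    predicted = amplitude * np.exp(-rate * t)
    return predicted - observed

# Hypothetical observations (synthetic data generated for the example).
t = np.linspace(0.0, 10.0, 25)
observed = 3.0 * np.exp(-0.4 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Step 1: global exploration of the bounded parameter space.
bounds = [(0.1, 10.0), (0.01, 2.0)]
global_fit = differential_evolution(
    lambda p: np.sum(residuals(p, t, observed) ** 2), bounds, seed=0)

# Step 2: gradient-based Levenberg-Marquardt refinement from the global optimum.
local_fit = least_squares(residuals, global_fit.x, args=(t, observed), method="lm")
print("global estimate:", global_fit.x, "refined estimate:", local_fit.x)
```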

  9. Development of numerical methods for overset grids with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1995-01-01

    Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.

  10. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models is written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  11. ISSYS: An integrated synergistic Synthesis System

    NASA Technical Reports Server (NTRS)

    Dovi, A. R.

    1980-01-01

    Integrated Synergistic Synthesis System (ISSYS), an integrated system of computer codes in which the sequence of program execution and data flow is controlled by the user, is discussed. The commands available to exert such control, the ISSYS major function and rules, and the computer codes currently available in the system are described. Computational sequences frequently used in the aircraft structural analysis and synthesis are defined. External computer codes utilized by the ISSYS system are documented. A bibliography on the programs is included.

  12. ICCE/ICCAI 2000 Full & Short Papers (Others).

    ERIC Educational Resources Information Center

    2000

    This document contains the following full and short papers from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Code Restructuring Tool To Help Scaffold Novice Programmers" (Stuart Garner); (2) "An Assessment Framework for Information Technology Integrated…

  13. Preclinical assessment of dopaminergic system in rats by MicroPET using three positron-emitting radiopharmaceuticals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lara-Camacho, V. M., E-mail: victormlc13@hotmail.com; Ávila-García, M. C.; Ávila-Rodríguez, M. A.

    Different diseases associated with dysfunction of the dopaminergic system, such as Parkinson's disease, Alzheimer's disease, and schizophrenia, are being widely studied with positron emission tomography (PET), a noninvasive method useful for assessing the stage of these illnesses. In our facility we have recently implemented the production of [¹¹C]-DTBZ, [¹¹C]-RAC, and [¹⁸F]-FDOPA, which are among the most common PET radiopharmaceuticals used in neurology applications to obtain information about the dopamine pathways. In this study two healthy rats were imaged with each of those radiotracers in order to confirm selective striatum uptake as a proof of principle before releasing them for human use.

  14. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem, since large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  15. Interactive Synthesis of Code Level Security Rules

    DTIC Science & Technology

    2017-04-01

    Interactive Synthesis of Code-Level Security Rules A Thesis Presented by Leo St. Amour to The Department of Computer Science in partial fulfillment...of the requirements for the degree of Master of Science in Computer Science Northeastern University Boston, Massachusetts April 2017 DISTRIBUTION...Abstract of the Thesis Interactive Synthesis of Code-Level Security Rules by Leo St. Amour Master of Science in Computer Science Northeastern University

  16. Agricultural Spraying

    NASA Technical Reports Server (NTRS)

    1986-01-01

    AGDISP, a computer code written for Langley by Continuum Dynamics, Inc., aids crop dusting airplanes in targeting pesticides. The code is commercially available and can be run on a personal computer by an inexperienced operator. Called SWA+H, it is used by the Forest Service, FAA, DuPont, etc. DuPont uses the code to "test" equipment on the computer using a laser system to measure particle characteristics of various spray compounds.

  17. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    NASA Astrophysics Data System (ADS)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code essentially only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of a CUDA-optimized implementation (about ×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
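    For readers unfamiliar with the case-study algorithm, the sketch below gives a plain serial Python version of D8 flow accumulation: each cell drains to the steepest of its eight neighbors, and accumulation counts how many upstream cells drain through each cell. It is a minimal reference illustration of the algorithm being accelerated, not the paper's CPU, OpenACC, or CUDA code.

```python
# Minimal serial sketch of D8 flow accumulation (O'Callaghan & Mark style).
# Each cell drains to its steepest downslope neighbor among the 8 adjacent cells;
# accumulation counts the cells (including itself) draining through each cell.
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_directions(dem):
    rows, cols = dem.shape
    receiver = -np.ones((rows, cols, 2), dtype=int)  # downslope neighbor of each cell
    for r in range(rows):
        for c in range(cols):
            best_drop = 0.0
            for dr, dc in NEIGHBORS:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, receiver[r, c] = drop, (nr, nc)
    return receiver

def d8_flow_accumulation(dem):
    receiver = d8_flow_directions(dem)
    acc = np.ones(dem.shape, dtype=int)
    # Visit cells from highest to lowest so every donor is handled before its receiver.
    order = np.dstack(np.unravel_index(np.argsort(-dem, axis=None), dem.shape))[0]
    for r, c in order:
        nr, nc = receiver[r, c]
        if nr >= 0:  # cell has a downslope receiver (not a pit or flat)
            acc[nr, nc] += acc[r, c]
    return acc

dem = np.array([[5.0, 4.0, 3.0],
                [4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0]])
print(d8_flow_accumulation(dem))   # accumulation grows toward the low corner
```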

  18. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  19. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.

    We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.

  20. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter

    DTIC Science & Technology

    2007-08-31

    [List-of-figures excerpt] Fields as a function of latitude for 3 different grid spacings; low-altitude and high-altitude fields produced by 10-kHz and 20-kHz sources computed using the FD and TD codes, with the agreement described as excellent, validating the new FD code.

  1. ASTROP2 users manual: A program for aeroelastic stability analysis of propfans

    NASA Technical Reports Server (NTRS)

    Narayanan, G. V.; Kaza, K. R. V.

    1991-01-01

    A user's manual is presented for the aeroelastic stability and response of propulsion systems computer program called ASTROP2. The ASTROP2 code performs aeroelastic stability analysis of rotating propfan blades. This analysis uses a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal-mode structural model. Analytical stability results from this code are compared with published experimental results for a rotating composite advanced turboprop model and for a nonrotating metallic wing model.

  2. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
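    The benefit of the DPCM transformation described above is easy to demonstrate: differencing a smooth image leaves mostly small values, which dictionary/entropy coders compress far better than the raw pixels. The sketch below uses zlib (DEFLATE, an LZ77-plus-Huffman compressor from the Python standard library) as a stand-in for the specific LZW and Huffman coders discussed in the article, and a synthetic image invented for the example; it only illustrates the effect, not the article's measurements.

```python
# Sketch: lossless compression with and without a DPCM (differencing) step.
# zlib (DEFLATE: LZ77 + Huffman) stands in for the LZW/Huffman coders discussed
# above; the synthetic "image" is a smooth gradient plus a little noise.
import zlib
import numpy as np

rng = np.random.default_rng(0)
base = np.add.outer(np.arange(512) // 4, np.arange(512) // 4)
image = np.clip(base + rng.integers(0, 4, (512, 512)), 0, 255).astype(np.uint8)

# Row-wise DPCM, stored modulo 256 so it stays 8-bit and remains invertible:
# keep the first pixel of each row, then differences between horizontal neighbors.
dpcm = np.empty_like(image)
dpcm[:, 0] = image[:, 0]
dpcm[:, 1:] = (image[:, 1:].astype(np.int16) - image[:, :-1].astype(np.int16)) % 256

raw_size  = len(zlib.compress(image.tobytes(), 9))
dpcm_size = len(zlib.compress(dpcm.tobytes(), 9))
print(f"compressed raw image : {raw_size} bytes")
print(f"compressed DPCM image: {dpcm_size} bytes")   # typically noticeably smaller
```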

  3. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  4. Functional Requirements of a Target Description System for Vulnerability Analysis

    DTIC Science & Technology

    1979-11-01

    called GIFT.1,2 Together the COMGEOM description model and GIFT codes make up the BRL's target description system. The significance of a target...and modifying target descriptions are described. 1 Lawrence W. Bain, Jr. and Mathew J. Reisinger, "The GIFT Code User Manual; Volume 1..." "The GIFT Code User Manual; Volume II, The Output Options," unpublished draft of BRL report. II. UNDERLYING PHILOSOPHY The BRL has a computer

  5. Analysis of airborne antenna systems using geometrical theory of diffraction and moment method computer codes

    NASA Technical Reports Server (NTRS)

    Hartenstein, Richard G., Jr.

    1985-01-01

    Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computed results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.

  6. Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.

    ERIC Educational Resources Information Center

    Computing Teacher, 1987

    1987-01-01

    Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…

  7. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  8. Computer Science in High School Graduation Requirements. ECS Education Trends

    ERIC Educational Resources Information Center

    Zinth, Jennifer Dounay

    2015-01-01

    Computer science and coding skills are widely recognized as a valuable asset in the current and projected job market. The Bureau of Labor Statistics projects 37.5 percent growth from 2012 to 2022 in the "computer systems design and related services" industry--from 1,620,300 jobs in 2012 to an estimated 2,229,000 jobs in 2022. Yet some…

  9. Effectiveness of Various Computer-Based Instructional Strategies in Language Teaching. Final Report, November 1, 1969-August 31, 1970.

    ERIC Educational Resources Information Center

    Van Campen, Joseph A.

    Computer software for programed language instruction, developed in the second quarter of 1970 at Stanford's Institute for Mathematical Studies in the Social Sciences is described in this report. The software includes: (1) a PDP-10 computer assembly language for generating drill sentences; (2) a coding system allowing a large number of sentences to…

  10. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.

  11. Validation of numerical solvers for liquid metal flow in a complex geometry in the presence of a strong magnetic field

    NASA Astrophysics Data System (ADS)

    Patel, Anita; Pulugundla, Gautam; Smolentsev, Sergey; Abdou, Mohamed; Bhattacharyay, Rajendraprasad

    2018-04-01

    Following the magnetohydrodynamic (MHD) code validation and verification proposal by Smolentsev et al. (Fusion Eng Des 100:65-72, 2015), we perform code-to-code and code-to-experiment comparisons between two computational solvers, FLUIDYN and HIMAG, which are presently considered as two of the prospective CFD tools for fusion blanket applications. In such applications, an electrically conducting breeder/coolant circulates in the blanket ducts in the presence of a strong plasma-confining magnetic field at high Hartmann numbers Ha (Ha² is the ratio between electromagnetic and viscous forces) and high interaction parameters N (N is the ratio of electromagnetic to inertial forces). The main objective of this paper is to provide the scientific and engineering community with common references to assist fusion researchers in the selection of adequate computational means to be used for blanket design and analysis. As an initial validation case, the two codes are applied to the classic problem of a laminar fully developed MHD flow in a rectangular duct. Both codes demonstrate very good agreement with the analytical solution for Ha up to 15,000. To address the capabilities of the two codes to properly resolve complex-geometry flows, we consider a case of three-dimensional developing MHD flow in a geometry comprising a series of interconnected electrically conducting rectangular ducts. The computed electric potential distributions for two flows (Case A: Ha = 515, N = 3.2; Case B: Ha = 2059, N = 63.8) are in very good agreement with the experimental data, while the comparisons for the MHD pressure drop are still unsatisfactory. To better interpret the observed differences, the obtained numerical data are analyzed against earlier theoretical and experimental studies for flows that involve changes in the relative orientation between the flow and the magnetic field.
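    For reference, the two dimensionless groups quoted above have the standard definitions written below. The symbols are assumptions added for this note (B the applied magnetic field, L a characteristic length, U a characteristic velocity, and σ, ρ, ν the fluid's electrical conductivity, density, and kinematic viscosity); the abstract itself only states the force ratios they represent.

```latex
% Standard definitions of the Hartmann number Ha and interaction parameter N
% (symbols B, L, U, \sigma, \rho, \nu are assumed for this note).
\[
  Ha = B L \sqrt{\frac{\sigma}{\rho\,\nu}}, \qquad
  N  = \frac{\sigma B^{2} L}{\rho\, U} = \frac{Ha^{2}}{Re}, \qquad
  Re = \frac{U L}{\nu}.
\]
```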

  12. Embedding Secure Coding Instruction into the IDE: Complementing Early and Intermediate CS Courses with ESIDE

    ERIC Educational Resources Information Center

    Whitney, Michael; Lipford, Heather Richter; Chu, Bill; Thomas, Tyler

    2018-01-01

    Many of the software security vulnerabilities that people face today can be remediated through secure coding practices. A critical step toward the practice of secure coding is ensuring that our computing students are educated on these practices. We argue that secure coding education needs to be included across a computing curriculum. We are…

  13. Comparisons of 'Identical' Simulations by the Eulerian Gyrokinetic Codes GS2 and GYRO

    NASA Astrophysics Data System (ADS)

    Bravenec, R. V.; Ross, D. W.; Candy, J.; Dorland, W.; McKee, G. R.

    2003-10-01

    A major goal of the fusion program is to be able to predict tokamak transport from first-principles theory. To this end, the Eulerian gyrokinetic code GS2 was developed years ago and continues to be improved [1]. Recently, the Eulerian code GYRO was developed [2]. These codes are not subject to the statistical noise inherent to particle-in-cell (PIC) codes, and have been very successful in treating electromagnetic fluctuations. GS2 is fully spectral in the radial coordinate while GYRO uses finite-differences and "banded" spectral schemes. To gain confidence in nonlinear simulations of experiment with these codes, "apples-to-apples" comparisons (identical profile inputs, flux-tube geometry, two species, etc.) are first performed. We report on a series of linear and nonlinear comparisons (with overall agreement) including kinetic electrons, collisions, and shaped flux surfaces. We also compare nonlinear simulations of a DIII-D discharge to measurements of not only the fluxes but also the turbulence parameters. [1] F. Jenko, et al., Phys. Plasmas 7, 1904 (2000) and refs. therein. [2] J. Candy, J. Comput. Phys. 186, 545 (2003).

  14. The coupling of fluids, dynamics, and controls on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher

    1995-01-01

    This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.

  15. [Footwear according to the "business dress code", and the health condition of women's feet--computer-assisted holistic evaluation].

    PubMed

    Lorkowski, Jacek; Mrzygłód, Mirosław; Kotela, Ireneusz; Kiełbasiewicz-Lorkowska, Ewa; Teul, Iwona

    2013-01-01

    According to the verdict of the Supreme Court in 2005, an employer may dismiss an employee if their conduct (including dress) exposes the employer to losses or threatens his interests. The aim of the study was a holistic assessment of the pleiotropic effects of high-heeled, pointed shoes on the health condition of the feet of women who wear them at work in accordance with the existing rules of the "business dress code". A holistic multidisciplinary analysis was performed. It takes into account: 1) women employed by banks and other large corporations (82 persons); 2) a 2D FEM computer model, developed by the authors, of a foot deformed by pointed high-heeled shoes; 3) web sites found after entering the phrase "business dress code". Over 60% of women in the office wore high-heeled shoes. The following was found among people walking to work in high heels: 1) a reduction in quality of life in about 70% of cases, through periodic occurrence of pain and reduction of the functional capacity of the feet; 2) an at least twofold increase in the pressure on the plantar side of the forefoot; 3) continued effects of the forces deforming the forefoot. Conclusions: 1. An evolutionary change of "dress code" shoes is necessary in order to reduce the non-physiological overload of the feet and the resulting disability. 2. These changes are particularly urgent in patients with a so-called "sensitive foot".

  16. Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration

    2015-11-01

    The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where the application of very small (δB/B ~ 10⁻³) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable and rapidly convergent equilibrium code for use on parallel distributed-memory platforms. The parallelization task naturally splits into three distinct parts: 1) the radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block tridiagonal matrix needed to compute the full (3D) pre-conditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 still nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  17. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde, is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling-sensitive structures. With this open-ended, hierarchical, interactive computer code the user can access from his workstation programs of successively increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation of two-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  18. Computational simulation of acoustic fatigue for hot composite structures

    NASA Technical Reports Server (NTRS)

    Singhal, S. N.; Nagpal, V. K.; Murthy, P. L. N.; Chamis, C. C.

    1991-01-01

    This paper presents predictive methods/codes for computational simulation of the acoustic fatigue resistance of hot composite structures subjected to acoustic excitation emanating from an adjacent vibrating component. Select codes developed over the past two decades at the NASA Lewis Research Center are used. The codes include computation of (1) acoustic noise generated from a vibrating component, (2) degradation in material properties of the composite laminate at use temperature, (3) dynamic response of the acoustically excited hot multilayered composite structure, (4) degradation in the first-ply strength of the excited structure due to acoustic loading, and (5) acoustic fatigue resistance of the excited structure, including the propulsion environment. Effects of the laminate lay-up and environment on the acoustic fatigue life are evaluated. The results show that, by keeping the angled plies on the outer surface of the laminate, a substantial increase in the acoustic fatigue life is obtained. The effect of environment (temperature and moisture) is to relieve the residual stresses, leading to an increase in the acoustic fatigue life of the excited panel.

  19. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that makes it possible to compute the quadrupole and hexadecapole approximations of the finite-source magnification with more efficiency than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
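    As background for the approximations discussed above: the quantity being expanded is the standard point-source point-lens magnification, and the quadrupole and hexadecapole terms are finite-source corrections of order ρ² and ρ⁴ in the normalized source radius ρ. The snippet below evaluates only the well-known point-source magnification for a hypothetical event; the correction terms themselves, and the paper's accelerated evaluation scheme, are not reproduced here.

```python
# Point-source point-lens (PSPL) magnification, the quantity around which the
# quadrupole (order rho^2) and hexadecapole (order rho^4) finite-source
# corrections discussed above are built. The corrections and the paper's
# fast evaluation scheme are not reproduced in this sketch.
import numpy as np

def pspl_magnification(u):
    """Magnification of a point source at impact parameter u (in Einstein radii)."""
    u = np.asarray(u, dtype=float)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Example light curve for a hypothetical event with impact parameter u0 = 0.1
# and Einstein crossing time tE = 20 days.
t = np.linspace(-30.0, 30.0, 7)          # days relative to peak
u = np.hypot(0.1, t / 20.0)
print(np.round(pspl_magnification(u), 3))
```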

  20. Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin

    1993-01-01

    The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow and the results are compared to benchmark-quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.

  1. A computer program for processing impedance cardiographic data: Improving accuracy through user-interactive software

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet

    1988-01-01

    This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index, and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.

  2. Improved Boundary Layer Module (BLM) for the Solid Performance Program (SPP)

    NASA Astrophysics Data System (ADS)

    Coats, D. E.; Cebeci, T.

    1982-03-01

    The requirements for a replacement to the Bartz boundary layer code, the standard method of computing the performance loss due to viscous effects by the solid performance program, were discussed by the propulsion community along with four nationally recognized boundary layer experts. A consensus was reached regarding the preferred features for the analysis of the replacement code. The major points that were agreed upon are: (1) finite difference methods are preferred over integral methods; (2) a single equation eddy viscosity model was considered to be adequate for the purpose of computing performance loss; (3) a variable grid capability in both coordinate directions would be required; (4) a proven finite difference algorithm which is not stability restricted should be used, that is, an implicit numerical scheme would be required; and (5) the replacement code should be able to compute both turbulent and laminar flows. The program should treat mass addition at the wall as well as being able to calculate a stagnation point starting line.

  3. Debugging Techniques Used by Experienced Programmers to Debug Their Own Code.

    DTIC Science & Technology

    1990-09-01

    [Report documentation page keywords: code debugging, computer programmers, debug, programming.] ...Davis, and Schultz (1987) also compared experts and novices, but focused on the way a computer program is represented cognitively and how that...of theories in the emerging computer programming domain (Fisher, 1987). In protocol analysis, subjects are asked to talk/think aloud as they solve

  4. SciDAC-3: Searching for Physics Beyond the Standard Model, University of Arizona component, Year 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Doug

    2014-03-21

    The Arizona component of the SciDAC-3 Lattice Gauge Theory program consisted of partial support for a postdoctoral position. In the original budget this covered three fourths of a postdoc, but the University of Arizona changed its ERE rate for postdoctoral positions from 4.3% to 21%, so the support level was closer to two-thirds of a postdoc. The grant covered the work of postdoc Thomas Primer. Dr. Primer's first task was an urgent one, although it was not foreseen in our proposed work. It turned out that on the large lattices used in some of our current computations the gauge fixing code was not working as expected, and this revealed itself in inconsistent results in the correlators needed to compute the semileptonic form factors for K and D decays. Dr. Primer participated in the effort to understand this problem and to modify our codes to deal with the large lattices we are now generating (as large as 144³ x 288). Corrected code was incorporated into our standard codes, and workarounds that allow us to use the correlators already computed with the unexpected gauge fixing were implemented.

  5. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
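    The alternating-direction-implicit (ADI) idea mentioned above is easiest to see on a model problem. The sketch below applies a Peaceman-Rachford-style ADI step to 2D heat diffusion, treating one spatial direction implicitly per half-step so that each half-step reduces to independent tridiagonal solves; it is an illustration of the ADI concept only and is unrelated to the actual Proteus solver, which couples the full Navier-Stokes system.

```python
# Peaceman-Rachford ADI for the 2D heat equation u_t = alpha * (u_xx + u_yy)
# on a unit square with zero Dirichlet boundaries. Each half-step is implicit
# in one direction only, so it reduces to tridiagonal solves.
# Model-problem illustration of the ADI idea; not the Proteus scheme.
import numpy as np
from scipy.linalg import solve_banded

n, alpha, dt, steps = 64, 1.0, 1e-4, 50
h = 1.0 / (n + 1)
r = alpha * dt / (2.0 * h * h)

# Banded storage of the implicit tridiagonal operator (I - r * D2).
band = np.zeros((3, n))
band[0, 1:], band[1, :], band[2, :-1] = -r, 1.0 + 2.0 * r, -r

def second_diff(u, axis):
    """Second difference along one axis, zero Dirichlet values outside the grid."""
    pad = ((1, 1), (0, 0)) if axis == 0 else ((0, 0), (1, 1))
    p = np.pad(u, pad)
    if axis == 0:
        return p[:-2, :] - 2.0 * p[1:-1, :] + p[2:, :]
    return p[:, :-2] - 2.0 * p[:, 1:-1] + p[:, 2:]

u = np.zeros((n, n))
u[n // 4: 3 * n // 4, n // 4: 3 * n // 4] = 1.0   # initial hot square

for _ in range(steps):
    # Half-step 1: implicit in x (axis 0), explicit in y (axis 1).
    rhs = u + r * second_diff(u, 1)
    u_half = solve_banded((1, 1), band, rhs)
    # Half-step 2: implicit in y, explicit in x.
    rhs = u_half + r * second_diff(u_half, 0)
    u = solve_banded((1, 1), band, rhs.T).T

print(f"max temperature after {steps} steps: {u.max():.4f}")
```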

  6. A COTS-Based Replacement Strategy for Aging Avionics Computers

    DTIC Science & Technology

    2001-12-01

    [Diagram labels] Communication Control Unit; COTS-based replacement strategy for aging avionics computers; COTS microprocessor; real-time operating system; new native code; native code objects; native code thread; legacy function; virtual component environment; context switch thunk; add-in; replace.

  7. Drekar v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seefeldt, Ben; Sondak, David; Hensinger, David M.

    Drekar is an application code that solves partial differential equations for fluids that can be optionally coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed integration finite element bases (nodal, edge, face, volume) and an initial arbitrary Lagrangian Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research for simulation of fluids and plasmas on high performance computing environments.

  8. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe the use of different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we present an example of the application of the code to the laser-plasma accelerator staging experiment.

  9. Boundary modelling of the stellarator Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Renner, H.; Strumberger, E.; Kisslinger, J.; Nührenberg, J.; Wobig, H.

    1997-02-01

    To justify the design of the divertor plates in W7-X, the magnetic fields of finite-β HELIAS equilibria for the so-called high-mirror case have been computed for various average β-values up to < β > = 0.04 with the NEMEC free-boundary equilibrium code [S.P. Hirshman, W.I. van Rij and W.I. Merkel, Comput. Phys. Commun. 43 (1986) 143] in combination with the newly developed MFBE (magnetic field solver for finite-beta equilibria) code. In a second study the unloading of the target plates by radiation was investigated. The B2 code [B.J. Braams, Ph.D. Thesis, Rijksuniversiteit Utrecht (1986)] was applied for the first time to stellarators to provide a self-consistent modelling of the SOL, including the effects of neutrals and impurities.

  10. Atomic-scale Modeling of the Structure and Dynamics of Dislocations in Complex Alloys at High Temperatures

    NASA Technical Reports Server (NTRS)

    Daw, Murray S.; Mills, Michael J.

    2003-01-01

    We report on the progress made during the first year of the project. Most of the progress at this point has been on the theoretical and computational side. Here are the highlights: (1) A new code, tailored for high-end desktop computing, now combines modern Accelerated Dynamics (AD) with the well-tested Embedded Atom Method (EAM); (2) The new Accelerated Dynamics allows the study of relatively slow, thermally-activated processes, such as diffusion, which are much too slow for traditional Molecular Dynamics; (3) We have benchmarked the new AD code on a rather simple and well-known process: vacancy diffusion in copper; and (4) We have begun application of the AD code to the diffusion of vacancies in ordered intermetallics.

  11. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.
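
    To illustrate the idea behind the algorithm (not the SOLPS/IPS implementation itself), the minimal Python sketch below applies the parareal correction U[n+1]^(k+1) = G(U[n]^(k+1)) + F(U[n]^k) - G(U[n]^k) to a toy decay equation, with a cheap coarse propagator G and an accurate fine propagator F; the propagators, step sizes, and iteration counts are purely illustrative.

      import numpy as np

      def propagate(u0, dt, steps):
          """Explicit-Euler propagator for the toy ODE du/dt = -u."""
          u = u0
          for _ in range(steps):
              u = u + dt * (-u)
          return u

      T, N = 2.0, 10                                    # horizon, number of coarse time slices
      slab = T / N
      fine = lambda u: propagate(u, slab / 100, 100)    # accurate, expensive solver
      coarse = lambda u: propagate(u, slab, 1)          # cheap predictor

      U = np.zeros(N + 1)
      U[0] = 1.0
      for n in range(N):                                # initial serial coarse sweep
          U[n + 1] = coarse(U[n])

      for k in range(5):                                # parareal iterations
          F_prev = [fine(U[n]) for n in range(N)]       # embarrassingly parallel in time
          G_prev = [coarse(U[n]) for n in range(N)]
          for n in range(N):                            # serial correction sweep
              U[n + 1] = coarse(U[n]) + F_prev[n] - G_prev[n]

      print("parareal:", U[-1], "exact:", np.exp(-T))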

  12. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE PAGES

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.; ...

    2017-07-31

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.

  13. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. Several serial Voronoi tessellation codes exist; however, no open source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the tessellation is computed with the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
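
    As a point of reference for the quantities PARAVT reports, the short serial Python sketch below computes a Voronoi neighbor list and an inverse-cell-volume density with SciPy's Qhull bindings. It is not PARAVT itself, omits the MPI domain decomposition and periodic boundaries, and uses randomly generated points.

      import numpy as np
      from scipy.spatial import Voronoi, ConvexHull

      points = np.random.rand(200, 3)           # toy particle positions
      vor = Voronoi(points)                     # Qhull-based tessellation

      # Neighbors: two particles are neighbors if their cells share a ridge.
      neighbors = {i: set() for i in range(len(points))}
      for a, b in vor.ridge_points:
          neighbors[a].add(b)
          neighbors[b].add(a)

      # Voronoi density: inverse cell volume, for bounded (interior) cells only.
      density = np.full(len(points), np.nan)
      for i, region_index in enumerate(vor.point_region):
          region = vor.regions[region_index]
          if len(region) == 0 or -1 in region:  # unbounded cell at the box edge
              continue
          density[i] = 1.0 / ConvexHull(vor.vertices[region]).volume

      print("mean number of neighbors:", np.mean([len(v) for v in neighbors.values()]))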

  14. CFD validation needs for advanced concepts at Northrop Corporation

    NASA Technical Reports Server (NTRS)

    George, Michael W.

    1987-01-01

    Information is given in viewgraph form on the Computational Fluid Dynamics (CFD) Workshop held July 14-16, 1987. Topics covered include the philosophy of CFD validation, current validation efforts, the wing-body-tail Euler code, F-20 Euler simulated oil flow, and Euler/Navier-Stokes code validation for 2D and 3D nozzle afterbody applications.

  15. Cognitive, Social, and Literacy Competencies: The Chelsea Bank Simulation Project. Year One: Final Report. [Volume 2]: Appendices.

    ERIC Educational Resources Information Center

    Duffy, Thomas; And Others

    This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…

  16. Application of a Two-dimensional Unsteady Viscous Analysis Code to a Supersonic Throughflow Fan Stage

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1989-01-01

    The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; then data postprocessing and graphics were performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.

  17. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house meshing generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  18. SU(2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we achieved an excellent performance of around 110 Gflops/s in single precision, a 200× speedup over one CPU. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
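
    For readers unfamiliar with the observable being measured, the CPU-side NumPy sketch below (not the CUDA code from the paper) evaluates the mean plaquette ⟨(1/2) Re Tr U_μν⟩ on a small lattice of random SU(2) links; a real simulation would instead generate the links by Monte Carlo, and the lattice size here is only illustrative.

      import numpy as np

      L, D = 6, 4                               # lattice extent and number of dimensions

      def random_su2(shape):
          """Random SU(2) matrices: a0*I + i*(a . sigma) with |a| = 1."""
          a = np.random.normal(size=shape + (4,))
          a /= np.linalg.norm(a, axis=-1, keepdims=True)
          u = np.empty(shape + (2, 2), dtype=complex)
          u[..., 0, 0] = a[..., 0] + 1j * a[..., 3]
          u[..., 0, 1] = a[..., 2] + 1j * a[..., 1]
          u[..., 1, 0] = -a[..., 2] + 1j * a[..., 1]
          u[..., 1, 1] = a[..., 0] - 1j * a[..., 3]
          return u

      U = random_su2((D, L, L, L, L))           # link field U[mu, x, y, z, t]

      def shift(field, mu):
          """Field evaluated at site x + mu-hat, with periodic boundaries."""
          return np.roll(field, -1, axis=mu)

      plaq = 0.0
      for mu in range(D):
          for nu in range(mu + 1, D):
              a = U[mu]
              b = shift(U[nu], mu)
              c = shift(U[mu], nu).conj().swapaxes(-1, -2)
              d = U[nu].conj().swapaxes(-1, -2)
              loop = a @ b @ c @ d
              plaq += 0.5 * np.real(np.trace(loop, axis1=-2, axis2=-1)).mean()

      print("mean plaquette:", plaq / (D * (D - 1) / 2))   # ~0 for random links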

  19. The STAGS computer code

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Brogan, F. A.

    1978-01-01

    Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.

  20. Holonomic surface codes for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco

    2018-02-01

    Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.

  1. Comparison of two- and three-dimensional flow computations with laser anemometer measurements in a transonic compressor rotor

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Strazisar, A. J.

    1982-01-01

    Two- and three-dimensional inviscid solutions for the flow in a transonic axial compressor rotor at design speed are compared with probe and laser anemometer measurements at near-stall and maximum-flow operating points. Experimental details of the laser anemometer system and computational details of the two-dimensional axisymmetric code and three-dimensional Euler code are described. Comparisons are made between relative Mach number and flow angle contours, shock location, and shock strength. A procedure for using an efficient axisymmetric code to generate downstream pressure input for computationally expensive Euler codes is discussed. A film supplement shows the calculations for the two operating points with the time-marching Euler code.

  2. Computed secondary-particle energy spectra following nonelastic neutron interactions with ¹²C for E_n between 15 and 60 MeV: Comparisons of results from two calculational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-04-01

    The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with ¹²C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.

  3. Computer codes for the evaluation of thermodynamic and transport properties for equilibrium air to 30000 K

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1991-01-01

    The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air at temperatures from 500 to 30000 K and pressures from 10⁻⁴ to 10⁻² atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
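
    The evaluation scheme described above (curve fits in temperature at discrete pressures, with interpolation for intermediate pressures) can be sketched in a few lines of Python; the coefficients and the choice of interpolating linearly in log pressure below are purely illustrative and are not taken from the report.

      import numpy as np

      # Hypothetical curve-fit coefficients (c0, c1, c2) for log10(property) as a
      # quadratic in log10(T), tabulated at discrete pressures; values are made up.
      pressures = np.array([1.0e-4, 1.0e-3, 1.0e-2])            # atm
      coeffs = np.array([[0.10, 0.50, -0.02],
                         [0.12, 0.48, -0.02],
                         [0.15, 0.45, -0.01]])

      def fit_value(c, T):
          x = np.log10(T)
          return 10.0 ** (c[0] + c[1] * x + c[2] * x * x)

      def property_at(p, T):
          """Evaluate the fits at T, then interpolate linearly in log10(p)."""
          vals = np.array([fit_value(c, T) for c in coeffs])
          return np.interp(np.log10(p), np.log10(pressures), vals)

      print(property_at(3.0e-4, 12000.0))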

  4. DSMC computations of hypersonic flow separation and re-attachment in the transition to continuum regime

    NASA Astrophysics Data System (ADS)

    Prakash, Ram; Gai, Sudhir L.; O'Byrne, Sean; Brown, Melrose

    2016-11-01

    The flow over a 'tick'-shaped configuration is simulated using two Direct Simulation Monte Carlo codes: the DS2V code of Bird and the SPARTA code from Sandia National Laboratories. The configuration creates a flow field in which the flow first expands and is then affected by the adverse pressure gradient induced by a compression surface. The flow field is challenging in the sense that the full flow domain comprises localized areas spanning the continuum and transitional regimes. The present work focuses on the capability of SPARTA to model such flow conditions and on a comparative evaluation against results from DS2V. An extensive grid adaptation study is performed using both codes on a model with a sharp leading edge, and the converged results are then compared. The computational predictions are evaluated in terms of surface parameters such as heat flux, shear stress, pressure, and velocity slip. SPARTA consistently predicts higher values for these surface properties. The skin friction predictions of both codes do not give any indication of separation, but the velocity slip plots indicate incipient separation behavior at the corner. The differences in the results are attributed to the flow resolution at the leading edge, which dictates the downstream flow characteristics.

  5. Genome-Wide Discovery of Long Non-Coding RNAs in Rainbow Trout.

    PubMed

    Al-Tobasei, Rafet; Paneru, Bam; Salem, Mohamed

    2016-01-01

    The ENCODE project revealed that ~70% of the human genome is transcribed. While only 1-2% of the RNAs encode proteins, the rest are non-coding RNAs. Long non-coding RNAs (lncRNAs) form a diverse class of non-coding RNAs that are longer than 200 nt. Emerging evidence indicates that lncRNAs play critical roles in various cellular processes, including regulation of gene expression. LncRNAs show low levels of gene expression and sequence conservation, which makes their computational identification in genomes difficult. In this study, more than two billion Illumina sequence reads were mapped to the genome reference using the TopHat and Cufflinks software. Transcripts shorter than 200 nt, with an ORF longer than 83-100 amino acids, or with significant homologies to the NCBI nr-protein database were removed. In addition, a computational pipeline was used to filter the remaining transcripts based on a protein-coding-score test. Depending on the filtering stringency, between 31,195 and 54,503 lncRNAs were identified, with only 421 matching known lncRNAs in other species. A digital gene expression atlas revealed 2,935 tissue-specific and 3,269 ubiquitously expressed lncRNAs. This study annotates lncRNAs in the rainbow trout genome and provides a valuable resource for functional genomics research in salmonids.
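
    Two of the filters described above, the 200-nt length cutoff and the maximum-ORF-length test, are easy to express in code. The Python sketch below is a toy illustration, not the authors' pipeline; it uses a single 100-amino-acid cutoff rather than the 83-100 range explored in the study and ignores the homology and coding-score filters.

      MIN_LENGTH = 200      # nt
      MAX_ORF_AA = 100      # amino acids (study used cutoffs between 83 and 100)

      STOPS = {"TAA", "TAG", "TGA"}

      def longest_orf_aa(seq):
          """Length (in amino acids) of the longest ATG-to-stop ORF on either strand."""
          comp = str.maketrans("ACGT", "TGCA")
          best = 0
          for s in (seq, seq.translate(comp)[::-1]):        # forward and reverse strands
              for frame in range(3):
                  start = None
                  for i in range(frame, len(s) - 2, 3):
                      codon = s[i:i + 3]
                      if codon == "ATG" and start is None:
                          start = i
                      elif codon in STOPS and start is not None:
                          best = max(best, (i - start) // 3)
                          start = None
          return best

      def is_lncrna_candidate(seq):
          seq = seq.upper()
          return len(seq) >= MIN_LENGTH and longest_orf_aa(seq) <= MAX_ORF_AA

      print(is_lncrna_candidate("ATG" + "GCA" * 300 + "TAA"))   # long ORF -> False
      print(is_lncrna_candidate("ACGT" * 60))                   # no long ORF -> True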

  6. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, the discrimination of neutron and gamma events in mixed fields may be performed using MCNPX-ESUT. The main feature of the MCNPX-ESUT code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a ¹³⁷Cs source is also compared with experimental data.
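
    Of the four steps listed, the last (applying the experimental resolution) is the simplest to illustrate. The Python sketch below broadens an ideal monoenergetic pulse-height peak with a light-output-dependent Gaussian; it is not part of MCNPX-ESUT, and the resolution parameters and the FWHM(L) = sqrt(a² + b²·L + c²·L²) parameterization are shown only as a commonly used example.

      import numpy as np

      def broaden(light, counts, a=0.1, b=0.05, c=0.002):
          """Apply FWHM(L) = sqrt(a^2 + b^2*L + c^2*L^2) Gaussian broadening."""
          broadened = np.zeros_like(counts, dtype=float)
          for L0, n in zip(light, counts):
              fwhm = np.sqrt(a**2 + b**2 * L0 + c**2 * L0**2)
              sigma = fwhm / 2.355
              kernel = np.exp(-0.5 * ((light - L0) / sigma) ** 2)
              broadened += n * kernel / kernel.sum()
          return broadened

      light = np.linspace(0.0, 5.0, 500)            # light output grid (toy units)
      ideal = np.zeros_like(light)
      ideal[np.searchsorted(light, 2.0)] = 1000.0   # ideal monoenergetic peak
      spectrum = broaden(light, ideal)
      print("peak counts after broadening:", spectrum.max())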

  7. Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results in a quantified manner against the database of ice shapes generated in the NASA Lewis Icing Research Tunnel (IRT). The results of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2%, while the overall variability of the experimental data is 2.5%.

  8. Documentation of probabilistic fracture mechanics codes used for reactor pressure vessels subjected to pressurized thermal shock loading: Parts 1 and 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balkey, K.; Witt, F.J.; Bishop, B.A.

    1995-06-01

    Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.

  9. EAC: A program for the error analysis of STAGS results for plates

    NASA Technical Reports Server (NTRS)

    Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.

    1989-01-01

    A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.

  10. New double-byte error-correcting codes for memory systems

    NASA Technical Reports Server (NTRS)

    Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.

    1996-01-01

    Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.

  11. Additional and revised thermochemical data and computer code for WATEQ2: a computerized chemical model for trace and major element speciation and mineral equilibria of natural waters

    USGS Publications Warehouse

    Ball, James W.; Nordstrom, D. Kirk; Jenne, Everett A.

    1980-01-01

    A computerized chemical model, WATEQ2, has resulted from extensive additions to and revision of the WATEQ model of Truesdell and Jones (Truesdell, A. H., and Jones, B. F., 1974, WATEQ, a computer program for calculating chemical equilibria of natural waters: J. Res. U.S. Geol. Survey, v. 2, p. 233-274). The model building effort has necessitated searching the literature and selecting thermochemical data pertinent to the reactions added to the model. This supplementary report makes available the details of the reactions added to the model together with the selected thermochemical data and their sources. Also listed are details of program operation and a brief description of the output of the model. Appendices contain a glossary of identifiers used in the PL/1 computer code, the complete PL/1 listing, and sample output from three water analyses used as test cases.

  12. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
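
    A present-day analogue of the comparison (not the CSM Testbed solvers themselves) is easy to set up with SciPy: a sparse direct factorization against a conjugate-gradient iteration on the same symmetric positive-definite system. The test matrix below is a simple 1-D Laplacian stand-in for a stiffness matrix.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu, cg

      n = 2000
      # Tridiagonal, symmetric positive-definite test matrix (1-D Laplacian).
      K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
      f = np.ones(n)

      u_direct = splu(K).solve(f)              # direct: sparse LU factorization + solve
      u_iter, info = cg(K, f, maxiter=10000)   # iterative: conjugate gradient

      print("CG converged:", info == 0,
            "max difference vs direct:", np.abs(u_direct - u_iter).max())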

  13. Computational fluid dynamics assessment: Volume 1, Computer simulations of the METC (Morgantown Energy Technology Center) entrained-flow gasifier: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celik, I.; Chattree, M.

    1988-07-01

    An assessment of the theoretical and numerical aspects of the computer code PCGC-2 is made, and the results of the application of this code to the Morgantown Energy Technology Center (METC) advanced gasification facility entrained-flow reactor, ''the gasifier,'' are presented. PCGC-2 is a code suitable for simulating pulverized coal combustion or gasification under axisymmetric (two-dimensional) flow conditions. The governing equations for the gas and particulate phases have been reviewed. The numerical procedure and the related programming difficulties have been elucidated. A single-particle model similar to the one used in PCGC-2 has been developed, programmed, and applied to some simple situations in order to gain insight into the physics of coal particle heat-up, devolatilization, and char oxidation processes. PCGC-2 was applied to the METC entrained-flow gasifier to study numerically the flash pyrolysis of coal and the gasification of coal with steam or carbon dioxide. The results from the simulations are compared with measurements. The gas and particle residence times, particle temperature, and mass component history were also calculated and the results were analyzed. The results provide useful information for understanding the fundamentals of coal gasification and for assessment of experimental results performed using the reactor considered. 69 refs., 35 figs., 23 tabs.
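
    To give a feel for the single-particle physics mentioned above, the toy Python sketch below integrates the convective heat-up of one small coal particle with explicit Euler; it is not the PCGC-2 (or the report's) single-particle model, and every property value is illustrative.

      import numpy as np

      d_p = 50e-6            # particle diameter (m), illustrative
      rho_p = 1300.0         # particle density (kg/m^3)
      cp_p = 1500.0          # particle specific heat (J/kg/K)
      h = 2000.0             # convective heat-transfer coefficient (W/m^2/K)
      T_gas = 1600.0         # surrounding gas temperature (K)

      area = np.pi * d_p**2                     # particle surface area
      mass = rho_p * np.pi * d_p**3 / 6.0       # particle mass

      T_p, t, dt = 300.0, 0.0, 1e-5
      while T_p < 0.99 * T_gas:                 # integrate m*cp*dT/dt = h*A*(T_gas - T_p)
          T_p += dt * h * area * (T_gas - T_p) / (mass * cp_p)
          t += dt

      print(f"particle reaches ~gas temperature after {t * 1e3:.1f} ms")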

  14. Ontological function annotation of long non-coding RNAs through hierarchical multi-label classification.

    PubMed

    Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei

    2018-05-15

    Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most lncRNAs remain functionally uncharacterized at present. Computational approaches provide new insight into the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and that a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input to the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and an overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. Contact: leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
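
    The level-by-level training strategy can be sketched schematically as one multi-label classifier per GO level. The Python example below uses a scikit-learn MLPClassifier on random stand-in features and labels; it only illustrates the structure of the approach, is not the NeuraNetL2GO code, and its feature and label shapes are invented.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n_lncrnas, n_features = 300, 64
      X = rng.normal(size=(n_lncrnas, n_features))      # stand-in for similarity-network features

      # Binary indicator matrices of GO-term labels, one per hierarchy level
      # (level 1 is most general). Random here, purely for illustration.
      levels = [rng.integers(0, 2, size=(n_lncrnas, k)) for k in (5, 12, 20)]

      classifiers = []
      for y_level in levels:                            # train one network per level
          clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
          clf.fit(X, y_level)                           # multi-label fit on indicator matrix
          classifiers.append(clf)

      # Predict annotations for a new lncRNA at every level of the hierarchy.
      x_new = rng.normal(size=(1, n_features))
      predicted = [clf.predict(x_new) for clf in classifiers]
      print([p.shape for p in predicted])               # [(1, 5), (1, 12), (1, 20)]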

  15. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    NASA Technical Reports Server (NTRS)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer to modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was also to have code which would run well on a variety of architectures.
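
    The quoted figures are consistent with each other: with a vectorized fraction f ≈ 0.6, Amdahl's law S = 1 / ((1 - f) + f/v) caps the overall speedup at 1 / (1 - f) = 2.5 no matter how fast the vector hardware is. A quick check in Python:

      f = 0.60                                    # vectorized fraction of the code
      for v in (2, 4, 8, 16, 1e6):                # vector-unit speedup on the vectorized part
          speedup = 1.0 / ((1.0 - f) + f / v)
          print(f"vector speedup {v:>9g}x -> overall speedup {speedup:.2f}x")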

  16. CFD Simulation of Liquid Rocket Engine Injectors

    NASA Technical Reports Server (NTRS)

    Farmer, Richard; Cheng, Gary; Chen, Yen-Sen; Garcia, Roberto (Technical Monitor)

    2001-01-01

    Detailed design issues associated with liquid rocket engine injectors and combustion chamber operation require CFD methodology which simulates highly three-dimensional, turbulent, vaporizing, and combusting flows. The primary utility of such simulations involves predicting multi-dimensional effects caused by specific injector configurations. SECA, Inc. and Engineering Sciences, Inc. have been developing appropriate computational methodology for NASA/MSFC for the past decade. CFD tools and computers have improved dramatically during this time period; however, the physical submodels used in these analyses must still remain relatively simple in order to produce useful results. Simulation of clustered coaxial and impinger injector elements for hydrogen and hydrocarbon fuels, accounting for real fluid properties, is the immediate goal of this research. The spray combustion codes are based on the FDNS CFD code and are structured to represent homogeneous and heterogeneous spray combustion. The homogeneous spray model treats the flow as a continuum of multi-phase, multicomponent fluids which move without thermal or velocity lags between the phases. Two heterogeneous models were developed: (1) a volume-of-fluid (VOF) model which represents the liquid core of coaxial or impinger jets and their atomization and vaporization, and (2) a Blob model which represents the injected streams as a cloud of droplets the size of the injector orifice which subsequently exhibit particle interaction, vaporization, and combustion. All of these spray models are computationally intensive, but this is unavoidable if the complex physics and combustion are to be predicted accurately. Work is currently in progress to parallelize these codes to improve their computational efficiency. These spray combustion codes were used to simulate the three test cases which are the subject of the 2nd International Workshop on Rocket Combustion Modeling. Such test cases are considered by these investigators to be very valuable for code validation, because combustion kinetics, turbulence models, and atomization models based on low-pressure experiments of hydrogen-air combustion do not adequately verify analytical or CFD submodels which are necessary to simulate rocket engine combustion. We wish to emphasize that the simulations we prepared for this meeting are meant to test the accuracy of the approximations used in our general-purpose spray combustion models, rather than to represent a definitive analysis of each of the experiments which were conducted. Our goal is to accurately predict local temperatures and mixture ratios in rocket engines; hence predicting individual experiments is used only for code validation. To replace the conventional JANNAF standard axisymmetric finite-rate (TDK) computer code for performance prediction with CFD codes, such codes must possess two features. Firstly, they must be as easy to use as, and of comparable run times to, conventional performance predictions. Secondly, they must provide more detailed predictions of the flowfields near the injector face. Specifically, they must accurately predict the convective mixing of injected liquid propellants in terms of the injector element configurations.

  17. SOURCELESS STARTUP. A MACHINE CODE FOR COMPUTING LOW-SOURCE REACTOR STARTUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacMillan, D.B.

    1960-06-01

    A revision to the sourceless start-up code is presented. The code solves a system of differential equations encountered in computing the probability distribution of activity at an observed power level during reactor start-up from a very low source level. (J.R.D.)

  18. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.

  19. Fast Scattering Code (FSC) User's Manual: Version 2

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, M. H.; Pope, D. Stuart

    2006-01-01

    The Fast Scattering Code (version 2.0) is a computer program for predicting the three-dimensional scattered acoustic field produced by the interaction of known, time-harmonic, incident sound with aerostructures in the presence of potential background flow. The FSC has been developed for use as an aeroacoustic analysis tool for assessing global effects on noise radiation and scattering caused by changes in configuration (geometry, component placement) and operating conditions (background flow, excitation frequency).

  20. IPython: components for interactive and parallel computing across disciplines. (Invited)

    NASA Astrophysics Data System (ADS)

    Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.

    2013-12-01

    Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
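
    The JSON document format mentioned above is simple enough to write by hand. The Python snippet below emits a minimal nbformat-4 notebook with one Markdown cell and one code cell; the file name is arbitrary.

      import json

      notebook = {
          "nbformat": 4,
          "nbformat_minor": 5,
          "metadata": {},
          "cells": [
              {
                  "cell_type": "markdown",
                  "metadata": {},
                  "source": ["# A tiny executable paper\n", "Narrative goes here.\n"],
              },
              {
                  "cell_type": "code",
                  "metadata": {},
                  "execution_count": None,
                  "outputs": [],
                  "source": ["print('hello, literate computing')\n"],
              },
          ],
      }

      with open("minimal.ipynb", "w") as fh:
          json.dump(notebook, fh, indent=1)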
