Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
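At its core, the Input-Output methodology referenced above is Leontief accounting: indirect losses follow from direct ones through the inverse of a technical-coefficient matrix, and output losses are converted to GDP via value-added ratios. A minimal sketch in Python, with an invented 3-sector matrix and invented loss figures; REAcct's actual county-level data and multipliers are not reproduced here:

```python
import numpy as np

# Illustrative 3-sector technical-coefficient matrix A: column j buys
# from row i per dollar of sector-j output. Values are made up.
A = np.array([[0.10, 0.04, 0.02],
              [0.15, 0.20, 0.05],
              [0.08, 0.10, 0.12]])

leontief_inverse = np.linalg.inv(np.eye(3) - A)   # (I - A)^-1

# Direct output lost in each sector while the region is disrupted ($M/day)
direct_loss = np.array([5.0, 12.0, 3.0])

# Total (direct + indirect) output loss implied by the multipliers
total_output_loss = leontief_inverse @ direct_loss

# Convert output loss to GDP loss with illustrative value-added ratios
value_added_ratio = np.array([0.45, 0.35, 0.55])
gdp_loss = value_added_ratio @ total_output_loss
print(f"GDP loss: ${gdp_loss:.1f}M per day of disruption")
```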
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Suh, Young Joo; Han, Kyunghwa; Chang, Suyon; Kim, Jin Young; Im, Dong Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Kim, Young Jin; Choi, Byoung Wook
2017-09-01
The SYNergy between percutaneous coronary intervention with TAXus and cardiac surgery (SYNTAX) score is an invasive coronary angiography (ICA)-based score for quantifying the complexity of coronary artery disease (CAD). Although the SYNTAX score was originally developed based on ICA, recent publications have reported that coronary computed tomography angiography (CCTA) is a feasible modality for the estimation of the SYNTAX score. The aim of our study was to investigate the prognostic value of the SYNTAX score based on CCTA for the prediction of major adverse cardiac and cerebrovascular events (MACCEs) in patients with complex CAD. The current study was approved by the institutional review board of our institution, and informed consent was waived for this retrospective cohort study. We included 251 patients (173 men, mean age 66.0 ± 9.29 years) who had complex CAD [3-vessel disease or left main (LM) disease] on CCTA. The SYNTAX score was obtained on the basis of CCTA. Follow-up clinical outcome data regarding composite MACCEs were also obtained. Cox proportional hazards models were developed to predict the risk of MACCEs based on clinical variables, treatment, and computed tomography (CT)-SYNTAX scores. During the median follow-up period of 1517 days, there were 48 MACCEs. Univariate Cox hazards models demonstrated that MACCEs were associated with advanced age, low body mass index (BMI), and dyslipidemia (P < .2). In patients with LM disease, MACCEs were associated with a higher SYNTAX score. In patients with CT-SYNTAX score ≥23, patients who underwent coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention had significantly lower hazard ratios than patients who were treated with medication alone. In the multivariate Cox hazards model, advanced age, low BMI, and a higher SYNTAX score showed increased hazard ratios for MACCE, while treatment with CABG showed a lower hazard ratio (P < .2). On the basis of our results, the CT-SYNTAX score can be a useful method for noninvasively predicting MACCEs in patients with complex CAD, especially in patients with LM disease.
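Several abstracts in this set fit Cox proportional hazards models to MACCE follow-up data. As an illustration of that workflow, a minimal sketch using the lifelines package on an invented toy data frame; every column name and value here is a placeholder, not the study's data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy MACCE follow-up data (times in days); values are invented.
df = pd.DataFrame({
    "time_days": [1517, 1350, 1100, 980, 760, 640, 420, 210],
    "macce":     [0, 0, 1, 0, 1, 0, 1, 1],   # 1 = event observed
    "ct_syntax": [18, 29, 31, 21, 24, 33, 35, 27],
    "age":       [58, 70, 71, 60, 64, 75, 69, 77],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="macce")
cph.print_summary()  # the exp(coef) column is the hazard ratio per covariate
```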
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Wang, Lin; Lin, Li; Chen, Xi; Sun, Li; Liao, Yulin; Huang, Na; Liao, Wangjun
2015-01-01
Vasculogenic mimicry (VM) is a blood supply modality that is strongly associated with the epithelial-mesenchymal transition (EMT), TWIST1 activation and tumor progression. We previously reported that metastasis-associated in colon cancer-1 (MACC1) induced the EMT and was associated with a poor prognosis of patients with gastric cancer (GC), but it remains unknown whether MACC1 promotes VM and regulates the TWIST signaling pathway in GC. In this study, we investigated MACC1 expression and VM by immunohistochemistry in 88 patients with stage IV GC, and also investigated the role of TWIST1 and TWIST2 in MACC1-induced VM by using nude mice with GC xenografts and GC cell lines. We found that the VM density was significantly increased in the tumors of patients who died of GC and was positively correlated with MACC1 immunoreactivity (p < 0.05). The 3-year survival rate was only 8.6% in patients whose tumors showed double positive staining for MACC1 and VM, whereas it was 41.7% in patients whose tumors were negative for both MACC1 and VM. Moreover, nuclear expression of MACC1, TWIST1, and TWIST2 was upregulated in GC tissues compared with matched adjacent non-tumorous tissues (p < 0.05). Overexpression of MACC1 increased TWIST1/2 expression and induced typical VM in the GC xenografts of nude mice and in GC cell lines. MACC1 enhanced TWIST1/2 promoter activity and facilitated VM, while silencing of TWIST1 or TWIST2 inhibited VM. Hepatocyte growth factor (HGF) increased the nuclear translocation of MACC1, TWIST1, and TWIST2, while a c-Met inhibitor reduced these effects. These findings indicate that MACC1 promotes VM in GC by regulating the HGF/c-Met-TWIST1/2 signaling pathway, which means that MACC1 and this pathway are potential new therapeutic targets for GC. PMID:25895023
Rohr, U-P; Herrmann, P; Ilm, K; Zhang, H; Lohmann, S; Reiser, A; Muranyi, A; Smith, J; Burock, S; Osterland, M; Leith, K; Singh, S; Brunhoeber, P; Bowermaster, R; Tie, J; Christie, M; Wong, H-L; Waring, P; Shanmugam, K; Gibbs, P; Stein, U
2017-08-01
We assessed the novel MACC1 gene to further stratify stage II colon cancer patients with proficient mismatch repair (pMMR). Four cohorts with 596 patients were analyzed: Charité 1 discovery cohort was assayed for MACC1 mRNA expression and MMR in cryo-preserved tumors. Charité 2 comparison cohort was used to translate MACC1 qRT-PCR analyses to FFPE samples. In the BIOGRID 1 training cohort MACC1 mRNA levels were related to MACC1 protein levels from immunohistochemistry in FFPE sections; also analyzed for MMR. Chemotherapy-naïve pMMR patients were stratified by MACC1 mRNA and protein expression to establish risk groups based on recurrence-free survival (RFS). Risk stratification from BIOGRID 1 was confirmed in the BIOGRID 2 validation cohort. Pooled BIOGRID datasets produced a best effect-size estimate. In BIOGRID 1, using qRT-PCR and immunohistochemistry for MACC1 detection, pMMR/MACC1-low patients had a lower recurrence probability versus pMMR/MACC1-high patients (5-year RFS of 92% and 67% versus 100% and 68%, respectively). In BIOGRID 2, longer RFS was confirmed for pMMR/MACC1-low versus pMMR/MACC1-high patients (5-year RFS of 100% versus 90%, respectively). In the pooled dataset, 6.5% of patients were pMMR/MACC1-low with no disease recurrence, resulting in a 17% higher 5-year RFS [95% confidence interval (CI) (12.6%-21.3%)] versus pMMR/MACC1-high patients (P = 0.037). Outcomes were similar for pMMR/MACC1-low and deficient MMR (dMMR) patients (5-year RFS of 100% and 96%, respectively). MACC1 expression stratifies colon cancer patients with unfavorable pMMR status. Stage II colon cancer patients with pMMR/MACC1-low tumors have a similar favorable prognosis to those with dMMR with potential implications for the role of adjuvant therapy. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Chen, Shuo; Zong, Zhi-Hong; Wu, Dan-Dan; Sun, Kai-Xuan; Liu, Bo-Liang; Zhao, Yang
2017-04-01
Metastasis-associated in colon cancer-1 (MACC1) has recently been identified as a key regulator in the progression of many cancers. However, its role in endometrial carcinoma (EC) remains unknown. MACC1 expression was determined in EC and normal endometrial tissues by immunohistochemistry. EC cell phenotypes and related molecules were examined after MACC1 downregulation by small interfering RNA (siRNA) or microRNA (miRNA) transfection. We found that MACC1 was more highly expressed in EC tissues than in normal samples, and its expression differed significantly by FIGO stage (I and II vs. III and IV), depth of myometrial infiltration (<1/2 vs. ≥1/2), and lymph node metastasis (negative vs. positive); moreover, MACC1 overexpression was correlated with lower cumulative and relapse-free survival rates. MACC1 downregulation by siRNA transfection significantly induced G1 phase arrest and suppressed EC cell proliferation, migration, and invasion. In addition, MACC1 downregulation also reduced expression of Cyclin D1 and Cyclin-dependent Kinase 2 (CDK2), N-cadherin (N-Ca), α-SMA, matrix metalloproteinase 2 (MMP2), and MMP9, but increased expression of E-cadherin (E-Ca). Bioinformatic predictions and dual-luciferase reporter assays indicated that MACC1 is a possible target of miR-23b. MiR-23b overexpression reduced MACC1 expression in vitro, induced G1 phase arrest, and suppressed cell proliferation, migration, and invasion. MiR-23b transfection also reduced Cyclin D1, CDK2, N-Ca, α-SMA, MMP2, and MMP9 expression, but increased E-Ca expression. Furthermore, the nude mouse xenograft assay showed that miR-23b overexpression suppressed tumour growth by downregulating MACC1 expression. Taken together, our results demonstrate for the first time that MACC1 may be a new and important diagnostic and therapeutic target in endometrial carcinoma. © 2017 Wiley Periodicals, Inc.
MISR Regional GoMACCS Imagery Overview
Atmospheric Science Data Center
2016-08-24
Visualizations of select MISR Level 3 data for special regional ... version used in support of the GoMACCS Campaign. More information about the Level 1 and Level 2 products subsetted for the GoMACCS ...
Gao, Yue-chun; Yu, Xian-peng; He, Ji-qiang; Chen, Fang
2012-01-01
To assess the value of the SYNTAX score to predict major adverse cardiac and cerebrovascular events (MACCE) among patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention. 190 patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention (PCI) with the Cypher Select drug-eluting stent were enrolled. The SYNTAX score and clinical SYNTAX score were retrospectively calculated. The clinical endpoint was MACCE, a composite of death, nonfatal myocardial infarction (MI), stroke and repeat revascularization. The value of the SYNTAX score and the clinical SYNTAX score to predict MACCE was studied for each. 29 patients suffered MACCE, accounting for 18.5% of the overall 190 patients. MACCE rates in the low (≤ 20.5), intermediate (21.0 - 31.0), and high (≥ 31.5) tertiles of the SYNTAX score were 9.1%, 16.2% and 30.9%, respectively. Both univariate and multivariate analyses showed that the SYNTAX score was an independent predictor of MACCE. MACCE rates in the low (≤ 19.5), intermediate (19.6 - 29.1), and high (≥ 29.2) tertiles of the clinical SYNTAX score were 14.9%, 9.8% and 30.6%, respectively. Both univariate and multivariate analyses showed that the clinical SYNTAX score was an independent predictor of MACCE. ROC analysis showed that both the SYNTAX score (AUC = 0.667, P = 0.004) and the clinical SYNTAX score (AUC = 0.636, P = 0.020) had predictive value for MACCE. The clinical SYNTAX score failed to show better predictive ability than the SYNTAX score. Both the SYNTAX score and the clinical SYNTAX score could be independent risk predictors for MACCE among patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention. The clinical SYNTAX score failed to show better predictive ability than the SYNTAX score in this group of patients.
MACC1 regulates Fas mediated apoptosis through STAT1/3 - Mcl-1 signaling in solid cancers.
Radhakrishnan, Harikrishnan; Ilm, Katharina; Walther, Wolfgang; Shirasawa, Senji; Sasazuki, Takehiko; Daniel, Peter T; Gillissen, Bernhard; Stein, Ulrike
2017-09-10
MACC1 was identified as a novel player in cancer progression and metastasis, but its role in death receptor-mediated apoptosis is still unexplored. We show that MACC1 knockdown sensitizes cancer cells to death receptor-mediated apoptosis. For the first time, we provide evidence for STAT signaling as a MACC1 target. MACC1 knockdown drastically reduced STAT1/3 activating phosphorylation, thereby regulating the expression of its apoptosis targets Mcl-1 and Fas. STAT signaling inhibition by the JAK1/2 inhibitor ruxolitinib mimicked MACC1 knockdown-mediated molecular signatures and apoptosis sensitization to Fas activation. Despite the increased Fas expression, the reduced Mcl-1 expression was instrumental in apoptosis sensitization. This reduced Mcl-1-mediated apoptosis sensitization was Bax and Bak dependent. MACC1 knockdown also increased TRAIL-induced apoptosis. MACC1 overexpression enhanced STAT1/3 phosphorylation and increased Mcl-1 expression, which was abrogated by ruxolitinib. The central role of Mcl-1 was strengthened by the resistance of Mcl-1 overexpressing cells to apoptosis induction. The clinical relevance of Mcl-1 regulation by MACC1 was supported by their positive expression correlation in patient-derived tumors. Altogether, we reveal a novel death receptor-mediated apoptosis regulatory mechanism by MACC1 in solid cancers through modulation of the STAT1/3-Mcl-1 axis. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhao, Yang; Dai, Cong; Wang, Meng; Kang, Huafeng; Lin, Shuai; Yang, Pengtao; Liu, Xinghan; Liu, Kang; Xu, Peng; Zheng, Yi; Li, Shanli; Dai, Zhijun
2016-01-01
Metastasis-associated in colon cancer-1 (MACC1) has been reported to be overexpressed in diverse human malignancies, and increasing evidence suggests that its overexpression is associated with the development and progression of many human tumors. However, the prognostic and clinicopathological value of MACC1 in colorectal cancer remains inconclusive. Therefore, we conducted this meta-analysis to investigate the effect of MACC1 overexpression on clinicopathological features and survival outcomes in colorectal cancer. PubMed, CNKI, and Wanfang databases were searched for relevant articles published up to December 2015. The correlation of MACC1 expression level with overall survival (OS), disease-free survival (DFS), and clinicopathological features was analyzed. In this meta-analysis, fifteen studies with a total of 2,161 colorectal cancer patients were included. Our results showed that MACC1 overexpression was significantly associated with poorer OS and DFS. Moreover, MACC1 overexpression was significantly associated with gender, localization, TNM stage, T stage, and N stage. Together, our meta-analysis showed that MACC1 overexpression was significantly associated with poor survival rates, regional invasion and lymph-node metastasis. MACC1 expression level can serve as a novel prognostic factor in colorectal cancer patients. PMID:27542234
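Pooled hazard ratios in meta-analyses like this one are typically obtained by inverse-variance weighting of per-study log-HRs. A minimal fixed-effect sketch with made-up study results; the fifteen included studies' data are not reproduced here:

```python
import numpy as np

# Invented per-study hazard ratios and upper 95% CI bounds
hr = np.array([2.1, 1.6, 2.8, 1.9])
ci_upper = np.array([3.5, 2.9, 5.1, 3.2])

log_hr = np.log(hr)
se = (np.log(ci_upper) - log_hr) / 1.96    # back out SE from the CI width
w = 1.0 / se**2                            # inverse-variance weights

pooled_log = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo = np.exp(pooled_log - 1.96 * pooled_se)
hi = np.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled HR = {np.exp(pooled_log):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```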
NASA Astrophysics Data System (ADS)
Petrucci, B.; Huc, M.; Feuvrier, T.; Ruffel, C.; Hagolle, O.; Lonjou, V.; Desjardins, C.
2015-10-01
For the production of Level-2A products during Sentinel-2 commissioning at the Technical Expertise Center Sentinel-2 in CNES, CESBIO proposed to adapt the Venus Level-2 processor, taking advantage of the similarities between the two missions: image acquisition at a high frequency (2 days for Venus, 5 days with the two Sentinel-2 satellites), high resolution (5 m for Venus; 10, 20 and 60 m for Sentinel-2), and image acquisition under constant viewing conditions. Thus the Multi-Mission Atmospheric Correction and Cloud Screening (MACCS) tool was born: based on the CNES Orfeo Toolbox library, the Venμs processor, which was already able to process Formosat-2 and VENμS data, was adapted to process Sentinel-2 and Landsat 5-7 data. Since then, a great effort has been made in reviewing the MACCS software architecture to ease the addition of new missions that likewise acquire images at high resolution, with high revisit, and under constant viewing angles, such as Spot4/Take5 and Landsat 8. The recursive, multi-temporal algorithm is implemented in a core that is the same for all sensors and that combines several processing steps: estimation of cloud, cloud-shadow, water, snow and shadow masks; estimation of water vapor content and aerosol optical thickness; and atmospheric correction. This core is accessed via a number of plug-ins in which the specificities of the sensor and of the user project are taken into account: product formats, the chaining of algorithmic processing steps, and parameters. After a presentation of the MACCS architecture and functionalities, the paper gives an overview of the production facilities integrating MACCS and their associated specificities: interest in this tool has grown worldwide, and MACCS will be used for extensive production within the THEIA land data center and the Agri-S2 project. Finally, the paper focuses on the use of MACCS during the Sentinel-2 In-Orbit Test phase, showing the first Level-2A products.
Oh, Wen-Da; Lua, Shun-Kuang; Dong, Zhili; Lim, Teik-Thye
2015-03-02
A magnetic activated carbon composite (CuFe2O4/AC, MACC) was prepared by a co-precipitation-calcination method. The MACC had a porous micro-particle morphology with homogeneously distributed CuFe2O4 and possessed a high magnetic saturation moment (8.1 emu g(-1)). The performance of the MACC was evaluated as a catalyst and regenerable adsorbent via peroxymonosulfate (PMS, Oxone(®)) activation for methylene blue (MB) removal. The optimum CuFe2O4/AC w/w ratio was 1:1.5, which gave excellent performance, and the composite could be reused for at least 3 cycles. The presence of the common inorganic ions Cl(-) and NO3(-) did not exert a significant influence on MB degradation, but humic acid decreased the MB degradation rate. As a regenerable adsorbent, a negligible difference in regeneration efficiency was observed when a higher Oxone(®) dosage was employed, but a better efficiency was obtained at a lower MACC loading. The factors hindering complete MACC regeneration are the irreversibility of MB adsorption and the modification of the AC surface by PMS, which makes it less favorable for subsequent MB adsorption. With an additional mild heat treatment (150 °C) after regeneration, 82% of the active sites were successfully regenerated. A kinetic model incorporating simultaneous first-order desorption, second-order adsorption and pseudo-first-order degradation processes was numerically solved to describe the rate of regeneration. The regeneration rate increased linearly with increasing Oxone(®):MACC ratio. The MACC could potentially serve as a catalyst for PMS activation and as a regenerable adsorbent. Copyright © 2014 Elsevier B.V. All rights reserved.
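The kinetic model named above couples three processes. One plausible formalization, numerically solved with SciPy; the rate constants, capacity, and mass balance below are invented for illustration, since the paper's exact equations are not given in the abstract:

```python
import numpy as np
from scipy.integrate import solve_ivp

k_des, k_ads, k_deg = 0.05, 0.01, 0.2   # 1/min, L/(mg*min), 1/min (invented)
q_max, m_over_V = 100.0, 1.0            # mg/g capacity, g/L adsorbent loading

def rates(t, y):
    q, c = y                                 # adsorbed (mg/g), dissolved (mg/L)
    desorb = k_des * q                       # first-order desorption
    adsorb = k_ads * c * (q_max - q)         # second-order (MB and free sites)
    dq = -desorb + adsorb
    dc = (desorb - adsorb) * m_over_V - k_deg * c   # pseudo-first-order degradation
    return [dq, dc]

sol = solve_ivp(rates, (0.0, 120.0), [80.0, 0.0])   # start loaded, clean solution
print(f"adsorbed MB after 2 h: {sol.y[0, -1]:.1f} mg/g")
```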
Ilm, Katharina; Kemmner, Wolfgang; Osterland, Marc; Burock, Susen; Koch, Gudrun; Herrmann, Pia; Schlag, Peter M; Stein, Ulrike
2015-02-14
The metastasis-associated in colon cancer 1 (MACC1) gene has been identified as prognostic biomarker for colorectal cancer (CRC). Here, we aimed at the refinement of risk assessment by separate and combined survival analyses of MACC1 expression with any of the markers KRAS mutated in codon 12 (KRAS G12) or codon 13 (KRAS G13), BRAF V600 mutation and MSI status in a retrospective study of 99 CRC patients with tumors UICC staged I, II and III. We showed that only high MACC1 expression (HR: 6.09, 95% CI: 2.50-14.85, P < 0.001) and KRAS G13 mutation (HR: 5.19, 95% CI: 1.06-25.45, P = 0.042) were independent prognostic markers for shorter metastasis-free survival (MFS). Accordingly, Cox regression analysis revealed that patients with high MACC1 expression and KRAS G13 mutation exhibited the worst prognosis (HR: 14.48, 95% CI: 3.37-62.18, P < 0.001). Patients were classified based on their molecular characteristics into four clusters with significant differences in MFS (P = 0.003) by using the SPSS 2-step cluster function and Kaplan-Meier survival analysis. According to our results, patients with high MACC1 expression and mutated KRAS G13 exhibited the highest risk for metachronous metastases formation. Moreover, we demonstrated that the "Traditional pathway" with an intermediate risk for metastasis formation can be further subdivided by assessing MACC1 expression into a low and high risk group with regard to MFS prognosis. This is the first report showing that identification of CRC patients at high risk for metastasis is possible by assessing MACC1 expression in combination with KRAS G13 mutation.
Prognostic Value of MACC1 in Digestive System Neoplasms: A Systematic Review and Meta-Analysis
Wu, Zhenzhen; Zhou, Rui; Su, Yuqi; Sun, Li; Liao, Yulin; Liao, Wangjun
2015-01-01
Metastasis associated in colon cancer 1 (MACC1), a newly identified oncogene, has been associated with poor survival of cancer patients by multiple studies. However, the prognostic value of MACC1 in digestive system neoplasms needs systematic evidence to verify. Therefore, we aimed to provide further evidence on this topic by systematic review and meta-analysis. Literature search was conducted in multiple databases and eligible studies analyzing survival data and MACC1 expression were included for meta-analysis. Hazard ratio (HR) for clinical outcome was chosen as an effect measure of interest. According to our inclusion criteria, 18 studies with a total of 2,948 patients were identified. Pooled HRs indicated that high MACC1 expression significantly correlates with poorer OS in patients with digestive system neoplasms (HR = 1.94; 95% CI: 1.49–2.53) as well as poorer relapse-free survival (HR = 1.94, 95% CI: 1.33–2.82). The results of subgroup studies categorized by methodology, anatomic structure, and cancer subtype for pooled OS were all consistent with the overall pooled HR for OS as well. No publication bias was detected according to test of funnel plot asymmetry and Egger's test. In conclusion, high MACC1 expression may serve as a prognostic biomarker to guide individualized management in clinical practice for digestive system neoplasms. PMID:26090393
Juneja, Manisha; Kobelt, Dennis; Walther, Wolfgang; Voss, Cynthia; Smith, Janice; Specker, Edgar; Neuenschwander, Martin; Gohlke, Björn-Oliver; Dahlmann, Mathias; Radetzki, Silke; Preissner, Robert; von Kries, Jens Peter; Schlag, Peter Michael; Stein, Ulrike
2017-06-01
MACC1 (Metastasis Associated in Colon Cancer 1) is a key driver and prognostic biomarker for cancer progression and metastasis in a large variety of solid tumor types, particularly colorectal cancer (CRC). However, no MACC1 inhibitors have been identified yet. Therefore, we aimed to target MACC1 expression using a luciferase reporter-based high-throughput screening with the ChemBioNet library of more than 30,000 compounds. The small molecules lovastatin and rottlerin emerged as the most potent MACC1 transcriptional inhibitors. They remarkably inhibited MACC1 promoter activity and expression, resulting in reduced cell motility. Lovastatin impaired the binding of the transcription factors c-Jun and Sp1 to the MACC1 promoter, thereby inhibiting MACC1 transcription. Most importantly, in CRC-xenografted mice, lovastatin and rottlerin restricted MACC1 expression and liver metastasis. This is, to the best of our knowledge, the first identification of inhibitors restricting cancer progression and metastasis via the novel target MACC1. This drug repositioning might be of therapeutic value for CRC patients.
ChemoPy: freely available python package for computational biology and chemoinformatics.
Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng
2013-04-15
Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
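Of the fingerprint systems listed, MACCS keys are the most widely mirrored in other toolkits. Since ChemoPy's own API is not quoted in the abstract, a stand-in sketch computes the same 166-bit MACCS keys with RDKit (a different, widely available package) and compares two molecules:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

# Two small molecules given as SMILES strings
mol_a = Chem.MolFromSmiles("CCO")   # ethanol
mol_b = Chem.MolFromSmiles("CCN")   # ethylamine

# 166-bit MACCS structural-key fingerprints
fp_a = MACCSkeys.GenMACCSKeys(mol_a)
fp_b = MACCSkeys.GenMACCSKeys(mol_b)

# Tanimoto similarity between the two bit vectors
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))
```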
MISR Regional GoMACCS Products
Atmospheric Science Data Center
2016-08-24
... parameters from one Level 1 or Level 2 product. Further information about the Level 1 and Level 2 data products can be found on the ... MISR GoMACCS data table. Images available on this web site include the following parameters: ...
MACC1 - a novel target for solid cancers.
Stein, Ulrike
2013-09-01
The metastatic dissemination of primary tumors is directly linked to patient survival in many tumor entities. The previously undescribed gene metastasis-associated in colon cancer 1 (MACC1) was discovered by genome-wide analyses in colorectal cancer (CRC) tissues. MACC1 is a tumor stage-independent predictor for CRC metastasis linked to metastasis-free survival. In this review, the discovery of MACC1 is briefly presented. In the following, the overwhelming confirmation of these data is provided supporting MACC1 as a new remarkable biomarker for disease prognosis and prediction of therapy response for CRC and also for a variety of additional forms of solid cancers. Lastly, the potential clinical utility of MACC1 as a target for prevention or restriction of tumor progression and metastasis is envisioned. MACC1 has been identified as a prognostic biomarker in a variety of solid cancers. MACC1 correlated with tumor formation and progression, development of metastases and patient survival representing a decisive driver for tumorigenesis and metastasis. MACC1 was also demonstrated to be of predictive value for therapy response. MACC1 is a promising therapeutic target for anti-tumor and anti-metastatic intervention strategies of solid cancers. Its clinical utility, however, must be demonstrated in clinical trials.
Hussey, Daniel K; McGrory, Brian J
2017-08-01
Mechanically assisted crevice corrosion (MACC) in metal-on-polyethylene total hip arthroplasty (THA) is of concern, but its prevalence, etiology, and natural history are incompletely understood. From January 2003 to December 2012, 1352 consecutive THA surgeries using a titanium stem, cobalt-chromium alloy femoral head, and highly cross-linked polyethylene liner from a single manufacturer were performed. Patients were followed at 1-year and 5-year intervals for surveillance, but were also seen earlier if they had symptoms. Any patient with osteolysis >1 cm (n = 3) or unexplained pain (n = 85) underwent examination, radiographs, complete blood count, erythrocyte sedimentation rate, and C-reactive protein, as well as tests for serum cobalt and chromium levels. Symptomatic MACC was present in 43 of 1352 patients (3.2%). Prevalence of MACC by year of implant ranged from 0% (0 of 61, 2003; 0 of 138, 2005) to 10.5% (17 of 162; 2009). The M/L Taper stem had a greater prevalence (4.9%) of MACC than all other Zimmer (Zimmer, Inc, Warsaw, IN) 12/14 trunnion stem types combined (1.2%; P < .001). Twenty-seven of 43 (62.8%) patients have undergone revision surgery, and 16 of 43 (37.2%) patients have opted for ongoing surveillance. Comparing symptomatic THA patients with and without MACC, no demographic, clinical, or radiographic differences were found. MACC was significantly more common with 0-length femoral heads (compared with both -3.5 mm and +3.5 mm heads). The prevalence of MACC in metal-on-polyethylene hips is higher in this cross-sectional study than previously reported. A significantly higher prevalence was found in patients with the M/L Taper style stem and in THAs performed with this manufacturer's implants in 2009 and between 2009 and 2012. Copyright © 2017 Elsevier Inc. All rights reserved.
Suh, Soon Yong; Kang, Woong Chol; Oh, Pyung Chun; Choi, Hanul; Moon, Chan Il; Lee, Kyounghoon; Han, Seung Hwan; Ahn, Taehoon; Choi, In Suck; Shin, Eak Kyun
2014-09-01
There are limited data on the optimal antithrombotic therapy for patients with atrial fibrillation (AF) undergoing coronary stenting. We reviewed 203 patients (62.6% men, mean age 68.3 ± 10.1 years) between 2003 and 2012, and recorded clinical and demographic characteristics of the patients. Clinical follow-up included major adverse cardiac and cerebrovascular events (MACCE) (cardiac death, myocardial infarction, target lesion revascularization, and stroke), stent thrombosis, and bleeding. The most commonly associated comorbidities were hypertension (70.4%), diabetes mellitus (35.5%), and congestive heart failure (26.6%). Sixty-three percent of patients had a stroke risk higher than a CHADS2 score of 2. At discharge, dual-antiplatelet therapy (aspirin, clopidogrel) was used in 166 patients (81.8%; group I), whereas 37 patients (18.2%) were discharged with triple therapy (aspirin, clopidogrel, warfarin; group II). The mean follow-up period was 42.0 ± 29.0 months. The mean international normalized ratio (INR) in group II was 1.83 ± 0.41. The total MACCE rate was 16.3%, with stroke in 3.4%. Compared with group II, group I had higher incidences of MACCE (19.3% vs 2.7%, P = 0.012) and cardiac death (11.4% vs 0%, P = 0.028). Major and any bleeding, however, did not differ between the two groups. In multivariate analysis, no warfarin therapy (odds ratio 7.8, 95% confidence interval 1.02-59.35; P = 0.048) was an independent predictor of MACCE. By Kaplan-Meier survival analysis, warfarin therapy was associated with a lower risk of MACCE (P = 0.024). In patients with AF undergoing coronary artery stenting, MACCE were reduced by warfarin therapy without increased bleeding, which might be related to tighter control with a lower INR value.
Integrative marker analysis allows risk assessment for metastasis in stage II colon cancer.
Nitsche, Ulrich; Rosenberg, Robert; Balmert, Alexander; Schuster, Tibor; Slotta-Huspenina, Julia; Herrmann, Pia; Bader, Franz G; Friess, Helmut; Schlag, Peter M; Stein, Ulrike; Janssen, Klaus-Peter
2012-11-01
Individualized risk assessment in patients with UICC stage II colon cancer based on a panel of molecular genetic alterations. Risk assessment in patients with colon cancer and localized disease (UICC stage II) is not sufficiently reliable. Development of metachronous metastasis is assumed to be governed largely by individual tumor genetics. Fresh-frozen tissue from 232 patients (T3-4, N0, M0) with complete tumor resection and a median follow-up of 97 months was analyzed for microsatellite stability, KRAS exon 2, and BRAF exon 15 mutations. Gene expression of the WNT-pathway surrogate marker osteopontin and the metastasis-associated genes SASH1 and MACC1 was determined for 179 patients. The results were correlated with metachronous distant metastasis risk (n = 22 patients). Mutations of KRAS were detected in 30% of patients, mutations of BRAF in 15% of patients, and microsatellite instability in 26% of patients. Risk of recurrence was associated with KRAS mutation (P = 0.033), microsatellite stable tumors (P = 0.015), decreased expression of SASH1 (P = 0.049), and increased expression of MACC1 (P < 0.001). MACC1 was the only independent parameter for recurrence prediction (hazard ratio: 6.2; 95% confidence interval: 2.4-16; P < 0.001). Integrative 2-step cluster analysis allocated patients into 4 groups, according to their tumor genetics. KRAS mutation, BRAF wild type, microsatellite stability, and high MACC1 expression defined the group with the highest risk of recurrence (16%, 7 of 43), whereas BRAF wild type, microsatellite instability, and low MACC1 expression defined the group with the lowest risk (4%, 1 of 26). MACC1 expression predicts development of metastases, outperforming microsatellite stability status as well as KRAS/BRAF mutation status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, F.T.; Young, M.L.; Miller, L.A.
The two new probabilistic accident consequence codes MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.
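The validation step described above, sampling elicited distributions and propagating them through the Gaussian plume model, can be sketched in a few lines. The lognormal dispersion-parameter distributions and release figures below are illustrative placeholders, not the elicited distributions; the plume formula is the standard ground-reflected Gaussian form:

```python
import numpy as np

rng = np.random.default_rng(0)

Q, u, H = 1.0e10, 5.0, 50.0   # source (Bq/s), wind speed (m/s), release height (m)

# Illustrative uncertainty distributions for the dispersion parameters (m)
sigma_y = rng.lognormal(np.log(300.0), 0.4, 1000)
sigma_z = rng.lognormal(np.log(100.0), 0.5, 1000)

# Ground-level centerline concentration (y = 0, z = 0), reflecting the
# plume off the ground: the two exponential terms collapse to one factor of 2.
chi = (Q / (2 * np.pi * u * sigma_y * sigma_z)) * 2 * np.exp(-H**2 / (2 * sigma_z**2))

print(f"5th-95th percentile air concentration: "
      f"{np.percentile(chi, 5):.2e} to {np.percentile(chi, 95):.2e} Bq/m^3")
```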
NASA Astrophysics Data System (ADS)
Engebretson, M. J.; Valentic, T. A.; Stehle, R. H.; Hughes, W. J.
2004-05-01
The Magnetometer Array for Cusp and Cleft Studies (MACCS) is a two-dimensional array of eight fluxgate magnetometers that was established in 1992-1993 in the Eastern Canadian Arctic from 75° to over 80° MLAT to study electrodynamic interactions between the solar wind and Earth's magnetosphere and high-latitude ionosphere. A ninth site in Nain, Labrador, extends coverage down to 66° between existing Canadian and Greenland stations. Originally designed as part of NSF's GEM (Geospace Environment Modeling) Program, MACCS has contributed to the study of transients and waves at the magnetospheric boundary and in the near-cusp region, as well as to large cooperative studies of ionospheric convection and substorm processes. Because of the limitations of the existing telephone lines to each site, it has not been possible to access MACCS data promptly and economically; instead, each month's collected data is recorded and mailed to the U.S. for processing and eventual posting on a publicly accessible web site, http://space.augsburg.edu/space. As part of its recently renewed funding, NSF has supported the development of a near-real-time data transport system using the Iridium satellite network, which will be implemented at two MACCS sites in summer 2004. At the core of the new MACCS communications system is the Data Transport Network, software developed with NSF-ITR funding to automate the transfer of scientific data from remote field stations over unreliable, bandwidth-constrained network connections. The system uses a store-and-forward architecture based on sending data files as attachments to Usenet messages. This scheme not only isolates the instruments from network outages, but also provides a consistent framework for organizing and accessing multiple data feeds. Client programs are able to subscribe to data feeds to perform tasks such as system health monitoring, data processing, web page updates and e-mail alerts. The MACCS sites will employ the Data Transport Network on a small local Linux-based computer connected to an Iridium transceiver. Between 3 and 5 Mb of data per day will be collected from the magnetometers and delivered in near-real-time for automatic distribution to modelers and index developers. More information about the Data Transport Network can be found at http://transport.sri.com/TransportDevel .
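The store-and-forward idea, data files posted as attachments to Usenet messages, can be sketched with Python's standard email and nntplib modules (nntplib was in the standard library through Python 3.12). The hostname, newsgroup, and file names are placeholders, not the project's actual configuration, and posting would require a news server that accepts the group:

```python
import nntplib                       # stdlib through Python 3.12
from email.message import EmailMessage

# Build a Usenet article carrying one day's data file as an attachment
msg = EmailMessage()
msg["From"] = "maccs-site@example.org"           # placeholder sender
msg["Newsgroups"] = "transport.maccs.data"       # hypothetical feed name
msg["Subject"] = "MACCS fluxgate data 2004-06-01"
with open("maccs_20040601.dat", "rb") as fh:     # placeholder data file
    msg.add_attachment(fh.read(), maintype="application",
                       subtype="octet-stream",
                       filename="maccs_20040601.dat")

# Post the article; if the link is down, a real client would queue and retry
with nntplib.NNTP("news.example.org") as news:   # placeholder server
    news.post(msg.as_bytes().splitlines(keepends=True))
```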
Roy, Andrew K; Chevalier, Bernard; Lefèvre, Thierry; Louvard, Yves; Segurado, Ricardo; Sawaya, Fadi; Spaziano, Marco; Neylon, Antoinette; Serruys, Patrick A; Dawkins, Keith D; Kappetein, Arie Pieter; Mohr, Friedrich-Wilhelm; Colombo, Antonio; Feldman, Ted; Morice, Marie-Claude
2017-09-20
The use of multiple geographical sites for randomised cardiovascular trials may lead to important heterogeneity in treatment effects. This study aimed to determine whether treatment effects from different geographical recruitment regions impacted significantly on five-year MACCE rates in the SYNTAX trial. Five-year SYNTAX results (n=1,800) were analysed for geographical variability by site and country for the effect of treatment (CABG vs. PCI) on MACCE rates. Fixed, random, and linear mixed models were used to test clinical covariate effects, such as diabetes, lesion characteristics, and procedural factors. Comparing five-year MACCE rates, the pooled odds ratio (OR) between study sites was 0.58 (95% CI: 0.47-0.71), and between countries 0.59 (95% CI: 0.45-0.73). By homogeneity testing, no individual site (χ²=93.8, p=0.051) or country differences (χ²=25.7, p=0.080) were observed. For random effects models, the intraclass correlation was minimal (ICC site=5.1%, ICC country=1.5%, p<0.001), indicating minimal geographical heterogeneity, with a hazard ratio of 0.70 (95% CI: 0.59-0.83). Baseline risk (smoking, diabetes, PAD) did not influence regional five-year MACCE outcomes (ICC 1.3%-5.2%), nor did revascularisation of the left main vs. three-vessel disease (p=0.241), across site or country subgroups. For CABG patients, the number of arterial (p=0.49) or venous (p=0.38) conduits used also made no difference. Geographic variability has no significant treatment effect on MACCE rates at five years. These findings highlight the generalisability of the five-year outcomes of the SYNTAX study.
Severity of OSAS, CPAP and cardiovascular events: A follow-up study.
Baratta, Francesco; Pastori, Daniele; Fabiani, Mario; Fabiani, Valerio; Ceci, Fabrizio; Lillo, Rossella; Lolli, Valeria; Brunori, Marco; Pannitteri, Gaetano; Cravotto, Elena; De Vito, Corrado; Angelico, Francesco; Del Ben, Maria
2018-05-01
Previous studies have suggested obstructive sleep apnoea syndrome (OSAS) as a major risk factor for incident cardiovascular events. However, the relationship between OSAS severity, the use of continuous positive airway pressure (CPAP) treatment and the development of cardiovascular disease is still a matter of debate. The aim was to test the association between OSAS and cardiovascular events in patients with concomitant cardio-metabolic diseases and the potential impact of CPAP therapy on cardiovascular outcomes. Prospective observational cohort study of consecutive outpatients with suspected metabolic disorders who had a complete clinical and biochemical workup including polysomnography because of heavy snoring and possible OSAS. The primary endpoint was a composite of major adverse cardiovascular and cerebrovascular events (MACCE). Median follow-up was 81.3 months, including 434 patients (2701.2 person/years); 83 had primary snoring, and 84, 93 and 174 had mild, moderate and severe OSAS, respectively. The incidence of MACCE was 0.8% per year (95% confidence interval [CI] 0.2-2.1) in primary snorers and 2.1% per year (95% CI 1.5-2.8) in those with OSAS. A positive association was observed between event-free survival and OSAS severity (log-rank test; P = .041). A multivariable Cox regression analysis showed obesity (HR = 8.011, 95% CI 1.071-59.922, P = .043), moderate OSAS (vs non-OSAS HR = 3.853, 95% CI 1.069-13.879, P = .039) and severe OSAS (vs non-OSAS HR = 3.540, 95% CI 1.026-12.217, P = .045) as predictors of MACCE. No significant association was observed between CPAP treatment and MACCE (log-rank test; P = .227). Our findings support the role of moderate/severe OSAS as a risk factor for incident MACCE. CPAP treatment was not associated with a lower rate of MACCE. © 2018 Stichting European Society for Clinical Investigation Journal Foundation.
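The event-free-survival comparisons reported here (Kaplan-Meier curves compared by log-rank tests) follow a standard recipe. A minimal sketch with the lifelines package on invented follow-up times, not the study's data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up times (months) and MACCE indicators for two groups
t_cpap = np.array([81, 75, 60, 44, 81, 30, 70, 55])
e_cpap = np.array([0, 0, 1, 0, 0, 1, 0, 1])      # 1 = MACCE occurred
t_none = np.array([81, 40, 25, 66, 81, 20, 50, 35])
e_none = np.array([0, 1, 1, 0, 0, 1, 0, 1])

# Kaplan-Meier estimate of event-free survival in one group
kmf = KaplanMeierFitter()
kmf.fit(t_cpap, e_cpap, label="CPAP")
print(kmf.survival_function_.tail(1))

# Log-rank comparison of the two survival curves
res = logrank_test(t_cpap, t_none, event_observed_A=e_cpap, event_observed_B=e_none)
print(f"log-rank P = {res.p_value:.3f}")
```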
Moon, Jeonggeun; Suh, Jon; Oh, Pyung Chun; Lee, Kyounghoon; Park, Hyun Woo; Jang, Ho-Jun; Kim, Tae-Hoon; Park, Sang-Don; Kwon, Sung Woo; Kang, Woong Chol
2016-07-15
Although epidemiologic studies have shown the impact of height on occurrence and/or prognosis of cardiovascular diseases, the underlying mechanism is unclear. In addition, the relation in patients with ST-segment elevation myocardial infarction (STEMI) who underwent primary percutaneous coronary intervention (PCI) remains unknown. We sought to assess the influence of height on outcomes of patients with acute STEMI undergoing primary PCI and to provide a pathophysiological explanation. All 1,490 patients with STEMI undergoing primary PCI were analyzed. Major adverse cardiac and cerebrovascular events (MACCE) were defined as all-cause mortality, nonfatal myocardial infarction, nonfatal stroke, and unplanned hospitalization for heart failure (HF). Patients were divided into (1) MACCE (+) versus MACCE (-) and (2) first- to third-tertile groups according to height. MACCE (+) group was shorter than MACCE (-) group (164 ± 8 vs 166 ± 8 cm, p = 0.012). Prognostic impact of short stature was significant in older (≥70 years) male patients even after adjusting for co-morbidities (hazard ratio 0.951, 95% confidence interval 0.912 to 0.991, p = 0.017). The first-tertile group showed the worst MACCE-free survival (p = 0.035), and most cases of MACCE were HF (n, 17 [3%] vs 6 [1%] vs 2 [0%], p = 0.004). On post-PCI echocardiography, left atrial volume and early diastolic mitral velocity to early diastolic mitral annulus velocity ratio showed an inverse relation with height (p <0.001 for all) despite similar left ventricular ejection fraction. In conclusion, short stature is associated with occurrence of HF after primary PCI for STEMI, and its influence is prominent in aged male patients presumably for its correlation with diastolic dysfunction. Copyright © 2016 Elsevier Inc. All rights reserved.
2013-01-01
Background Activity of disease in patients with multiple sclerosis (MS) is monitored by detecting and delineating hyper-intense lesions on MRI scans. The Minimum Area Contour Change (MACC) algorithm was created with two main goals: a) to improve inter-operator agreement on outlining regions of interest (ROIs) and b) to automatically propagate longitudinal ROIs from the baseline scan to a follow-up scan. Methods The MACC algorithm first identifies an outer bound for the solution path, forms a high number of iso-contour curves based on equally spaced contour values, and then selects the best contour value to outline the lesion. The MACC software was tested on a set of 17 FLAIR MRI images evaluated by a pair of human experts and a longitudinal dataset of 12 pairs of T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) images that had lesion analysis ROIs drawn by a single expert operator. Results In the tests where two human experts evaluated the same MRI images, the MACC program demonstrated that it could markedly reduce inter-operator outline error. In the longitudinal part of the study, the MACC program created ROIs on follow-up scans that were in close agreement with the original expert's ROIs. Finally, in a post-hoc analysis of 424 follow-up scans, 91% of propagated MACC ROIs were accepted by an expert, and only 9% of the final accepted ROIs had to be created or edited by the expert. Conclusion When used with an expert operator's verification of automatically created ROIs, MACC can be used to improve inter-operator agreement and decrease analysis time, which should improve data collected and analyzed in multicenter clinical trials. PMID:24004511
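A highly simplified rendering of the contour-selection step described in the Methods: scan equally spaced intensity levels, trace an iso-contour at each, and keep the level whose enclosed area is most stable between neighboring levels. This is a stand-in criterion for "minimum area contour change" under stated assumptions, not the authors' implementation:

```python
import numpy as np
from skimage import measure

def shoelace_area(contour):
    # Polygon area of a (row, col) contour via the shoelace formula
    y, x = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def select_contour(img, n_levels=50):
    # Equally spaced contour values strictly inside the intensity range
    levels = np.linspace(img.min(), img.max(), n_levels + 2)[1:-1]
    areas = []
    for lv in levels:
        cs = measure.find_contours(img, lv)
        areas.append(max((shoelace_area(c) for c in cs), default=0.0))
    # Pick the level where area changes least w.r.t. the contour value
    diffs = np.abs(np.diff(areas))
    best = int(np.argmin(diffs[1:]) + 1)
    return levels[best]

# Toy "lesion": a bright Gaussian blob on a dark background
yy, xx = np.mgrid[0:64, 0:64]
lesion = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 60.0)
print(f"selected contour value: {select_contour(lesion):.3f}")
```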
Investigation of MACC1 Gene Expression in Head and Neck Cancer and Cancer Stem Cells.
Evran, Ebru; Şahin, Hilal; Akbaş, Kübra; Çiğdem, Sadik; Gündüz, Esra
2016-12-01
By investigating the MACC1 gene (metastasis-associated in colon cancer 1) in cancer stem cells (CSC) resistant to chemotherapy and in cancer cells (CS) sensitive to chemotherapy, we determined a steady expression in both types of cells in head and neck cancer. In conformity with this result, we examined whether this gene could be a candidate gene for chemotherapy. According to the literature, the MACC1 gene shows a clear expression in head and neck cancer cells [1]. Here we examined MACC1 expression in CSC and investigated it as a possible biomarker. Our experiments were performed in the UT-SCC-74 primary head and neck cancer cell line. We examined MACC1 gene expression by Real Time PCR from both isolated CSC and CS. Expression of the MACC1 gene in cancer stem cells showed a two-fold increase compared with cancer cells. Based on the positive expression of MACC1 in both CS and CSC, this gene may serve as a potential biomarker in head and neck cancer. By comparing the results of this study with the novel features of MACC1, two important hypotheses could be examined. The first hypothesis is that MACC1 is a possible transcription factor in colon cancer, which influences a high expression of CSC in head and neck cancer and affects the expression of three biomarkers of the CSC control group. The second hypothesis is that the positive expression of MACC1 in patients with a malignant prognosis of tongue cancer, which belongs to the head and neck cancer types, drives a faster development of CSC into cancer cells.
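The two-fold expression difference reported above is the kind of result the standard 2^-ΔΔCt method for relative qPCR quantification yields. A sketch with invented Ct values; the study's raw Ct data and reference gene are not given in the abstract:

```python
# Invented qPCR cycle-threshold (Ct) values; lower Ct = more transcript
ct_macc1_csc, ct_ref_csc = 24.1, 17.6   # cancer stem cells: target, reference gene
ct_macc1_cs, ct_ref_cs = 25.3, 17.8     # bulk cancer cells: target, reference gene

d_ct_csc = ct_macc1_csc - ct_ref_csc    # normalize target to reference (dCt)
d_ct_cs = ct_macc1_cs - ct_ref_cs
fold_change = 2.0 ** -(d_ct_csc - d_ct_cs)   # 2^-ddCt relative expression

print(f"MACC1 fold change, CSC vs CS: {fold_change:.2f}")  # -> 2.00 here
```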
Wiemers, Paul D; Marney, Lucy; White, Nicole; Bough, Georgina; Hustig, Alistair; Tan, Wei; Cheng, Ching-Siang; Kang, Dong; Yadav, Sumit; Tam, Robert; Fraser, John F
2018-04-24
There is a paucity of data with regard to longer-term morbidity outcomes in Indigenous Australian patients undergoing coronary artery bypass grafting (CABG). No comparative data on re-infarction, stroke or reintervention rates exist. Outcome data following percutaneous coronary intervention (PCI) are also extremely limited. Addressing this gap in knowledge forms the major aim of our study. This was a single-centre cohort study conducted at the Townsville Hospital, Australia, which provides tertiary adult cardiac surgical services to the northern parts of the state of Queensland. It incorporated consecutive patients (n=350) undergoing isolated CABG procedures, 2008-2010, 20.9% (73/350) of whom were Indigenous Australians. The main outcome measures were major adverse cardiac or cerebrovascular events (MACCE) at mid-term follow-up (mean 38.9 months). The incidence of MACCE among Indigenous Australian patients was approximately twice that of non-Indigenous patients at mid-term follow-up (36.7% vs. 18.6%; p=0.005; OR 2.525 (1.291-4.880)). Following adjustment for preoperative and operative variables, Indigenous Australian status itself was not significantly associated with MACCE (AOR 1.578 (0.637-3.910)). Significant associations with MACCE included renal impairment (AOR 2.198 (1.010-4.783)) and moderate-severe left ventricular impairment (AOR 3.697 (1.820-7.508)). An association between diabetes and MACCE failed to reach statistical significance (AOR 1.812 (0.941-3.490)). Indigenous Australians undergoing CABG suffer an excess of MACCE when followed up in the longer term. High rates of comorbidities in the Indigenous Australian population likely play an aetiological role. Copyright © 2018. Published by Elsevier B.V.
Atmospheric Science Data Center
2016-11-25
Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) is an intensive ... study area encompasses Texas and the northwestern Gulf of Mexico during July, August, September, and October, 2006. The Multi-angle ...
Burock, Susen; Herrmann, Pia; Wendler, Ina; Niederstrasser, Markus; Wernecke, Klaus-Dieter; Stein, Ulrike
2015-01-01
AIM: To evaluate the diagnostic and prognostic value of circulating Metastasis Associated in Colon Cancer 1 (MACC1) transcripts in plasma of gastric cancer patients. METHODS: We provide for the first time a blood-based assay for transcript quantification of the metastasis inducer MACC1 in a prospective study of gastric cancer patient plasma. MACC1 is a strong prognostic biomarker for tumor progression and metastasis in a variety of solid cancers. We conducted a study to define the diagnostic and prognostic power of MACC1 transcripts using 76 plasma samples from gastric cancer patients, either newly diagnosed with gastric cancer, newly diagnosed with metachronous metastasis of gastric cancer, as well as follow-up patients. Findings were controlled by using plasma samples from 54 tumor-free volunteers. Plasma was separated, RNA was isolated, and levels of MACC1 as well as S100A4 transcripts were determined by quantitative RT-PCR. RESULTS: Based on the levels of circulating MACC1 transcripts in plasma we significantly discriminated tumor-free volunteers and gastric cancer patients (P < 0.001). Levels of circulating MACC1 transcripts were increased in gastric cancer patients of each disease stage, compared to tumor-free volunteers: patients with tumors without metastasis (P = 0.005), with synchronous metastasis (P = 0.002), with metachronous metastasis (P = 0.005), and patients during follow-up (P = 0.021). Sensitivity was 0.68 (95%CI: 0.45-0.85) and specificity was 0.89 (95%CI: 0.77-0.95), respectively. Importantly, gastric cancer patients with high circulating MACC1 transcript levels in plasma demonstrated significantly shorter survival when compared with patients demonstrating low MACC1 levels (P = 0.0015). Furthermore, gastric cancer patients with high circulating transcript levels of MACC1 as well as of S100A4 in plasma demonstrated significantly shorter survival when compared with patients demonstrating low levels of both biomarkers or with only one biomarker elevated (P = 0.001). CONCLUSION: Levels of circulating MACC1 transcripts in plasma of gastric cancer patients are of diagnostic value and are prognostic for patient survival in a prospective study. PMID:25574109
Barile, Christopher J.; Barile, Elizabeth C.; Zavadil, Kevin R.; ...
2014-12-04
We describe in this report the electrochemistry of Mg deposition and dissolution from the magnesium aluminum chloride complex (MACC). The results define the requirements for reversible Mg deposition and definitively establish that voltammetric cycling of the electrolyte significantly alters its composition and performance. Elemental analysis, scanning electron microscopy, and energy-dispersive X-ray spectroscopy (SEM-EDS) results demonstrate that irreversible Mg and Al deposits form during early cycles. Electrospray ionization-mass spectrometry (ESI-MS) data show that inhibitory oligomers develop in THF-based solutions. These oligomers form via the well-established mechanism of a cationic ring-opening polymerization of THF during the initial synthesis of the MACC and under resting conditions. In contrast, MACC solutions in 1,2-dimethoxyethane (DME), an acyclic solvent, do not evolve as dramatically at open circuit potential. Furthermore, we propose a mechanism describing how the conditioning process of the MACC in THF improves its performance by both tuning the Mg:Al stoichiometry and eliminating oligomers.
A Web Server for MACCS Magnetometer Data
NASA Technical Reports Server (NTRS)
Engebretson, Mark J.
1998-01-01
NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user on the World Wide Web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.
Wang, Chunlin; Wen, Zhaowei; Xie, Jianming; Zhao, Yang; Zhao, Liang; Zhang, Shuyi; Liu, Yajing; Xue, Yan; Shi, Min
2017-04-08
Chemotherapeutic insensitivity is a main obstacle for effective treatment of gastric cancer (GC), the underlying mechanism remains to be investigated. Metastasis-associated in colon cancer-1 (MACC1), a transcription factor highly expressed in GC, is found to be related to chemotherapy sensitivity. Monocarboxylate transporter 1 (MCT1), a plasma membrane protein co-transporting lactate and H + , mediates drug sensitivity by regulating lactate metabolism. Targeting MCT1 has recently been regarded as a promising way to treat cancers and MCT1 inhibitor has entered the clinical trial for GC treatment. However, the correlation of these two genes and their combined effects on chemotherapy sensitivity has not been clarified. In this study, we found that MACC1 and MCT1 were both highly expressed in GC and exhibited a positive correlation in clinical samples. Further, we demonstrated that MACC1 could mediate sensitivity of 5-FU and cisplatin in GC cells, and MACC1 mediated MCT1 regulation was closely related to this sensitivity. A MCT1 inhibitor AZD3965 recovered the sensitivity of 5-FU and cisplatin in GC cells which overexpressed MACC1. These results suggested that MACC1 could influence the chemotherapy sensitivity by regulating MCT1 expression, providing new ideas and strategy for GC treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
Tropospheric chemistry in the integrated forecasting system of ECMWF
NASA Astrophysics Data System (ADS)
Flemming, J.; Huijnen, V.; Arteta, J.; Bechtold, P.; Beljaars, A.; Blechschmidt, A.-M.; Josse, B.; Diamantakis, M.; Engelen, R. J.; Gaudel, A.; Inness, A.; Jones, L.; Katragkou, E.; Marecal, V.; Peuch, V.-H.; Richter, A.; Schultz, M. G.; Stein, O.; Tsikerdekis, A.
2014-11-01
A representation of atmospheric chemistry has been included in the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The new chemistry modules complement the aerosol modules of the IFS for atmospheric composition, which is named C-IFS. C-IFS for chemistry supersedes a coupled system, in which the Chemical Transport Model (CTM) Model for OZone and Related chemical Tracers 3 was two-way coupled to the IFS (IFS-MOZART). This paper contains a description of the new on-line implementation, an evaluation with observations and a comparison of the performance of C-IFS with MOZART and with a re-analysis of atmospheric composition produced by IFS-MOZART within the Monitoring Atmospheric Composition and Climate (MACC) project. The chemical mechanism of C-IFS is an extended version of the Carbon Bond 2005 (CB05) chemical mechanism as implemented in the CTM Transport Model 5 (TM5). CB05 describes tropospheric chemistry with 54 species and 126 reactions. Wet deposition and lightning nitrogen monoxide (NO) emissions are modelled in C-IFS using the detailed input of the IFS physics package. A one-year simulation by C-IFS, MOZART and the MACC re-analysis is evaluated against ozonesondes, carbon monoxide (CO) aircraft profiles, European surface observations of ozone (O3), CO, sulphur dioxide (SO2) and nitrogen dioxide (NO2) as well as satellite retrievals of CO, tropospheric NO2 and formaldehyde. Anthropogenic emissions from the MACC/CityZen (MACCity) inventory and biomass burning emissions from the Global Fire Assimilation System (GFAS) data set were used in the simulations by both C-IFS and MOZART. C-IFS (CB05) showed an improved performance with respect to MOZART for CO, upper tropospheric O3, and wintertime SO2, and was of a similar accuracy for other evaluated species. C-IFS (CB05) is about ten times more computationally efficient than IFS-MOZART.
Tropospheric chemistry in the Integrated Forecasting System of ECMWF
NASA Astrophysics Data System (ADS)
Flemming, J.; Huijnen, V.; Arteta, J.; Bechtold, P.; Beljaars, A.; Blechschmidt, A.-M.; Diamantakis, M.; Engelen, R. J.; Gaudel, A.; Inness, A.; Jones, L.; Josse, B.; Katragkou, E.; Marecal, V.; Peuch, V.-H.; Richter, A.; Schultz, M. G.; Stein, O.; Tsikerdekis, A.
2015-04-01
A representation of atmospheric chemistry has been included in the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The new chemistry modules complement the aerosol modules of the IFS for atmospheric composition, which is named C-IFS. C-IFS for chemistry supersedes a coupled system in which chemical transport model (CTM) Model for OZone and Related chemical Tracers 3 was two-way coupled to the IFS (IFS-MOZART). This paper contains a description of the new on-line implementation, an evaluation with observations and a comparison of the performance of C-IFS with MOZART and with a re-analysis of atmospheric composition produced by IFS-MOZART within the Monitoring Atmospheric Composition and Climate (MACC) project. The chemical mechanism of C-IFS is an extended version of the Carbon Bond 2005 (CB05) chemical mechanism as implemented in CTM Transport Model 5 (TM5). CB05 describes tropospheric chemistry with 54 species and 126 reactions. Wet deposition and lightning nitrogen monoxide (NO) emissions are modelled in C-IFS using the detailed input of the IFS physics package. A 1 year simulation by C-IFS, MOZART and the MACC re-analysis is evaluated against ozonesondes, carbon monoxide (CO) aircraft profiles, European surface observations of ozone (O3), CO, sulfur dioxide (SO2) and nitrogen dioxide (NO2) as well as satellite retrievals of CO, tropospheric NO2 and formaldehyde. Anthropogenic emissions from the MACC/CityZen (MACCity) inventory and biomass burning emissions from the Global Fire Assimilation System (GFAS) data set were used in the simulations by both C-IFS and MOZART. C-IFS (CB05) showed an improved performance with respect to MOZART for CO, upper tropospheric O3, and wintertime SO2, and was of a similar accuracy for other evaluated species. C-IFS (CB05) is about 10 times more computationally efficient than IFS-MOZART.
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-02-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in-situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols and greenhouse gases, and is based on the Integrated Forecast System of the ECMWF. The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past three years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
Prognostic Implications of Dual Platelet Reactivity Testing in Acute Coronary Syndrome.
de Carvalho, Leonardo P; Fong, Alan; Troughton, Richard; Yan, Bryan P; Chin, Chee-Tang; Poh, Sock-Cheng; Mejin, Melissa; Huang, Nancy; Seneviratna, Aruni; Lee, Chi-Hang; Low, Adrian F; Tan, Huay-Cheem; Chan, Siew-Pang; Frampton, Christopher; Richards, A Mark; Chan, Mark Y
2018-02-01
Studies on platelet reactivity (PR) testing commonly test PR only after percutaneous coronary intervention (PCI) has been performed. There are few data on pre- and post-PCI testing. Data on simultaneous testing of aspirin and adenosine diphosphate antagonist response are conflicting. We investigated the prognostic value of combined serial assessments of high on-aspirin PR (HASPR) and high on-adenosine diphosphate receptor antagonist PR (HADPR) in patients with acute coronary syndrome (ACS). HASPR and HADPR were assessed in 928 ACS patients before (initial test) and 24 hours after (final test) coronary angiography, with or without revascularization. Patients with HASPR on the initial test, compared with those without, had significantly higher intraprocedural thrombotic events (IPTE) (8.6 vs. 1.2%, p ≤ 0.001) and higher 30-day major adverse cardiovascular and cerebrovascular events (MACCE; 5.2 vs. 2.3%, p = 0.05), but not 12-month MACCE (13.0 vs. 15.1%, p = 0.50). Patients with initial HADPR, compared with those without, had significantly higher IPTE (4.4 vs. 0.9%, p = 0.004), but not 30-day (3.5 vs. 2.3%, p = 0.32) or 12-month MACCE (14.0 vs. 12.5%, p = 0.54). The c-statistic of the Global Registry of Acute Coronary Events (GRACE) score alone, GRACE score + ASPR test and GRACE score + ADPR test for discriminating 30-day MACCE was 0.649, 0.803 and 0.757, respectively. Final ADPR was associated with 30-day MACCE among patients with intermediate-to-high GRACE score (adjusted odds ratio [OR]: 4.50, 95% confidence interval [CI]: 1.14-17.66), but not low GRACE score (adjusted OR: 1.19, 95% CI: 0.13-10.79). In conclusion, both HASPR and HADPR predict ischaemic events in ACS. This predictive utility is time-dependent and risk-dependent. Schattauer GmbH Stuttgart.
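The c-statistic comparison above is an ROC-AUC comparison between nested risk models. The following is a minimal illustrative sketch of that kind of comparison on synthetic data; the GRACE scores, HASPR flags, outcome labels, and assumed risk relationship are all placeholders, not study data:

```python
# Compare the discrimination (c-statistic / ROC AUC) of a risk score alone
# versus the score plus a binary platelet-reactivity test, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
grace = rng.normal(120, 30, n)                 # synthetic GRACE scores
aspr = rng.integers(0, 2, n)                   # synthetic HASPR flag (0/1)
logit = -6 + 0.03 * grace + 1.2 * aspr         # assumed risk relationship
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic 30-day MACCE

base = LogisticRegression().fit(grace.reshape(-1, 1), y)
both = LogisticRegression().fit(np.column_stack([grace, aspr]), y)

print("score alone: ", roc_auc_score(y, base.predict_proba(grace.reshape(-1, 1))[:, 1]))
print("score + test:", roc_auc_score(y, both.predict_proba(np.column_stack([grace, aspr]))[:, 1]))
```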
Gyöngyösi, Mariann; Christ, Günter; Lang, Irene; Kreiner, Gerhard; Sochor, Heinz; Probst, Peter; Neunteufl, Thomas; Badr-Eslam, Rosa; Winkler, Susanne; Nyolczas, Noemi; Posa, Aniko; Leisch, Franz; Karnik, Ronald; Siostrzonek, Peter; Harb, Stefan; Heigert, Matthias; Zenker, Gerald; Benzer, Werner; Bonner, Gerhard; Kaider, Alexandra; Glogar, Dietmar
2009-08-01
The multicenter AUTAX (Austrian Multivessel TAXUS-Stent) registry investigated the 2-year clinical/angiographic outcomes of patients with multivessel coronary artery disease after implantation of TAXUS Express stents (Boston Scientific, Natick, Massachusetts), in a "real-world" setting. The AUTAX registry included patients with 2- or 3-vessel disease, with/without previous percutaneous coronary intervention (PCI) and concomitant surgery. Patients (n = 441, 64 +/- 12 years, 78% men) (n = 1,080 lesions) with possible complete revascularization by PCI were prospectively included. Median clinical follow-up was 753 (quartiles 728 to 775) days after PCI in 95.7%, with control angiography of 78% at 6 months. The primary end point was the composite of major adverse cardiac (nonfatal acute myocardial infarction [AMI], all-cause mortality, target lesion revascularization [TLR]) and cerebrovascular events (MACCE). Potential risk factor effects on 2-year MACCE were evaluated using Cox regression. Complete revascularization was successful in 90.5%, with left main PCI of 6.8%. Rates of acute, subacute, and late stent thrombosis were 0.7%, 0.5%, and 0.5%. Two-year follow-up identified AMI (1.4%), death (3.6%), stroke (0.2%), and TLR (13.1%), for a composite MACCE of 18.3%. The binary restenosis rate was 10.8%. The median of cumulative SYNTAX score was 23.0 (range 12.0 to 56.5). The SYNTAX score did not predict TLR or MACCE, due to lack of scoring of restenotic or bypass stenoses (29.8%). Age (hazard ratio [HR]: 1.03, p = 0.019) and acute coronary syndrome (HR: 2.1, p = 0.001) were significant predictors of 2-year MACCE. Incomplete revascularization predicted death or AMI (HR: 3.84, p = 0.002). With the aim of complete revascularization, TAXUS stent implantations can be safe for patients with multivessel disease. The AUTAX registry including patients with post-PCI lesions provides additional information to the SYNTAX (Synergy Between Percutaneous Coronary Intervention With TAXUS and Cardiac Surgery) study. (Austrian Multivessel TAXUS-Stent Registry; NCT00738686).
The Interplay of Al and Mg Speciation in Advanced Mg Battery Electrolyte Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
See, Kimberly A.; Chapman, Karena W.; Zhu, Lingyang
2016-01-13
Mg batteries are an attractive alternative to Li-based energy storage due to the possibility of higher volumetric capacities with the added advantage of using sustainable materials. A promising emerging electrolyte for Mg batteries is the magnesium aluminum chloride complex (MACC) which shows high Mg electrodeposition and stripping efficiencies and relatively high anodic stabilities. As prepared, MACC is inactive with respect to Mg deposition; however, efficient Mg electrodeposition can be achieved following an electrolytic conditioning process. Through the use of Raman spectroscopy, surface enhanced Raman spectroscopy, 27Al and 35Cl nuclear magnetic resonance spectroscopy, and pair distribution function analysis, we explore the active vs inactive complexes in the MACC electrolyte and demonstrate the codependence of Al and Mg speciation. These techniques report on significant changes occurring in the bulk speciation of the conditioned electrolyte relative to the as-prepared solution. Analysis shows that the active Mg complex in conditioned MACC is very likely the [Mg2(μ–Cl)3·6THF]+ complex that is observed in the solid state structure. Additionally, conditioning creates free Cl– in the electrolyte solution, and we suggest the free Cl– adsorbs at the electrode surface to enhance Mg electrodeposition.
Effect of growth phase on the fatty acid compositions of four species of marine diatoms
NASA Astrophysics Data System (ADS)
Liang, Ying; Mai, Kangsen
2005-04-01
The fatty acid compositions of four species of marine diatoms (Chaetoceros gracilis MACC/B13, Cylindrotheca fusiformis MACC/B211, Phaeodactylum tricornutum MACC/B221 and Nitzschia closterium MACC/B222), cultivated at 22°C±1°C at a salinity of 28 in f/2 medium and harvested in the exponential growth phase, the early stationary phase and the late stationary phase, were determined. The results showed that growth phase has a significant effect on most fatty acid contents in the four species of marine diatoms. The proportions of 16:0 and 16:1n-7 fatty acids increased while those of 16:3n-4 and eicosapentaenoic acid (EPA) decreased with increasing culture age in all species studied. The subtotal of saturated fatty acids (SFA) increased with culture age in all species except B13. The subtotal of monounsaturated fatty acids (MUFA) increased while that of polyunsaturated fatty acids (PUFA) decreased with culture age in the four species of marine diatoms. MUFA reached their lowest value, and PUFA their highest, in the exponential growth phase.
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-11-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols, and greenhouse gases, and is based on the Integrated Forecasting System of the European Centre for Medium-Range Weather Forecasts (ECMWF). The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past 3 years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high-pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
GLANCE - calculatinG heaLth impActs of atmospheric pollutioN in a Changing climatE
NASA Astrophysics Data System (ADS)
Vogel, Leif; Faria, Sérgio; Markandya, Anil
2016-04-01
Current estimates of annual global premature deaths from poor air quality are in the range of 2.6-4.4 million, and 2050 projections are expected to double against 2010 levels. In Europe, annual economic burdens are estimated at around 750 bn €. Climate change will further exacerbate air pollution burdens; therefore, a better understanding of the economic impacts on human societies has become an area of intense investigation. European research efforts are being carried out within the MACC project series, which started in 2005. The outcome of this work has been integrated into a European capacity for Earth Observation, the Copernicus Atmosphere Monitoring Service (CAMS). In MACC/CAMS, key pollutant concentrations are computed at the European scale and globally by employing chemically-driven advanced transport models. The project GLANCE (calculatinG heaLth impActs of atmospheric pollutioN in a Changing climatE) aims at developing an integrated assessment model for calculating the health impacts and damage costs of air pollution at different physical scales. It combines MACC/CAMS (assimilated Earth Observations, an ensemble of chemical transport models and state-of-the-art ECMWF weather forecasting) with downscaling based on in-situ network measurements. Strengthening modelled projections through integration with empirical evidence reduces errors and uncertainties in the health impact projections and the subsequent economic cost assessment. In addition, GLANCE will yield improved data accuracy at different time resolutions. The project takes a multidisciplinary approach, bringing together expertise from the natural sciences and socio-economic fields. Here, its general approach is presented together with first results for the years 2007-2012 on the European scale. The results on health impacts and economic burdens are compared to existing assessments.
AIRS Views of Anthropogenic and Biomass Burning CO: INTEX-B/MILAGRO and TEXAQS/GoMACCS
NASA Astrophysics Data System (ADS)
McMillan, W. W.; Warner, J.; Wicks, D.; Barnet, C.; Sachse, G.; Chu, A.; Sparling, L.
2006-12-01
Utilizing the Atmospheric InfraRed Sounder's (AIRS) unique spatial and temporal coverage, we present observations of anthropogenic and biomass burning CO emissions as observed by AIRS during the 2006 field experiments INTEX-B/MILAGRO and TEXAQS/GoMACCS. AIRS daily CO maps covering more than 75% of the planet demonstrate the near global transport of these emissions. AIRS day/night coverage of significant portions of the Earth often shows substantial changes in 12 hours or less. However, the coarse vertical resolution of AIRS retrieved CO complicates its interpretation. For example, extensive CO emissions are evident from Asia during April and May 2006, but it is difficult to determine the relative contributions of biomass burning in Thailand vs. domestic and industrial emissions from China. Similarly, sometimes AIRS sees enhanced CO over and downwind of Mexico City and other populated areas. AIRS's low information content and decreasing sensitivity in the boundary layer can result in underestimates of CO total columns and free tropospheric abundances. Building on our analyses of INTEX-A/ICARTT data from 2004, we present comparisons with INTEX-B/MILAGRO and TEXAQS/GoMACCS in situ aircraft measurements and other satellite CO observations. The combined analysis of AIRS CO, water vapor and O3 retrievals; MODIS aerosol optical depths; and forward trajectory computations illuminates a variety of dynamical processes in the troposphere.
NASA Astrophysics Data System (ADS)
Wagner, A.; Blechschmidt, A.-M.; Bouarar, I.; Brunke, E.-G.; Clerbaux, C.; Cupeiro, M.; Cristofanelli, P.; Eskes, H.; Flemming, J.; Flentje, H.; George, M.; Gilge, S.; Hilboll, A.; Inness, A.; Kapsomenakis, J.; Richter, A.; Ries, L.; Spangl, W.; Stein, O.; Weller, R.; Zerefos, C.
2015-12-01
The Monitoring Atmospheric Composition and Climate (MACC) project represents the European Union's Copernicus Atmosphere Monitoring Service (CAMS) (http://www.copernicus.eu).
Bell, Diana; Bell, Achim H; Bondaruk, Jolanta; Hanna, Ehab Y; Weber, Randall S
2016-05-15
Adenoid cystic carcinoma (ACC), one of the most common salivary gland malignancies, arises from the intercalated ducts, which are composed of inner ductal epithelial cells and outer myoepithelial cells. The objective of this study was to determine the genomic subtypes of ACC with emphasis on dominant cell type to identify potential specific biomarkers for each subtype and to improve the understanding of this disease. A whole-genome expression study was performed based on 42 primary salivary ACCs and 5 normal salivary glands. RNA from these specimens was subjected to expression profiling with RNA sequencing, and results were analyzed to identify transcripts in epithelial-dominant ACC (E-ACC), myoepithelial-dominant ACC (M-ACC), and all ACC that were expressed differentially compared with the transcripts in normal salivary tissue. In total, the authors identified 430 differentially expressed transcripts that were unique to E-ACC, 392 that were unique to M-ACC, and 424 that were common to both M-ACC and E-ACC. The sets of E-ACC-specific and M-ACC-specific transcripts were sufficiently large to define and differentiate E-ACC from M-ACC. Ingenuity pathway analysis identified known cancer-related genes for 60% of the E-ACC transcripts, 69% of the M-ACC transcripts, and 68% of the transcripts that were common to both E-ACC and M-ACC. Three sets of highly expressed candidate genes (distal-less homeobox 6 [DLX6] for E-ACC; keratin 16 [KRT16], SRY box 11 [SOX11], and v-myb avian myeloblastosis viral oncogene homolog [MYB] for M-ACC; and engrailed 1 [EN1] and statherin [STATH], common to both E-ACC and M-ACC) were further validated at the protein level. The current results enabled the authors to identify novel potential therapeutic targets and biomarkers in E-ACC and M-ACC individually, with the implication that EN1, DLX6, and OTX1 (orthodenticle homeobox 1) are potential drivers of these cancers. Cancer 2016;122:1513-22. © 2016 American Cancer Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of "zero" results).
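The LHS-vs-SRS replicate design described above is easy to illustrate in miniature. The sketch below mirrors the three 1,000-sample replicates with different seeds; the two-input "consequence model" is a stand-in toy function, not MACCS2:

```python
# Contrast Latin Hypercube Sampling (LHS) with Simple Random Sampling (SRS)
# for replicated uncertainty propagation through a toy consequence model.
import numpy as np
from scipy.stats import qmc

def toy_consequence(u: np.ndarray) -> np.ndarray:
    # Stand-in for a consequence metric driven by two uncertain inputs in [0, 1).
    return np.exp(u[:, 0]) * (1.0 + 0.5 * u[:, 1])

for seed in (1, 2, 3):  # three replicates, each with a different random seed
    lhs = qmc.LatinHypercube(d=2, seed=seed).random(n=1000)
    srs = np.random.default_rng(seed).random((1000, 2))
    print(f"replicate {seed}: LHS mean = {toy_consequence(lhs).mean():.4f}, "
          f"SRS mean = {toy_consequence(srs).mean():.4f}")
```

Across replicates, the LHS means typically scatter less than the SRS means, which is the variance-reduction property the convergence study exploits.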
NASA Astrophysics Data System (ADS)
Yu, Dongshan; Liang, Xuejie; Wang, Jingwei; Li, Xiaoning; Nie, Zhiqiang; Liu, Xingsheng
2017-02-01
A novel macro channel cooler (MaCC) has been developed for packaging vertically stacked high-power diode lasers (HPDL), which eliminates many of the issues in commercially available copper micro-channel coolers (MCC). The MaCC coolers, which do not require deionized water as coolant, were carefully designed for compact size and superior thermal dissipation capability. Indium-free packaging technology was adopted throughout the product design and fabrication process to minimize the risk of solder electromigration and thermal fatigue at high current density and long pulse width under QCW operation. A single MaCC unit has delivered peak output power of up to 700 W/bar at pulse widths in the microsecond range and 200 W/bar at pulse widths in the millisecond range. Characteristic comparisons of thermal resistivity, spectrum, near field and lifetime have been conducted between a MaCC product and its counterpart MCC product. A QCW lifetime test (30 ms, 10 Hz, 30% duty cycle) has also been conducted with distilled water as coolant. A vertical 40-MaCC stack product has been fabricated; a total output power of 9 kW has been recorded under QCW mode (3 ms, 30 Hz, 9% duty cycle).
NASA Astrophysics Data System (ADS)
Wagner, A.; Blechschmidt, A.-M.; Bouarar, I.; Brunke, E.-G.; Clerbaux, C.; Cupeiro, M.; Cristofanelli, P.; Eskes, H.; Flemming, J.; Flentje, H.; George, M.; Gilge, S.; Hilboll, A.; Inness, A.; Kapsomenakis, J.; Richter, A.; Ries, L.; Spangl, W.; Stein, O.; Weller, R.; Zerefos, C.
2015-03-01
Monitoring Atmospheric Composition and Climate (MACC/MACC-II) currently represents the European Union's Copernicus Atmosphere Monitoring Service (CAMS) (http://www.copernicus.eu), which will become fully operational in the course of 2015. The global near-real-time MACC model production run for aerosol and reactive gases provides daily analyses and 5-day forecasts of atmospheric composition fields. It is the only assimilation system worldwide that operationally produces global analyses and forecasts of reactive gases and aerosol fields. We have investigated the ability of the MACC analysis system to simulate tropospheric concentrations of reactive gases (CO, O3, and NO2) covering the period between 2009 and 2012. A validation was performed based on CO and O3 surface observations from the Global Atmosphere Watch (GAW) network, O3 surface observations from the European Monitoring and Evaluation Programme (EMEP), and, furthermore, NO2 tropospheric columns derived from the satellite sensors SCIAMACHY and GOME-2 and CO total columns derived from the satellite sensor MOPITT. The MACC system proved capable of reproducing reactive gas concentrations with consistent quality, albeit with a seasonally dependent bias compared to surface and satellite observations: for northern hemispheric surface O3 mixing ratios, positive biases appear during the warm seasons and negative biases during the cold parts of the years, with monthly Modified Normalised Mean Biases (MNMBs) ranging between -30 and 30% at the surface. Model biases are likely to result from difficulties in the simulation of vertical mixing at night and deficiencies in the model's dry deposition parameterization. Observed tropospheric columns of NO2 and CO could be reproduced correctly during the warm seasons, but are mostly underestimated by the model during the cold seasons, when anthropogenic emissions are at their highest, especially over the US, Europe and Asia. Monthly MNMBs of the satellite data evaluation range between -110 and 40% for NO2, and reach about -20% for CO, over the investigated regions. The underestimation is likely to result from a combination of errors in the dry deposition parameterization, certain limitations in the current emission inventories, and an insufficiently established seasonality in the emissions.
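The MNMB metric used throughout this evaluation is, as commonly defined in the MACC validation literature, MNMB = (2/N) Σ (f_i − o_i)/(f_i + o_i), bounded to ±200% when expressed in percent. A minimal sketch, with illustrative (invented) forecast/observation pairs:

```python
# Modified Normalised Mean Bias in percent:
# MNMB = 200/N * sum((f - o) / (f + o)); symmetric and bounded to [-200, 200].
import numpy as np

def mnmb(forecast: np.ndarray, observed: np.ndarray) -> float:
    return 200.0 * np.mean((forecast - observed) / (forecast + observed))

obs = np.array([40.0, 55.0, 60.0, 35.0])  # e.g. surface O3 in ppbv (illustrative)
fc = np.array([48.0, 50.0, 70.0, 30.0])
print(f"MNMB = {mnmb(fc, obs):+.1f}%")
```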
A regional air quality forecasting system over Europe: the MACC-II daily ensemble production
NASA Astrophysics Data System (ADS)
Marécal, V.; Peuch, V.-H.; Andersson, C.; Andersson, S.; Arteta, J.; Beekmann, M.; Benedictow, A.; Bergström, R.; Bessagnet, B.; Cansado, A.; Chéroux, F.; Colette, A.; Coman, A.; Curier, R. L.; Denier van der Gon, H. A. C.; Drouin, A.; Elbern, H.; Emili, E.; Engelen, R. J.; Eskes, H. J.; Foret, G.; Friese, E.; Gauss, M.; Giannaros, C.; Guth, J.; Joly, M.; Jaumouillé, E.; Josse, B.; Kadygrov, N.; Kaiser, J. W.; Krajsek, K.; Kuenen, J.; Kumar, U.; Liora, N.; Lopez, E.; Malherbe, L.; Martinez, I.; Melas, D.; Meleux, F.; Menut, L.; Moinat, P.; Morales, T.; Parmentier, J.; Piacentini, A.; Plu, M.; Poupkou, A.; Queguiner, S.; Robertson, L.; Rouïl, L.; Schaap, M.; Segers, A.; Sofiev, M.; Thomas, M.; Timmermans, R.; Valdebenito, Á.; van Velthoven, P.; van Versendaal, R.; Vira, J.; Ung, A.
2015-03-01
This paper describes the pre-operational analysis and forecasting system developed during MACC (Monitoring Atmospheric Composition and Climate) and continued in MACC-II (Monitoring Atmospheric Composition and Climate: Interim Implementation) European projects to provide air quality services for the European continent. The paper gives an overall picture of its status at the end of MACC-II (summer 2014). This system is based on seven state-of-the-art models developed and run in Europe (CHIMERE, EMEP, EURAD-IM, LOTOS-EUROS, MATCH, MOCAGE and SILAM). These models are used to calculate multi-model ensemble products. The MACC-II system provides daily 96 h forecasts with hourly outputs of 10 chemical species/aerosols (O3, NO2, SO2, CO, PM10, PM2.5, NO, NH3, total NMVOCs and PAN + PAN precursors) over 8 vertical levels from the surface to 5 km height. The hourly analysis at the surface is done a posteriori for the past day using a selection of representative air quality data from European monitoring stations. The performance of the system is assessed daily, weekly and every 3 months (seasonally) through statistical indicators calculated using the available representative air quality data from European monitoring stations. Results for a case study show the ability of the median ensemble to forecast regional ozone pollution events. The time period of this case study is also used to illustrate that the median ensemble generally outperforms each of the individual models and that it is still robust even if two of the seven models are missing. The seasonal performances of the individual models and of the multi-model ensemble have been monitored since September 2009 for ozone, NO2 and PM10 and show an overall improvement over time. The changes in the skill of the ensemble over the past two summers for ozone and the past two winters for PM10 are discussed in the paper. While the evolution of the ozone scores is not significant, there are improvements in PM10 over the past two winters that can be at least partly attributed to new developments on aerosols in the seven individual models. Nevertheless, the year-to-year changes in the model and ensemble skills are also linked to the variability of the meteorological conditions and of the set of observations used to calculate the statistical indicators. In parallel, a scientific analysis of the results of the seven models and of the ensemble is also done over the Mediterranean area because of the specificity of its meteorology and emissions. The system is robust in terms of production availability. Major efforts have been made in MACC-II towards the operationalisation of all its components. Foreseen developments and research for improving its performances are discussed in the conclusion.
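The median-ensemble idea, and its robustness to missing members, can be made concrete in a few lines. The sketch below uses the seven model names from the paper but entirely invented forecast values:

```python
# Gridpoint-wise median ensemble over seven model forecasts, degrading
# gracefully when some members are missing (encoded as NaN rows).
import numpy as np

models = ["CHIMERE", "EMEP", "EURAD-IM", "LOTOS-EUROS", "MATCH", "MOCAGE", "SILAM"]
# Toy hourly O3 forecasts (7 models x 4 grid points), in ug/m3.
forecasts = np.array([
    [80.0, 95.0, 60.0, 110.0],
    [85.0, 90.0, 65.0, 105.0],
    [75.0, 100.0, 55.0, 120.0],
    [90.0, 85.0, 70.0, 100.0],
    [82.0, 92.0, 58.0, 112.0],
    [78.0, 97.0, 62.0, 108.0],
    [88.0, 88.0, 66.0, 103.0],
])

print("full ensemble median:", np.median(forecasts, axis=0))

# Two members unavailable (e.g. a failed production run):
degraded = forecasts.copy()
degraded[models.index("EMEP")] = np.nan
degraded[models.index("MOCAGE")] = np.nan
print("5-model median:      ", np.nanmedian(degraded, axis=0))
```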
NASA Astrophysics Data System (ADS)
Sheel, Varun; Sahu, L. K.; Kajino, M.; Deushi, M.; Stein, O.; Nedelec, P.
2014-07-01
The spatial and temporal variations of carbon monoxide (CO) are analyzed over a tropical urban site, Hyderabad (17°27'N, 78°28'E) in central India. We have used vertical profiles from the Measurement of Ozone and Water Vapor by Airbus In-service Aircraft (MOZAIC) observations, the Monitoring Atmospheric Composition and Climate (MACC) reanalysis, and two chemical transport model simulations (Model for Ozone And Related Tracers (MOZART) and the MRI global Chemistry Climate Model (MRI-CCM2)) for the years 2006-2008. In the lower troposphere, the CO mixing ratio showed strong seasonality, with higher levels (>300 ppbv) during the winter and premonsoon seasons associated with a stable anticyclonic circulation, while lower CO values (down to 100 ppbv) were observed in the monsoon season. In the planetary boundary layer (PBL), the seasonal distribution of CO shows the impact of both local meteorology and emissions. While the PBL CO is predominantly influenced by strong winds, bringing regional background air from marine and biomass burning regions, under calm conditions CO levels are elevated by local emissions. On the other hand, in the free troposphere, the seasonal variation reflects the impact of long-range transport associated with the Intertropical Convergence Zone and biomass burning. The interannual variations were mainly due to the transition from El Niño to La Niña conditions. The overall modified normalized mean biases (normalization based on the observed and model mean values) with respect to the observed CO profiles were lower for the MACC reanalysis than for the MOZART and MRI-CCM2 models. The CO in the PBL region was consistently underestimated by the MACC reanalysis during all the seasons, while MOZART and MRI-CCM2 show both positive and negative biases depending on the season.
NASA Technical Reports Server (NTRS)
Kulawik, Susan; Wunch, Debra; O’Dell, Christopher; Frankenberg, Christian; Reuter, Maximilian; Chevallier, Frederic; Oda, Tomohiro; Sherlock, Vanessa; Buchwitz, Michael; Osterman, Greg;
2016-01-01
Consistent validation of satellite CO2 estimates is a prerequisite for using multiple satellite CO2 measurements for joint flux inversion and for establishing an accurate long-term atmospheric CO2 data record. Harmonizing satellite CO2 measurements is particularly important since the differences in instruments, observing geometries, sampling strategies, etc. imbue different measurement characteristics in the various satellite CO2 data products. We focus on validating model and satellite observation attributes that impact flux estimates and CO2 assimilation, including accurate error estimates, correlated and random errors, overall biases, biases by season and latitude, the impact of coincidence criteria, validation of seasonal cycle phase and amplitude, yearly growth, and daily variability. We evaluate the dry-air mole fraction (X_CO2) for the Greenhouse gases Observing SATellite (GOSAT) (Atmospheric CO2 Observations from Space, ACOS b3.5) and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) (Bremen Optimal Estimation DOAS, BESD v2.00.08), as well as the CarbonTracker (CT2013b) simulated CO2 mole fraction fields and the Monitoring Atmospheric Composition and Climate (MACC) CO2 inversion system (v13.1), and compare these to Total Carbon Column Observing Network (TCCON) observations (GGG2012/2014). We find standard deviations of 0.9, 0.9, 1.7, and 2.1 parts per million vs. TCCON for CT2013b, MACC, GOSAT, and SCIAMACHY, respectively, with the single-observation errors 1.9 and 0.9 times the predicted errors for GOSAT and SCIAMACHY, respectively. We quantify how satellite error drops with data averaging by interpreting it according to error^2 = a^2 + b^2/n, with n the number of observations averaged, a the systematic (correlated) error, and b the random (uncorrelated) error. a and b are estimated for each satellite, coincidence criterion, and hemisphere. Biases at individual stations have year-to-year variability of 0.3 parts per million, with biases larger than the TCCON predicted bias uncertainty of 0.4 parts per million at many stations. We find that GOSAT and CT2013b under-predict the seasonal cycle amplitude in the Northern Hemisphere (NH) between 46 and 53 degrees North latitude, MACC over-predicts between 26 and 37 degrees North latitude, and CT2013b under-predicts the seasonal cycle amplitude in the Southern Hemisphere (SH). The seasonal cycle phase indicates whether a data set or model lags another data set in time. We find that the GOSAT measurements improve the seasonal cycle phase substantially over the prior, while SCIAMACHY measurements improve the phase significantly for just two of seven sites. The models reproduce the measured seasonal cycle phase well except at Lauder_125HR (CT2013b) and Darwin (MACC). We compare the variability within 1 day between TCCON and models in June-July-August; there is correlation between 0.2 and 0.8 in the NH, with models showing 10-50 percent of the variability of TCCON at different stations and CT2013b showing more variability than MACC. This paper highlights findings that provide inputs to estimate flux errors in model assimilations, and places where models and satellites need further investigation, e.g., the SH for models and 45-67 degrees North latitude for GOSAT and CT2013b.
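The error^2 = a^2 + b^2/n decomposition above can be demonstrated on synthetic data: generate averages of n soundings that share a correlated offset, then fit the variance of the mean against 1/n. The a and b values below are illustrative ppm-scale choices, not the paper's estimates:

```python
# Recover systematic (a) and random (b) error components from the spread of
# n-observation averages, via a linear fit of var(mean) against 1/n.
import numpy as np

a_true, b_true = 0.8, 1.9          # ppm; illustrative magnitudes
rng = np.random.default_rng(42)

ns = np.array([1, 2, 4, 8, 16, 32])
var_of_mean = []
for n in ns:
    # 2000 batches of n soundings; each batch shares one correlated offset.
    sys = rng.normal(0, a_true, size=2000)
    rand = rng.normal(0, b_true, size=(2000, n)).mean(axis=1)
    var_of_mean.append(np.var(sys + rand))

# Intercept estimates a^2 (does not average down); slope estimates b^2.
slope, intercept = np.polyfit(1.0 / ns, np.array(var_of_mean), 1)
print(f"a ~ {np.sqrt(intercept):.2f} ppm, b ~ {np.sqrt(slope):.2f} ppm")
```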
The Mobile Advanced Command and Control Station (MACCS) Experimental Testbed
2007-10-01
... were selected: Vehicle: Dodge Sprinter 2500 high-roof (a Mercedes-Benz vehicle); Electrical equipment and habitability equipment: Crossroads Coaches ... this innovative, mobile, experimental testbed. IMPACT/APPLICATIONS: While MACCS clearly supports the research agenda for both HAL and ONR (as well as ...
Weighting Composite Endpoints in Clinical Trials: Essential Evidence for the Heart Team
Tong, Betty C.; Huber, Joel C.; Ascheim, Deborah D.; Puskas, John D.; Ferguson, T. Bruce; Blackstone, Eugene H.; Smith, Peter K.
2013-01-01
Background Coronary revascularization trials often use a composite endpoint of major adverse cardiac and cerebrovascular events (MACCE). The usual practice in analyzing data with a composite endpoint is to assign equal weights to each of the individual MACCE elements. Non-inferiority margins are used to offset effects of presumably less important components, but their magnitudes are subject to bias. This study describes the relative importance of MACCE elements from a patient perspective. Methods A discrete choice experiment was conducted. Survey respondents were presented with a scenario that would make them eligible for the SYNTAX 3-Vessel Disease cohort. Respondents chose among pairs of procedures that differed on the 3-year probability of MACCE, potential for increased longevity, and procedure/recovery time. Conjoint analysis derived relative weights for these attributes. Results In all, 224 respondents completed the survey. The attributes did not have equal weight. Risk of death was most important (relative weight 0.23), followed by stroke (0.18), potential increased longevity and recovery time (0.17 each), MI (0.14), and risk of repeat revascularization (0.11). Applying these weights to the SYNTAX 3-year endpoints resulted in a persistent, but decreased, margin of difference in MACCE favoring CABG compared to PCI. When labeled only as "Procedure A" and "B," 87% of respondents chose CABG over PCI. When procedures were labeled as "Coronary Stent" and "Coronary Bypass Surgery," only 73% chose CABG. Procedural preference varied with demographics, gender and familiarity with the procedures. Conclusions MACCE elements do not carry equal weight in a composite endpoint, from a patient perspective. Using a weighted composite endpoint increases the validity of statistical analyses and trial conclusions. Patients are subject to bias by labels when considering coronary revascularization. PMID:22795064
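A weighted composite of the kind proposed above is simple to compute. The sketch below uses the elicited weights for the four MACCE elements (death 0.23, stroke 0.18, MI 0.14, repeat revascularization 0.11), renormalized over those four; the per-arm event rates are illustrative placeholders, not SYNTAX data:

```python
# Weighted composite endpoint from per-component event rates.
weights = {"death": 0.23, "stroke": 0.18, "mi": 0.14, "revasc": 0.11}
total = sum(weights.values())
norm = {k: w / total for k, w in weights.items()}  # renormalize to sum to 1

def weighted_macce(rates: dict) -> float:
    return sum(norm[k] * rates[k] for k in norm)

cabg = {"death": 0.06, "stroke": 0.03, "mi": 0.04, "revasc": 0.11}  # illustrative
pci = {"death": 0.07, "stroke": 0.02, "mi": 0.05, "revasc": 0.20}   # illustrative

print(f"CABG weighted composite: {weighted_macce(cabg):.4f}")
print(f"PCI  weighted composite: {weighted_macce(pci):.4f}")
```

Because repeat revascularization carries the lowest weight, down-weighting it narrows any between-arm gap driven mainly by that component, which is the effect the authors report.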
Papachristidis, Alexandros; Roper, Damian; Cassar Demarco, Daniela; Tsironis, Ioannis; Papitsas, Michael; Byrne, Jonathan; Alfakih, Khaled; Monaghan, Mark J
2016-12-01
In this study, we aim to reassess the prognostic value of stress echocardiography (SE) in a contemporary population and to evaluate the clinical significance of limited apical ischaemia, which has not been previously studied. We included 880 patients who underwent SE. Follow-up data with regard to MACCE (cardiac death, myocardial infarction, any repeat revascularisation and cerebrovascular accident) were collected over 12 months after the SE. Mortality data were recorded over 27.02 ± 4.6 months (5.5-34.2 months). We sought to investigate the predictors of MACCE and all-cause mortality. In a multivariable analysis, only a positive SE result was predictive of MACCE (HR, 3.71; P = 0.012). The positive SE group was divided into 2 subgroups: (a) inducible ischaemia limited to the apical segments ('apical ischaemia') and (b) ischaemia in any other segments with or without apical involvement ('other positive'). The subgroup of patients with apical ischaemia had a significantly worse outcome compared to the patients with a negative SE (HR, 3.68; P = 0.041) but a similar outcome to the 'other positive' subgroup. However, when investigated with invasive coronary angiography, the prevalence of coronary artery disease (CAD) and the rate of revascularisation were considerably lower. Only age (HR, 1.07; P < 0.001) was correlated with all-cause mortality. SE remains a strong predictor of patient outcome in a contemporary population. A positive SE result was the only predictor of 12-month MACCE. The subgroup of patients with limited apical ischaemia has a similar outcome to patients with ischaemia in other segments despite a lower prevalence of CAD and a lower revascularisation rate. © 2016 The authors.
Bamberg, Fabian; Parhofer, Klaus G; Lochner, Elena; Marcus, Roy P; Theisen, Daniel; Findeisen, Hannes M; Hoffmann, Udo; Schönberg, Stefan O; Schlett, Christopher L; Reiser, Maximilian F; Weckbach, Sabine
2013-12-01
To study the predictive value of whole-body magnetic resonance (MR) imaging for the occurrence of cardiac and cerebrovascular events in a cohort of patients with diabetes mellitus (DM). This HIPAA-compliant study was approved by the institutional review board. Informed consent was obtained from all patients before enrollment into the study. The authors followed up 65 patients with DM (types 1 and 2) who underwent a comprehensive, contrast material-enhanced whole-body MR imaging protocol, including brain, cardiac, and vascular sequences at baseline. Follow-up was performed by phone interview. The primary endpoint was a major adverse cardiac and cerebrovascular event (MACCE), which was defined as composite cardiac-cerebrovascular death, myocardial infarction, cerebrovascular event, or revascularization. MR images were assessed for the presence of systemic atherosclerotic vessel changes, white matter lesions, and myocardial changes. Kaplan-Meier survival and Cox regression analyses were performed to determine associations. Follow-up was completed in 61 patients (94%; median age, 67.5 years; 30 women [49%]; median follow-up, 70 months); 14 of the 61 patients (23%) experienced MACCE. Although normal whole-body MR imaging excluded MACCE during the follow-up period (0%; 95% confidence interval [CI]: 0%, 17%), any detectable ischemic and/or atherosclerotic changes at whole-body MR imaging (prevalence, 66%) conferred a cumulative event rate of 20% at 3 years and 35% at 6 years. Whole-body MR imaging summary estimate of disease was strongly predictive for MACCE (one increment of vessel score and each territory with atherosclerotic changes: hazard ratio, 13.2 [95% CI: 4.5, 40.1] and 3.9 [95% CI: 2.2, 7.5], respectively), also beyond clinical characteristics as well as individual cardiac or cerebrovascular MR findings. These initial data indicate that disease burden as assessed with whole-body MR imaging confers strong prognostic information in patients with DM. Online supplemental material is available for this article. © RSNA, 2013.
Loughlin, Daniel H; Macpherson, Alexander J; Kaufman, Katherine R; Keaveny, Brian N
2017-10-01
A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs are typically developed by sorting control technologies by their relative cost-effectiveness. Other potentially important abatement measures such as renewable electricity, energy efficiency, and fuel switching (RE/EE/FS) are often not incorporated into MACCs, as it is difficult to quantify their costs and abatement potential. In this paper, a U.S. energy system model is used to develop a MACC for nitrogen oxides (NOx) that incorporates both traditional controls and these additional measures. The MACC is decomposed by sector, and the relative cost-effectiveness of RE/EE/FS and traditional controls is compared. RE/EE/FS are shown to have the potential to increase emission reductions beyond what is possible when applying traditional controls alone. Furthermore, a portion of RE/EE/FS appears to be cost-competitive with traditional controls. Renewable electricity, energy efficiency, and fuel switching can be cost-competitive with traditional air pollutant controls for abating air pollutant emissions. The application of renewable electricity, energy efficiency, and fuel switching is also shown to have the potential to increase emission reductions beyond what is possible when applying traditional controls alone.
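The sorting construction behind a MACC is easy to make concrete. The sketch below uses invented measures, abatement quantities, and costs purely for illustration; it is not drawn from the paper's energy system model:

```python
# Build a marginal abatement cost curve: sort candidate measures by
# cost per ton abated, then accumulate abatement in that order.
measures = [
    # (name, abatement in tons NOx/yr, annualized cost in $)
    ("SCR retrofit, unit A", 1200, 3_600_000),
    ("Low-NOx burners, unit B", 400, 600_000),
    ("Energy efficiency program", 250, 250_000),
    ("Gas fuel switching, unit C", 900, 4_500_000),
]

steps = sorted(measures, key=lambda m: m[2] / m[1])  # cheapest $/ton first
cumulative = 0.0
for name, tons, cost in steps:
    cumulative += tons
    print(f"{name:28s} ${cost / tons:8,.0f}/ton  cumulative {cumulative:6,.0f} tons")
```

Reading the printed steps left to right traces the curve: abatement accumulates along the x-axis while the marginal cost per ton rises along the y-axis.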
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improved accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is being used in which available plant data and release timings are used to inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from a blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed with MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra
NASA Astrophysics Data System (ADS)
Khamatnurova, Marina; Gribanov, Konstantin
2016-04-01
The Levenberg-Marquardt method [1], with an iteratively adjusted damping parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. The retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. A modified version of the FIRE-ARMS software package [3] was used for the numerical experiments. To adjust the parameters and validate the method we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS Data // JQSRT. 2001. V. 68, No. 4. P. 435-451. 4. http://www.ecmwf.int
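The core of a Levenberg-Marquardt scheme with an iteratively adjusted damping parameter can be sketched generically. The toy forward model below stands in for the radiative transfer code; it is an illustration of the algorithm, not the authors' retrieval implementation:

```python
# Generic Levenberg-Marquardt iteration: solve (J^T J + lam*I) step = J^T r,
# halving lam on accepted steps and doubling it on rejected ones.
import numpy as np

def lm_fit(f, jac, x0, y, n_iter=50, lam=1e-2):
    x = x0.copy()
    for _ in range(n_iter):
        r = y - f(x)                          # residual
        J = jac(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        if np.sum((y - f(x + step)) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5      # accept step, relax damping
        else:
            lam *= 2.0                        # reject step, increase damping
    return x

# Toy problem: fit y = a * exp(-b * t).
t = np.linspace(0, 4, 30)
y = 2.0 * np.exp(-0.7 * t)
f = lambda x: x[0] * np.exp(-x[1] * t)
jac = lambda x: np.column_stack([np.exp(-x[1] * t), -x[0] * t * np.exp(-x[1] * t)])
print(lm_fit(f, jac, np.array([1.0, 1.0]), y))  # approaches [2.0, 0.7]
```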
Zhang, Ming; Cheng, Yun-Jiu; Zheng, Wei-Ping; Liu, Guang-Hui; Chen, Huai-Sheng; Ning, Yu; Zhao, Xin; Su, Li-Xiao; Liu, Li-Juan
2016-01-01
Objective. The aim of this study was to investigate the association between COPD and major adverse cardiovascular and cerebral events (MACCE) in patients undergoing percutaneous coronary intervention (PCI). Methods. 2,362 patients who underwent PCI were included in this study. Subjects were divided into 2 groups: with COPD (n = 233) and without COPD (n = 2,129). Cox proportional hazards models were analyzed to determine the effect of COPD on the incidence of MACCE. Results. The patients with COPD were older (P < 0.0001) and were more likely to be current smokers (P = 0.02) and have had hypertension (P = 0.02) and diabetes mellitus (P = 0.01). Prevalence of serious cardiovascular comorbidity was higher in the patients with COPD, including a history of MI (P = 0.02) and HF (P < 0.0001). Compared with the non-COPD group, the COPD group showed a higher risk of all-cause death (hazard ratio (HR): 2.45, P < 0.0001), cardiac death (HR: 2.53, P = 0.0002), MI (HR: 1.387, P = 0.027), and HF (HR: 2.25, P < 0.0001). Conclusions. Patients with CAD and concomitant COPD are associated with a higher incidence of MACCE (all-cause death, cardiac death, MI, and HF) compared to patients without COPD. The patients with a history of COPD have higher in-hospital and long-term mortality rates than those without COPD after PCI.
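The Cox proportional hazards analysis used here is straightforward to reproduce in outline. The sketch below uses the third-party lifelines package on a small synthetic cohort with an assumed higher hazard for COPD patients; none of the numbers are study data:

```python
# Fit a Cox proportional hazards model on a synthetic PCI cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 400
copd = rng.integers(0, 2, n)
age = rng.normal(65, 10, n)
# Synthetic event times with an assumed higher hazard for COPD patients.
time = rng.exponential(scale=60 / (1 + 1.5 * copd), size=n)
event = time < 36                       # MACCE observed within 36 months
time = np.minimum(time, 36)             # administrative censoring at 36 months

df = pd.DataFrame({"months": time, "macce": event.astype(int),
                   "copd": copd, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="macce")
cph.print_summary()                     # hazard ratio for 'copd' should be > 1
```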
Gao, Fei; Zhou, Yu Jie; Wang, Zhi Jian; Shen, Hua; Liu, Xiao Li; Nie, Bin; Yan, Zhen Xian; Yang, Shi Wei; Jia, De An; Yu, Miao
2010-04-01
The optimal antithrombotic strategy for patients with atrial fibrillation (AF) undergoing drug-eluting stent (DES) implantation is unknown. A total of 622 consecutive AF patients undergoing DES implantation were prospectively enrolled. Among them, 142 patients (TT group) continued triple antithrombotic therapy comprising aspirin, clopidogrel and warfarin after discharge; 355 patients (DT group) had dual antiplatelet therapy; 125 patients (WS group) were discharged with warfarin and a single antiplatelet agent. Target INR was set at 1.8-2.5 and was regularly monitored after discharge. The TT group had a significant reduction in stroke and major adverse cardiac and cerebral events (MACCE) (8.8% vs 20.1% vs 14.9%, P=0.010) as compared with either the DT or WS group. In the Cox regression analysis, warfarin administration (hazard ratio (HR) 0.49; 95% confidence interval (CI) 0.31-0.77; P=0.002) and baseline CHADS2 score ≥2 (HR 2.09; 95%CI 1.27-3.45; P=0.004) were independent predictors of MACCE. Importantly, the incidence of major bleeding was comparable among the 3 groups (2.9% vs 1.8% vs 2.5%, P=0.725), although the overall bleeding rate was increased in the TT group. Kaplan-Meier analysis indicated that the TT group was associated with the best net clinical outcome. The cardiovascular benefits of triple antithrombotic therapy were confirmed by the reduced MACCE rate, and its major bleeding risk may be acceptable if the INR is closely monitored.
López-Aguilar, Carlos; Abundes-Velasco, Arturo; Eid-Lidt, Guering; Piña-Reyna, Yigal; Gaspar-Hernández, Jorge
The best revascularisation method for the unprotected left main coronary artery is a current and evolving topic. A total of 2439 percutaneous coronary interventions (PCI) were registered during a 3-year period. The study included all patients with PCI of the unprotected left main coronary artery (n=48), matched with patients who underwent coronary artery bypass graft (CABG) (n=50). Major adverse cerebral and cardiac events (MACCE) were assessed in hospital and in outpatients during a 16-month follow-up. Cardiovascular risk was greater in the PCI group: logEuroSCORE 16±21 vs. 5±6, P=.001; clinical Syntax 77±74 vs. 53±39, P=.04. On admission, the PCI group had a higher frequency of ST segment elevation myocardial infarction (STEMI) and cardiogenic shock. MACCE were similar in both groups (14% vs. 18%, P=.64). STEMI was less frequent in the PCI group (0% vs. 10%, P=.03). On excluding the patients presenting with cardiogenic shock, cardiovascular events were lower in the PCI group (2.3% vs. 18%, P=.01), and there was a decrease in overall and cardiac mortality (2.3% vs. 12%, P=.08 and 2.3% vs. 8%, P=.24). MACCE were similar in both groups in the outpatient phase (15% vs. 12%, P=.46). Survival without MACCE, overall death and cardiac death were comparable between groups (log rank, P=.38, P=.44 and P=.16, respectively). Even though the clinical and peri-procedural risk profile of the PCI patients was higher, in-hospital and out-of-hospital efficacy and safety were comparable with CABG. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Publicado por Masson Doyma México S.A. All rights reserved.
Krenn, Lisa; Kopp, Christoph; Glogar, Dietmar; Lang, Irene M; Delle-Karth, Georg; Neunteufl, Thomas; Kreiner, Gerhard; Kaider, Alexandra; Bergler-Klein, Jutta; Khorsand, Aliasghar; Nikfardjam, Mariam; Laufer, Günther; Maurer, Gerald; Gyöngyösi, Mariann
2014-01-01
Objectives: Cost-effectiveness of percutaneous coronary intervention (PCI) using drug-eluting stents (DES) and coronary artery bypass surgery (CABG) was analyzed in patients with multivessel coronary artery disease over a 5-year follow-up. Background: DES implantation, by reducing the revascularization rate and associated costs, might be attractive for health economics as compared with CABG. Methods: Consecutive patients with multivessel DES-PCI (n = 114, 3.3 ± 1.2 DES/patient) or CABG (n = 85, 2.7 ± 0.9 grafts/patient) were included prospectively. The primary endpoint was the cost-benefit of multivessel DES-PCI over CABG, and the incremental cost-effectiveness ratio (ICER) was calculated. The secondary endpoint was the incidence of major adverse cardiac and cerebrovascular events (MACCE), including acute myocardial infarction (AMI), all-cause death, revascularization, and stroke. Results: Despite the use of multiple DES per patient, in-hospital costs were significantly lower for PCI than for CABG, with a difference of 4551 €/patient between the groups. At 5 years, overall costs remained higher for CABG patients (mean difference 5400 € between groups). Cost-effectiveness planes including all patients, or subgroups of elderly patients, diabetic patients, or Syntax score >32, indicated that CABG is a more effective, more costly treatment mode for multivessel disease. At the 5-year follow-up, a higher incidence of MACCE (37.7% vs. 25.8%; log rank P = 0.048) and a trend towards more AMI/death/stroke (25.4% vs. 21.2%, log rank P = 0.359) were observed for PCI as compared with CABG. The ICER indicated 45615 € or 126683 € to prevent one MACCE or one AMI/death/stroke, respectively, if CABG is performed. Conclusions: Cost-effectiveness analysis of DES-PCI vs. CABG demonstrated that CABG is the more effective, but more costly, treatment for preventing MACCE in patients with multivessel disease. © 2014 Wiley Periodicals, Inc. PMID:24403120
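The ICER reported above is, by definition, the cost difference divided by the effectiveness difference between treatments. A back-of-envelope check against the figures quoted in this abstract (a rough sketch; the published value also reflects patient-level costs and exact event counts not shown here):

```python
# Back-of-envelope ICER check using the figures quoted in this abstract.
# ICER = (cost_CABG - cost_PCI) / (events avoided by CABG per patient).
extra_cost = 5400.0                 # EUR, 5-year cost difference CABG - PCI
macce_pci, macce_cabg = 0.377, 0.258
icer_macce = extra_cost / (macce_pci - macce_cabg)
print(f"ICER per MACCE prevented: {icer_macce:,.0f} EUR")  # ~45,400 EUR
# The abstract reports 45,615 EUR; the small gap presumably reflects
# patient-level cost data not itemized in the abstract.
```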
Clinical outcomes of patients with hypothyroidism undergoing percutaneous coronary intervention
Zhang, Ming; Sara, Jaskanwal D.S.; Matsuzawa, Yasushi; Gharib, Hossein; Bell, Malcolm R.; Gulati, Rajiv; Lerman, Lilach O.
2016-01-01
Aims: The aim of this study was to investigate the association between hypothyroidism and major adverse cardiovascular and cerebral events (MACCE) in patients undergoing percutaneous coronary intervention (PCI). Methods and results: Two thousand four hundred and thirty patients who underwent PCI were included. Subjects were divided into two groups: hypothyroidism (n = 686), defined either as a history of hypothyroidism or thyroid-stimulating hormone (TSH) ≥5.0 mU/mL, and euthyroidism (n = 1744), defined as no history of hypothyroidism and/or 0.3 mU/mL ≤ TSH < 5.0 mU/mL. Patients with hypothyroidism were further categorized as untreated (n = 193), or as taking thyroid replacement therapy (TRT) with adequate replacement (0.3 mU/mL ≤ TSH < 5.0 mU/mL, n = 175) or inadequate replacement (TSH ≥ 5.0 mU/mL, n = 318). Adjusted hazard ratios (HRs) were calculated using Cox proportional hazards models. Median follow-up was 3.0 years (interquartile range, 0.5-7.0). After adjustment for covariates, the risk of MACCE and its constituent parts was higher in patients with hypothyroidism compared with those with euthyroidism (MACCE: HR: 1.28, P = 0.0001; myocardial infarction (MI): HR: 1.25, P = 0.037; heart failure: HR: 1.46, P = 0.004; revascularization: HR: 1.26, P = 0.0008; stroke: HR: 1.62, P = 0.04). Compared with untreated patients or those with inadequate replacement, adequately treated hypothyroid patients had a lower risk of MACCE (HR: 0.69, P = 0.005; HR: 0.78, P = 0.045), cardiac death (HR: 0.43, P = 0.008), MI (HR: 0.50, P = 0.0004; HR: 0.60, P = 0.02), and heart failure (HR: 0.50, P = 0.02; HR: 0.52, P = 0.017). Conclusion: Hypothyroidism is associated with a higher incidence of MACCE compared with euthyroidism in patients undergoing PCI. Maintaining adequate control on TRT is beneficial in preventing MACCE. PMID:26757789
Kang, Se Hun; Ahn, Jung-Min; Lee, Cheol Hyun; Lee, Pil Hyung; Kang, Soo-Jin; Lee, Seung-Whan; Kim, Young-Hak; Lee, Cheol Whan; Park, Seong-Wook; Park, Duk-Woo; Park, Seung-Jung
2017-07-01
Identifying predictive factors for major cardiovascular events and death in patients with unprotected left main coronary artery disease is of great clinical value for risk stratification and possible guidance for tailored preventive strategies. The Interventional Research Incorporation Society-Left MAIN Revascularization registry included 5795 patients with unprotected left main coronary artery disease (percutaneous coronary intervention, n=2850; coronary-artery bypass grafting, n=2337; medication alone, n=608). We analyzed the incidence and independent predictors of major adverse cardiac and cerebrovascular events (MACCE; a composite of death, MI, stroke, or repeat revascularization) and all-cause mortality in each treatment stratum. During follow-up (median, 4.3 years), the rates of MACCE and death were substantially higher in the medical group than in the percutaneous coronary intervention and coronary-artery bypass grafting groups (P < 0.001). In the percutaneous coronary intervention group, the 3 strongest predictors for MACCE were chronic renal failure, old age (≥65 years), and previous heart failure; those for all-cause mortality were chronic renal failure, old age, and low ejection fraction. In the coronary-artery bypass grafting group, old age, chronic renal failure, and low ejection fraction were the 3 strongest predictors of MACCE and death. In the medication group, old age, low ejection fraction, and diabetes mellitus were the 3 strongest predictors of MACCE and death. Among patients with unprotected left main coronary artery disease, the key clinical predictors for MACCE and death were generally similar regardless of index treatment. This study provides effect estimates for clinically relevant predictors of long-term clinical outcomes in real-world left main coronary artery patients, providing possible guidance for tailored preventive strategies. URL: https://clinicaltrials.gov. Unique identifier: NCT01341327. © 2017 American Heart Association, Inc.
Yatsu, Shoichiro; Naito, Ryo; Kasai, Takatoshi; Matsumoto, Hiroki; Shitara, Jun; Shimizu, Megumi; Murata, Azusa; Kato, Takao; Suda, Shoko; Hiki, Masaru; Sai, Eiryu; Miyauchi, Katsumi; Daida, Hiroyuki
2018-03-31
Sleep-disordered breathing (SDB) has been recognized as an important risk factor for coronary artery disease (CAD). However, SDB has not been fully examined in this setting because access to sleep studies is limited. Nocturnal pulse oximetry has been suggested to be a useful tool for evaluating SDB. Therefore, the aim of this study was to investigate the influence of SDB, assessed by nocturnal pulse oximetry, on clinical outcomes in patients who underwent percutaneous coronary intervention (PCI). We conducted a prospective, multicenter, observational cohort study in which SDB was assessed by finger pulse oximetry in patients who underwent PCI from January 2014 to December 2016. SDB was defined as a 4% oxygen desaturation index of 5 or higher. The primary endpoint was major adverse cardiac or cerebrovascular events (MACCE), defined as a composite of all-cause mortality, acute coronary syndrome, and/or stroke. Of 539 patients, 296 (54.9%) had SDB. MACCE occurred in 32 patients (5.8%) during a median follow-up of 1.9 years. The cumulative incidence of MACCE was significantly higher in patients with SDB (P = 0.0134). In a stepwise multivariable Cox proportional hazards model, the presence of SDB was a significant predictor of MACCE (hazard ratio 2.26; 95% confidence interval 1.05-5.4, P = 0.036). SDB determined by nocturnal pulse oximetry was associated with worse clinical outcomes in patients who underwent PCI. Screening for SDB with nocturnal pulse oximetry is therefore considered important for risk stratification in patients with CAD.
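The 4% oxygen desaturation index (ODI) used here to define SDB counts desaturation events of at least 4% per hour of recording. A simplified sketch of one way to compute it from an overnight SpO2 trace; the moving-baseline scoring rule and its parameters below are assumptions, not the study oximeter's actual algorithm:

```python
import numpy as np

def oxygen_desaturation_index(spo2, fs_hz=1.0, drop_pct=4.0, window_s=120):
    """Approximate 4% ODI: events per hour where SpO2 falls >= drop_pct below
    a moving baseline (max over the preceding window). Simplified scoring;
    real oximetry software applies additional event-duration rules."""
    spo2 = np.asarray(spo2, dtype=float)
    win = int(window_s * fs_hz)
    events, i = 0, win
    while i < len(spo2):
        baseline = spo2[i - win:i].max()
        if spo2[i] <= baseline - drop_pct:
            events += 1
            # Skip ahead until recovery to avoid double-counting one event.
            while i < len(spo2) and spo2[i] <= baseline - drop_pct:
                i += 1
        i += 1
    hours = len(spo2) / fs_hz / 3600.0
    return events / hours

# Example: 8 h of synthetic 1 Hz data with a ~60 s desaturation every 10 min.
rng = np.random.default_rng(0)
trace = 97 + rng.normal(0, 0.3, 8 * 3600)
for start in range(1000, len(trace) - 60, 600):
    trace[start:start + 60] -= 6.0
odi = oxygen_desaturation_index(trace)
print(f"4% ODI = {odi:.1f} events/h; SDB if ODI >= 5")
```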
Kappetein, Arie Pieter; Head, Stuart J; Morice, Marie-Claude; Banning, Adrian P; Serruys, Patrick W; Mohr, Friedrich-Wilhelm; Dawkins, Keith D; Mack, Michael J
2013-05-01
This prespecified subgroup analysis examined the effect of diabetes on left main coronary disease (LM) and/or three-vessel disease (3VD) in patients treated with percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) in the SYNTAX trial. Patients (n = 1800) with LM and/or 3VD were randomized to receive either PCI with TAXUS Express paclitaxel-eluting stents or CABG. Five-year outcomes in subgroups with (n = 452) or without (n = 1348) diabetes were examined: major adverse cardiac or cerebrovascular events (MACCE), the composite safety end-point of all-cause death/stroke/myocardial infarction (MI) and individual MACCE components death, stroke, MI and repeat revascularization. Event rates were estimated with Kaplan-Meier analyses. In diabetic patients, 5-year rates were significantly higher for PCI vs CABG for MACCE (PCI: 46.5% vs CABG: 29.0%; P < 0.001) and repeat revascularization (PCI: 35.3% vs CABG: 14.6%; P < 0.001). There was no difference in the composite of all-cause death/stroke/MI (PCI: 23.9% vs CABG: 19.1%; P = 0.26) or individual components all-cause death (PCI: 19.5% vs CABG: 12.9%; P = 0.065), stroke (PCI: 3.0% vs CABG: 4.7%; P = 0.34) or MI (PCI: 9.0% vs CABG: 5.4%; P = 0.20). In non-diabetic patients, rates with PCI were also higher for MACCE (PCI: 34.1% vs CABG: 26.3%; P = 0.002) and repeat revascularization (PCI: 22.8% vs CABG: 13.4%; P < 0.001), but not for the composite end-point of all-cause death/stroke/MI (PCI: 19.8% vs CABG: 15.9%; P = 0.069). There were no differences in all-cause death (PCI: 12.0% vs CABG: 10.9%; P = 0.48) or stroke (PCI: 2.2% vs CABG: 3.5%; P = 0.15), but rates of MI (PCI: 9.9% vs CABG: 3.4%; P < 0.001) were significantly increased in the PCI arm in non-diabetic patients. In both diabetic and non-diabetic patients, PCI resulted in higher rates of MACCE and repeat revascularization at 5 years. Although PCI is a potential treatment option in patients with less-complex lesions, CABG should be the revascularization option of choice for patients with more-complex anatomic disease, especially with concurrent diabetes.
Morice, Marie-Claude; Feldman, Ted E E; Mack, Michael J; Ståhle, Elisabeth; Holmes, David R; Colombo, Antonio; Morel, Marie-Angèle; van den Brand, Marcel; Serruys, Patrick W; Mohr, Friedrich; Carrié, Didier; Fournial, Gérard; James, Stefan; Leadley, Katrin; Dawkins, Keith D; Kappetein, A Pieter
2011-10-30
The SYNTAX-LE MANS substudy prospectively evaluated 15-month angiographic and clinical outcomes in patients with treated left main (LM) disease. In the SYNTAX trial, 1,800 patients with three-vessel and/or LM disease were randomised to either CABG or PCI; of these, 271 LM patients were prospectively assigned to receive a 15-month angiogram. The primary endpoint for the CABG arm was the ratio of ≥50% to <100% obstructed/occluded grafts bypassing LM lesions to the number placed. The primary endpoint for the PCI arm was the proportion of patients with ≤50% diameter stenosis ('patent' stents) of treated LM lesions. Per protocol, no formal comparison between CABG and PCI arms was intended based on the differing primary endpoints. Available 15-month angiograms were analysed for 114 CABG and 149 PCI patients. At 15 months, 9.9% (26/263) of CABG grafts were 100% occluded and an additional 5.7% (15/263) were ≥50% to <100% occluded. Overall, 27.2% (31/114) of patients had ≥1 obstructed/occluded graft. The 15-month CABG MACCE rate was 8.8% (10/114) and MACCE at 15 months was not significantly associated with graft obstruction/occlusion (p=0.85). In the PCI arm, 92.4% (134/145) of patients had ≤50% diameter LM stenosis at 15 months (89.7% [87/97] distal LM lesions and 97.9% [47/48] non-distal LM lesions). The 15-month PCI MACCE rate was 12.8% (20/156) and this was significantly associated with lack of stent patency at 15 months (p<0.001), mainly due to repeat revascularisation. At 15 months, 15.6% (41/263) of grafts were at least 50% obstructed but this was not significantly associated with MACCE; 92.4% (134/145) of patients had stents that remained patent at 15 months, and stent restenosis was significantly associated with MACCE, predominantly due to revascularisation.
A comparison of two brands of clopidogrel in patients with drug-eluting stent implantation.
Park, Yae Min; Ahn, Taehoon; Lee, Kyounghoon; Shin, Kwen-Chul; Jung, Eul Sik; Shin, Dong Su; Kim, Myeong Gun; Kang, Woong Chol; Han, Seung Hwan; Choi, In Suck; Shin, Eak Kyun
2012-07-01
Although generic clopidogrel is widely used, the clinical efficacy and safety of generic versus original clopidogrel have not been well evaluated. The aim of this study was to evaluate the clinical outcomes of 2 oral formulations of clopidogrel 75 mg tablets in patients with coronary artery disease (CAD) undergoing drug-eluting stent (DES) implantation. Between July 2006 and February 2009, 428 patients who underwent implantation of DES for CAD and completed >1 year of clinical follow-up were enrolled in this study. Patients were divided into 2 groups based on treatment formulation: Platless® (test formulation, n=211) or Plavix® (reference formulation, n=217). The incidence of 1-year major adverse cardiovascular and cerebrovascular events (MACCE) and stent thrombosis (ST) was retrospectively reviewed. Baseline demographic and procedural characteristics were not significantly different between the two treatment groups. The incidence of 1-year MACCE was 8.5% {19/211: 2 deaths, 4 myocardial infarctions (MIs), 2 strokes, and 11 target vessel revascularizations (TVRs)} in the Platless® group vs. 7.4% (16/217: 4 deaths, 1 MI, 2 strokes, and 9 TVRs) in the Plavix® group (p=0.66). The incidence of 1-year ST was 0.5% (1 definite, subacute ST) in the Platless® group vs. 0% in the Plavix® group (p=0.49). In this study, the 2 tablet preparations of clopidogrel showed similar rates of MACCE, but additional prospective randomized studies of pharmacodynamics and platelet reactivity are needed to determine whether generic clopidogrel may replace original clopidogrel.
Lipoprotein(a) levels predict adverse vascular events after acute myocardial infarction.
Mitsuda, Takayuki; Uemura, Yusuke; Ishii, Hideki; Takemoto, Kenji; Uchikawa, Tomohiro; Koyasu, Masayoshi; Ishikawa, Shinji; Miura, Ayako; Imai, Ryo; Iwamiya, Satoshi; Ozaki, Yuta; Kato, Tomohiro; Shibata, Rei; Watarai, Masato; Murohara, Toyoaki
2016-12-01
Lipoprotein(a) [Lp(a)], which is genetically determined, has been reported as an independent risk factor for atherosclerotic vascular disease. However, the prognostic value of Lp(a) for secondary vascular events in patients with established coronary artery disease has not been fully elucidated. This 3-year observational study included a total of 176 patients with ST-elevation myocardial infarction (STEMI) whose Lp(a) levels were measured within 24 h after primary percutaneous coronary intervention. We divided enrolled patients into two groups according to Lp(a) level and investigated the association between Lp(a) and the incidence of major adverse cardiac and cerebrovascular events (MACCE). Kaplan-Meier analysis demonstrated that patients with higher Lp(a) levels had a higher incidence of MACCE than those with lower Lp(a) levels (log-rank P = 0.034). A multivariate Cox regression analysis revealed that Lp(a) levels were independently correlated with the occurrence of MACCE after adjusting for other classical risk factors for atherosclerotic vascular disease (hazard ratio 1.030, 95% confidence interval: 1.011-1.048, P = 0.002). In receiver-operating characteristic curve analysis, the cutoff value maximizing the predictive power of Lp(a) was 19.0 mg/dl (area under the curve = 0.674, sensitivity 69.2%, specificity 62.0%). Evaluation of Lp(a) in addition to the established coronary risk factors improved their predictive value for the occurrence of MACCE. In conclusion, Lp(a) levels at admission independently predict secondary vascular events in patients with STEMI. Lp(a) might provide useful information for the development of secondary prevention strategies in patients with myocardial infarction.
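A cutoff that "maximizes the predictive power" on an ROC curve is commonly chosen by maximizing Youden's J (sensitivity + specificity - 1); the abstract does not name the criterion, so that choice, and the synthetic data below, are assumptions in this sketch:

```python
# Hedged sketch: choosing a biomarker cutoff by maximizing Youden's J
# (sensitivity + specificity - 1) on an ROC curve. Synthetic data; the
# abstract does not state which optimality criterion the authors used.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic Lp(a)-like values: patients with events tend to have higher levels.
lp_a_no_event = rng.lognormal(mean=2.6, sigma=0.6, size=140)
lp_a_event = rng.lognormal(mean=3.1, sigma=0.6, size=36)
values = np.concatenate([lp_a_no_event, lp_a_event])
labels = np.concatenate([np.zeros(140), np.ones(36)])

fpr, tpr, thresholds = roc_curve(labels, values)
j = tpr - fpr                      # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(labels, values):.3f}")
print(f"cutoff = {thresholds[best]:.1f} mg/dl, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```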
Kim, Yong Hoon; Her, Ae-Young; Kim, Byeong-Keuk; Shin, Dong-Ho; Kim, Jung-Sun; Ko, Young-Guk; Choi, Donghoon; Hong, Myeong-Ki; Jang, Yangsoo
2017-01-01
Objective: The appropriate selection of elderly patients for revascularization has become increasingly important because these patients are more likely to experience a major adverse cardiac or cerebrovascular event after percutaneous coronary intervention (PCI). The objective of this study was to identify important independent risk factors predicting clinical outcomes in elderly patients after successful PCI, particularly in a South Korean population. Methods: This was a prospective, multicenter, observational cross-sectional study. A total of 1,884 consecutive patients who underwent successful PCI with Nobori® Biolimus A9-eluting stents were enrolled between April 2010 and December 2012. They were divided into two groups according to age: patients <75 years old (younger patient group) and ≥75 years old (elderly patient group). The primary endpoint was major adverse cardiac or cerebrovascular events (MACCE) at 1 year after index PCI. Results: The 1-year cumulative incidence of MACCE (12.9% vs. 4.3%, p<0.001) and total death (7.1% vs. 1.5%, p<0.001) was significantly higher in the elderly group than in the younger group. Previous cerebrovascular disease was significantly correlated with MACCE in elderly patients at 1 year after PCI (hazard ratio, 2.804; 95% confidence interval, 1.290–6.093; p=0.009). Conclusion: Previous cerebrovascular disease is an important independent predictor of MACCE in elderly patients at 1 year after PCI with Nobori® Biolimus A9-eluting stents, especially in a South Korean population. Therefore, careful PCI with intensive monitoring and management may improve major clinical outcomes in elderly patients with previous cerebrovascular disease. PMID:28554989
Air Support Control Officer Individual Position Training Simulation
2017-06-01
Glossary excerpt: analysis, design, development, implementation, evaluation; ASCO, air support control officer; ASLT, air support liaison team; ASNO, air support net operator; ... instructional system design; LSTM, long-short term memory; MACCS, Marine Air Command and Control System; MAGTF, Marine Air Ground Task Force; MASS, Marine Air... information to designated MACCS agencies. ASCOs play an important part in facilitating the safe and successful conduct of air operations in DASC-controlled
Global data set of biogenic VOC emissions calculated by the MEGAN model over the last 30 years
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sindelarova, K.; Granier, Claire; Bouarar, I.
The Model of Emissions of Gases and Aerosols from Nature (MEGANv2.1), together with the Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields, was used to create a global emission dataset of biogenic VOCs available on a monthly basis for the time period 1980-2010. This dataset is called MEGAN-MACC. The model estimated a mean annual total BVOC emission of 760 Tg(C) yr⁻¹, consisting of isoprene (70%), monoterpenes (11%), methanol (6%), acetone (3%), sesquiterpenes (2.5%) and other BVOC species each contributing less than 2%. Several sensitivity model runs were performed to study the impact of different model inputs and model settings on isoprene estimates, and resulted in differences of ±17% of the reference isoprene total. A greater impact was observed for a sensitivity run applying a parameterization of soil moisture deficit, which led to a 50% reduction of isoprene emissions on a global scale, most significantly in specific regions of Africa, South America and Australia. MEGAN-MACC estimates are comparable to the results of previous studies. More detailed comparison with other isoprene inventories indicated significant spatial and temporal differences between the datasets, especially for Australia, Southeast Asia and South America. MEGAN-MACC estimates of isoprene and α-pinene showed reasonable agreement with surface flux measurements in the Amazon, and the model was able to capture the seasonal variation of emissions in this region.
The Minnesota Adolescent Community Cohort Study: Design and Baseline Results
Forster, Jean; Chen, Vincent; Perry, Cheryl; Oswald, John; Willmorth, Michael
2014-01-01
The Minnesota Adolescent Community Cohort (MACC) Study is a population-based, longitudinal study that enrolled 3636 youth from Minnesota and 605 youth from comparison states, aged 12 to 16 years, in 2000-2001. Participants have been surveyed by telephone semi-annually about their tobacco-related attitudes and behaviors. The goals of the study are to evaluate the effects of the Minnesota Youth Tobacco Prevention Initiative and its shutdown on youth smoking patterns, and to better define the patterns of development of tobacco use in adolescents. A multilevel sample was constructed representing individuals, local jurisdictions and the entire state, and data are collected to characterize each of these levels. This paper presents the details of the multilevel study design. We also provide baseline information about MACC participants, including demographics and tobacco-related attitudes and behaviors. This paper describes smoking prevalence at the local level and compares MACC participants to the state as a whole. PMID:21360063
Wang, Shifei; Li, Hairui; He, Nvqin; Sun, Yili; Guo, Shengcun; Liao, Wangjun; Liao, Yulin; Chen, Yanmei; Bin, Jianping
2017-01-15
The impact of remote ischaemic preconditioning (RIPC) on major clinical outcomes in patients undergoing cardiovascular surgery remains controversial. We systematically reviewed the available evidence to evaluate the potential benefits of RIPC in such patients. PubMed, Embase, and Cochrane Library databases were searched for relevant randomised controlled trials (RCTs) conducted between January 2006 and March 2016. The pooled population of patients who underwent cardiovascular surgery was divided into the RIPC and control groups. Trial sequential analysis was applied to judge data reliability. The pooled relative risks (RRs) with 95% confidence intervals (CIs) between the groups were calculated for all-cause mortality, major adverse cardiovascular and cerebral events (MACCEs), myocardial infarction (MI), and renal failure. RIPC was not associated with improvement in all-cause mortality (RR, 1.04; 95% CI, 0.82-1.31; I² = 26%; P > 0.05) or MACCE incidence (RR, 0.90; 95% CI, 0.71-1.14; I² = 40%; P > 0.05) after cardiovascular surgery, and both results were assessed by trial sequential analysis as sufficient and conclusive. Nevertheless, RIPC was associated with a significantly lower incidence of MI (RR, 0.87; 95% CI, 0.76-1.00; I² = 13%; P ≤ 0.05). However, after excluding a study that had a high contribution to heterogeneity, RIPC was associated with increased rates of renal failure (RR, 1.53; 95% CI, 1.12-2.10; I² = 5%; P ≤ 0.05). In patients undergoing cardiovascular surgery, RIPC reduced the risk for postoperative MI, but not that for MACCEs or all-cause mortality, a discrepancy likely related to the higher rate of renal failure associated with RIPC. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Consistent evaluation of GOSAT, SCIAMACHY, carbontracker, and MACC through comparisons to TCCON
Kulawik, S. S.; Wunch, D.; O'Dell, C.; ...
2015-06-22
Consistent validation of satellite CO2 estimates is a prerequisite for using multiple satellite CO2 measurements for joint flux inversion and for establishing an accurate long-term atmospheric CO2 data record. We focus on validating model and satellite observation attributes that impact flux estimates and CO2 assimilation, including accurate error estimates, correlated and random errors, overall biases, biases by season and latitude, the impact of coincidence criteria, validation of seasonal cycle phase and amplitude, yearly growth, and daily variability. We evaluate the dry air mole fraction (XCO2) for GOSAT (ACOS b3.5) and SCIAMACHY (BESD v2.00.08), as well as the CarbonTracker (CT2013b) simulated CO2 mole fraction fields and the MACC CO2 inversion system (v13.1), and compare these to TCCON observations (GGG2014). We find standard deviations of 0.9, 0.9, 1.7, and 2.1 ppm versus TCCON for CT2013b, MACC, GOSAT, and SCIAMACHY, respectively, with the single-target errors 1.9 and 0.9 times the predicted errors for GOSAT and SCIAMACHY, respectively. When satellite data are averaged and interpreted according to error² = a² + b²/n (where n is the number of observations averaged, a the systematic (correlated) errors, and b the random (uncorrelated) errors), we find that the correlated error term a = 0.6 ppm and the uncorrelated error term b = 1.7 ppm for GOSAT, and a = 1.0 ppm, b = 1.4 ppm for SCIAMACHY regional averages. Biases at individual stations have year-to-year variability of ~0.3 ppm, with biases larger than the TCCON predicted bias uncertainty of 0.4 ppm at many stations. Using fitting software, we find that GOSAT underpredicts the seasonal cycle amplitude in the Northern Hemisphere (NH) between 46-53° N. In the Southern Hemisphere (SH), CT2013b underestimates the seasonal cycle amplitude. Biases are calculated for 3-month intervals and indicate the months that contribute to the observed amplitude differences. The seasonal cycle phase indicates whether a dataset or model lags another dataset in time. We calculate this at a subset of stations where there are adequate satellite data, and find that the GOSAT retrieved phase improves substantially over the prior, and the SCIAMACHY retrieved phase improves substantially for 2 of 7 sites. The models reproduce the measured seasonal cycle phase well except at Lauder125 (CT2013b), Darwin (MACC), and Izana (+10 days, CT2013b), as well as at Bremen and Four Corners, which are highly influenced by local effects. We compare the variability within one day between TCCON and models in JJA; there is correlation between 0.2 and 0.8 in the NH, with models showing 10-100% of the variability of TCCON at different stations (except Bremen and Four Corners, which show no variability compared to TCCON) and CT2013b showing more variability than MACC. This paper highlights findings that provide inputs for estimating flux errors in model assimilations, and places where models and satellites need further investigation, e.g. the SH for models and 45-67° N for GOSAT.
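The averaging model error² = a² + b²/n implies that random errors shrink with the number of soundings averaged while the correlated term sets a floor. A small numeric illustration using the GOSAT terms quoted above (a = 0.6 ppm, b = 1.7 ppm):

```python
# Illustration of the averaging error model error^2 = a^2 + b^2/n used above,
# with the GOSAT regional-average terms quoted in the abstract
# (a = 0.6 ppm correlated, b = 1.7 ppm uncorrelated).
import math

a, b = 0.6, 1.7  # ppm
for n in (1, 4, 16, 64, 256):
    err = math.sqrt(a**2 + b**2 / n)
    print(f"n = {n:3d} soundings averaged -> expected error {err:.2f} ppm")
# As n grows, the random term b/sqrt(n) vanishes and the total error
# asymptotes to the correlated (systematic) floor a = 0.6 ppm.
```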
Marginal abatement cost curves for NOx that account for ...
A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their respective cost effectiveness. Alternative measures, such as renewable electricity, energy efficiency, and fuel switching (RE/EE/FS), are not considered because it is difficult to quantify their abatement potential. In this paper, we demonstrate the use of an energy system model to develop a MACC for nitrogen oxides (NOx) that incorporates both end-of-pipe controls and these alternative measures. We decompose the MACC by sector and evaluate the cost-effectiveness of RE/EE/FS relative to end-of-pipe controls. RE/EE/FS are shown to produce considerable emission reductions after end-of-pipe controls have been exhausted. Furthermore, some RE/EE/FS are shown to be cost-competitive with end-of-pipe controls. We demonstrate how the MARKAL energy system model can be used to evaluate the potential role of RE/EE/FS in achieving NOx reductions, and show that, for this particular analysis, RE/EE/FS increase the quantity of NOx reductions available at a given marginal cost (ranging from $5k per ton to $40k per ton) by approximately 50%.
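As described above, a conventional MACC is built by sorting discrete controls by cost-effectiveness and accumulating their abatement. A minimal sketch of that construction; the control names, costs, and abatement figures are invented placeholders, not values from the paper:

```python
# Minimal sketch of building a marginal abatement cost curve (MACC) by
# sorting end-of-pipe controls by cost-effectiveness, as described above.
# Control names, costs, and abatement potentials are invented placeholders.
controls = [
    # (name, abatement in tons NOx/yr, annualized cost in $/yr)
    ("low-NOx burner retrofit", 1200, 3_000_000),
    ("SCR on unit A", 4000, 28_000_000),
    ("SNCR on unit B", 900, 4_500_000),
    ("boiler tune-up", 300, 450_000),
]
# Cost-effectiveness = $ per ton abated; cheapest measures come first.
ranked = sorted(controls, key=lambda c: c[2] / c[1])

cumulative = 0
print(f"{'control':<28}{'$/ton':>10}{'cum. tons':>12}")
for name, tons, cost in ranked:
    cumulative += tons
    print(f"{name:<28}{cost / tons:>10,.0f}{cumulative:>12,}")
# Plotting cumulative tons abated (x) against $/ton (y) as a step
# function traces out the MACC for these end-of-pipe controls.
```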
Wańha, Wojciech; Kawecki, Damian; Roleder, Tomasz; Pluta, Aleksandra; Marcinkiewicz, Kamil; Dola, Janusz; Morawiec, Beata; Krzych, Łukasz; Pawłowski, Tomasz; Smolka, Grzegorz; Ochała, Andrzej; Nowalany-Kozielska, Ewa; Tendera, Michał; Wojakowski, Wojciech
2016-01-01
Coexisting anaemia is associated with an increased risk of major adverse cardiac and cerebrovascular events (MACCE) and bleeding complications after percutaneous coronary intervention (PCI), especially in patients with acute coronary syndrome. The aim was to assess the impact of anaemia on one-year MACCE in patients with coronary artery disease (CAD) treated with first- and second-generation drug-eluting stents (DES). The registry included 1916 consecutive patients (UA: n = 1502, 78.3%; NSTEMI: n = 283, 14.7%; STEMI/LBBB: n = 131, 6.8%) treated either with first- (34%) or second-generation (66%) DES. The study population was divided into two groups: patients presenting with anaemia (n = 217, 11%) and without anaemia (n = 1699, 89%) prior to PCI. Anaemia was defined according to the World Health Organization criteria (haemoglobin [Hb] level < 13 g/dL for men and < 12 g/dL for women). Patients with anaemia were older (69, IQR: 61-75 vs. 62, IQR: 56-70, p < 0.001) and had a higher prevalence of co-morbidities: diabetes (44.7% vs. 36.4%, p = 0.020), chronic kidney disease (31.3% vs. 19.4%; p < 0.001), peripheral artery disease (10.1% vs. 5.4%, p = 0.005), and lower left ventricular ejection fraction values (50, IQR: 40-57% vs. 55, IQR: 45-60%; p < 0.001). No difference in the frequency of anaemia between genders was found. Patients with anaemia more often had prior myocardial infarction (MI) (57.6% vs. 46.4%; p = 0.002) and coronary artery bypass grafting (31.3% vs. 19.4%; p < 0.001) in comparison with patients without anaemia. They also more often had multivessel disease on angiography (36.4% vs. 26.1%; p = 0.001) and more complex CAD as measured by SYNTAX score (21, IQR: 12-27 points vs. 14, IQR: 8-22 points; p = 0.001). The in-hospital risk of acute heart failure (2.7% vs. 0.7%; p = 0.006) and bleeding requiring transfusion (3.2% vs. 0.5%; p < 0.001) was significantly higher in patients with anaemia. One-year follow-up showed a higher rate of death in patients with anaemia. However, there were no differences in MI, stroke, target vessel revascularisation (TVR) or MACCE in comparison with patients with normal Hb. There were no differences according to type of DES (first vs. second generation) in the population of patients with anaemia. In patients with anaemia there is a significantly higher risk of death at 12-month follow-up, but anaemia has no impact on the incidence of MI, repeat revascularisation, stroke or MACCE. There is no advantage of second-generation over first-generation DES in terms of MACCE and TVR in patients with anaemia.
Wang, Ning; Zhang, Yang; Liang, Huaxin
2018-02-14
The dysregulation of microRNAs (miRNAs) expression is closely related with tumorigenesis and tumour development in glioblastoma (GBM). In this study, we found that miRNA-598 (miR-598) expression was significantly downregulated in GBM tissues and cell lines. Restoring miR-598 expression inhibited cell proliferation and invasion in GBM. Moreover, we validated that metastasis associated in colon cancer-1 (MACC1) is a novel target of miR-598 in GBM. Recovered MACC1 expression reversed the inhibitory effects of miR-598 overexpression on GBM cells. In addition, miR-598 overexpression suppressed the Met/AKT pathway activation in GBM. Our results provided compelling evidence that miR-598 serves tumour suppressive roles in GBM and that its anti-oncogenic effects are mediated chiefly through the direct suppression of MACC1 expression and regulation of the Met/AKT signalling pathway. Therefore, miR-598 is a potential target in the treatment of GBM.
Kang, Dong Oh; Yu, Cheol Woong; Kim, Hee Dong; Cho, Jae Young; Joo, Hyung Joon; Choi, Rak Kyong; Park, Jin Sik; Lee, Hyun Jong; Kim, Je Sang; Park, Jae Hyung; Hong, Soon Jun; Lim, Do-Sun
2015-08-01
The optimal antithrombotic regimen in patients with atrial fibrillation (AF) undergoing drug-eluting stent (DES) implantation for complex coronary artery disease is unclear. We compared the net clinical outcomes of triple antithrombotic therapy (TAT; aspirin, thienopyridine, and warfarin) and dual antiplatelet therapy (DAPT; aspirin and thienopyridine) in AF patients who had undergone DES implantation. A total of 367 patients were enrolled and analyzed retrospectively; 131 patients (35.7%) received TAT and 236 patients (64.3%) received DAPT. DAPT and warfarin were maintained for a minimum of 12 and 24 months, respectively. The primary endpoint was the 2-year net clinical outcomes, a composite of major bleeding and major adverse cardiac and cerebral events (MACCE). Propensity score-matching analysis was carried out in 99 patient pairs. The 2-year net clinical outcomes of the TAT group were worse than those of the DAPT group (34.3 vs. 21.1%, P=0.006), which was mainly due to the higher incidence of major bleeding (16.7 vs. 4.6%, P<0.001), without any significant increase in MACCE (22.1 vs. 17.7%, P=0.313). In the multivariate analysis, TAT was an independent predictor of worse net clinical outcomes (odds ratio 1.63, 95% confidence interval 1.06-2.50) and major bleeding (odds ratio 3.54, 95% confidence interval 1.65-7.58). After propensity score matching, the TAT group still had worse net clinical outcomes and a higher incidence of major bleeding compared with the DAPT group. In AF patients undergoing DES implantation, prolonged administration of TAT may be harmful due to the substantial increase in the risk for major bleeding without any reduction in MACCE.
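Propensity-score matching of the kind used here (yielding 99 TAT/DAPT pairs) typically fits a logistic model for treatment assignment and pairs each treated patient with the nearest-scoring control. A hedged sketch on synthetic data; the study's actual covariates, matching algorithm, and caliper are not reported, so greedy 1:1 nearest-neighbor matching with a 0.05 caliper is an assumption:

```python
# Hedged sketch: greedy 1:1 nearest-neighbor propensity-score matching,
# of the general kind used in this study. Synthetic data; the study's
# actual covariates, algorithm, and caliper are not reported.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
X = np.column_stack([rng.normal(70, 9, n),      # age (illustrative)
                     rng.integers(0, 2, n),     # diabetes (illustrative)
                     rng.normal(55, 10, n)])    # ejection fraction
# Treatment assignment (TAT vs DAPT) depends on covariates -> confounding.
logit = -6 + 0.06 * X[:, 0] + 0.5 * X[:, 1] - 0.01 * X[:, 2]
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
t_idx = np.flatnonzero(treated)
c_idx = list(np.flatnonzero(~treated))
pairs, caliper = [], 0.05          # caliper on the PS scale (assumption)
for i in t_idx:
    j = min(c_idx, key=lambda k: abs(ps[k] - ps[i]))
    if abs(ps[j] - ps[i]) <= caliper:
        pairs.append((i, j))
        c_idx.remove(j)            # match without replacement
print(f"matched {len(pairs)} pairs out of {treated.sum()} treated patients")
```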
Cavallari, Ilaria; Ruff, Christian T; Nordio, Francesco; Deenadayalu, Naveen; Shi, Minggao; Lanz, Hans; Rutman, Howard; Mercuri, Michele F; Antman, Elliott M; Braunwald, Eugene; Giugliano, Robert P
2018-04-15
Patients with atrial fibrillation (AF) who interrupt anticoagulation are at high risk of thromboembolism and death. Patients enrolled in the ENGAGE AF-TIMI 48 trial (a randomized comparison of edoxaban vs. warfarin) who interrupted study anticoagulant for >3 days were identified. Clinical events (ischemic stroke/systemic embolism, major cardiac and cerebrovascular events [MACCE]) were analyzed from day 4 after interruption until day 34 or study drug resumption. During a median follow-up of 2.8 years, 13,311 (63%) patients interrupted study drug for >3 days. After excluding those who received open-label anticoagulation during the at-risk window, the population for analysis included 9148 patients. The rates of ischemic stroke/systemic embolism and MACCE post interruption were substantially greater than in patients who never interrupted (15.42 vs. 0.26 and 60.82 vs. 0.36 per 100 patient-years, respectively; adjusted p < .001). Patients who interrupted study drug for an adverse event (44.1% of the cohort), compared with those who interrupted for other reasons, had an increased risk of MACCE (adjusted HR 2.75; 95% CI 2.02-3.74, p < .0001) but similar rates of ischemic stroke/systemic embolism. Rates of clinical events after interruption of warfarin and edoxaban were similar. Interruption of study drug was frequent in patients with AF and was associated with a substantial risk of major cardiac and cerebrovascular events over the ensuing 30 days. This risk was particularly high in patients who interrupted as a result of an adverse event; these patients deserve close monitoring and resumption of anticoagulation as soon as it is safe to do so. Copyright © 2018 Elsevier B.V. All rights reserved.
Pattanshetty, Deepak J; Bhat, Pradeep K; Aneja, Ashish; Pillai, Dilip P
2012-12-01
Hypertensive crisis is associated with poor clinical outcomes. Elevated troponin, frequently observed in hypertensive crisis, may be attributed to myocardial supply-demand mismatch or obstructive coronary artery disease (CAD). However, in patients presenting with hypertensive crisis and an elevated troponin, the prevalence of CAD and the long-term adverse cardiovascular outcomes are unknown. We sought to assess the impact of elevated troponin on cardiovascular outcomes and evaluate the role of troponin as a predictor of obstructive CAD in patients with hypertensive crisis. Patients who presented with hypertensive crisis (n = 236) were screened retrospectively. Baseline and follow-up data including the event rates were obtained using electronic patient records. Those without an assay for cardiac Troponin I (cTnI) (n = 65) were excluded. Of the remaining 171 patients, those with elevated cTnI (cTnI ≥ 0.12 ng/ml) (n = 56) were compared with those with normal cTnI (cTnI < 0.12 ng/ml) (n = 115) at 2 years for the occurrence of major adverse cardiac or cerebrovascular events (MACCE) (composite of myocardial infarction, unstable angina, hypertensive crisis, pulmonary edema, stroke or transient ischemic attack). At 2 years, MACCE occurred in 40 (71.4%) patients with elevated cTnI compared with 44 (38.3%) patients with normal cTnI [hazard ratio: 2.77; 95% confidence interval (CI): 1.79-4.27; P < 0.001]. Also, patients with elevated cTnI were significantly more likely to have underlying obstructive CAD (odds ratio: 8.97; 95% CI: 1.4-55.9; P < 0.01). In patients with hypertensive crisis, elevated cTnI confers a significantly greater risk of long-term MACCE, and is a strong predictor of obstructive CAD.
Karpov, Yu; Logunova, N; Tomilova, D; Buza, V; Khomitskaya, Yu
2017-02-01
The OPTIMA II study sought to evaluate rates of major adverse cardiac and cerebrovascular events (MACCEs) during the long-term follow-up of chronic statin users who underwent percutaneous coronary intervention (PCI) with implantation of a drug-eluting stent (DES). OPTIMA II was a non-interventional, observational study conducted at a single center in the Russian Federation (ClinicalTrials.gov identifier: NCT02099565). Included patients were aged ≥18 years with stable angina who had received long-term (≥1 month) statin therapy prior to elective PCI with DES implantation and who had participated in the original OPTIMA study. Patients received treatment for stable angina after PCI as per routine study site clinical practice. Study data were collected from patient medical records and a routine visit 4 years after PCI. The primary outcome measure was the rate of MACCEs 4 years after PCI. Overall, 543 patients agreed to participate in the study (90.2% of patients in the original OPTIMA study). The mean (± standard deviation) duration of follow-up from the date of PCI to data collection was 4.42 ± 0.58 (range: 0.28-5.56) years. The frequency of MACCEs (including data in patients who died) was 30.8% (95% confidence interval: 27.0-34.7); half of the MACCEs occurred in the first year of follow-up. After PCI, the majority of patients had no clinical signs of angina. Overall, 24.3% of patients discontinued statin intake in the 4 years after PCI. Only 7.7% of patients achieved a low-density lipoprotein (LDL) cholesterol goal of <1.8 mmol/L. Key limitations of this study relate to its observational nature: the sample size was small, the clinical results were derived from outpatient and hospital medical records, only one follow-up visit was performed at the end of the study (after 4 years of follow-up), only depersonalized medical information was made available for statistical analysis, and adherence to statin treatment was evaluated on the basis of a patient questionnaire. Long-term follow-up of patients who underwent PCI with DES implantation demonstrated MACCEs in nearly one-third of patients, which is comparable to data from other studies. PCI was associated with relief from angina or minimal angina frequency, but compliance with statin therapy and the achievement of LDL cholesterol targets 4 years after PCI were suboptimal.
Frye, Mark A; Hinton, David J; Karpyak, Victor M; Biernacka, Joanna M; Gunderson, Lee J; Feeder, Scott E; Choi, Doo-Sup; Port, John D
2016-12-01
Although the precise drug mechanism of action of acamprosate remains unclear, its antidipsotropic effect is mediated in part through glutamatergic neurotransmission. We evaluated the effect of 4 weeks of acamprosate treatment in a cohort of 13 subjects with alcohol dependence (confirmed by a structured interview, Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) on proton magnetic resonance spectroscopy glutamate levels in the midline anterior cingulate cortex (MACC). We compared levels of metabolites with a group of 16 healthy controls. The Pennsylvania Alcohol Craving Scale was used to assess craving intensity. At baseline, before treatment, the mean cerebrospinal fluid-corrected MACC glutamate (Glu) level was significantly elevated in subjects with alcohol dependence compared with controls (P = 0.004). Four weeks of acamprosate treatment reduced glutamate levels (P = 0.025), an effect that was not observed in subjects who did not take acamprosate. At baseline, there was a significant positive correlation between cravings, measured by the Pennsylvania Alcohol Craving Scale, and MACC (Glu) levels (P = 0.019). Overall, these data would suggest a normalizing effect of acamprosate on a hyperglutamatergic state observed in recently withdrawn patients with alcohol dependence and a positive association between MACC glutamate levels and craving intensity in early abstinence. Further research is needed to evaluate the use of these findings for clinical practice, including monitoring of craving intensity and individualized selection of treatment with antidipsotropic medications in subjects with alcohol dependence.
2010-06-01
A mutation signature is prognostic in EGFR wild-type lung adenocarcinomas and identifies metastasis associated in colon cancer 1 (MACC1) as an EGFR... T790M mutation (N=7) (AUC: area under the curve). [Figure 3: EGFR dependency signature is a favorable prognostic factor.] EGFR index... developed. The signature was shown to be prognostic regardless of EGFR status. The results also suggest MACC1 to be a regulator of MET in NSCLC
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short-SOA condition only. Comparison of the morphology of the recovered cortical responses in the short- and long-SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short-SOA condition showed significantly lower amplitudes and shorter latencies compared with the long-SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared with the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of cortical auditory evoked responses to rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons' responses to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker of discrimination of rapid sound transitions. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
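The LS deconvolution used here models the recording as a superposition of a response waveform shifted to each (jittered) stimulus onset and recovers that waveform by solving a linear system. A toy single-response-class sketch of the idea on synthetic data (the study's actual design involved six response classes):

```python
# Toy sketch of least squares (LS) deconvolution of overlapping evoked
# responses: the recording y is modeled as a fixed response x shifted to
# each jittered stimulus onset, y = A @ x, and x is recovered by linear
# least squares. Synthetic single-class example.
import numpy as np

fs = 1000                                  # sampling rate, Hz
resp_len = 400                             # samples (~400 ms response)
t = np.arange(resp_len) / fs
true_resp = np.sin(2 * np.pi * 8 * t) * np.exp(-t / 0.08)  # damped wave

rng = np.random.default_rng(3)
onsets = np.cumsum(rng.integers(120, 150, size=60))  # jittered SOAs ~135 ms
n_samples = onsets[-1] + resp_len
y = rng.normal(0, 0.3, n_samples)          # noise floor
A = np.zeros((n_samples, resp_len))
for on in onsets:                          # build the shifted design matrix
    A[on:on + resp_len] += np.eye(resp_len)
y += A @ true_resp                         # overlapping responses in y

est_resp, *_ = np.linalg.lstsq(A, y, rcond=None)
err = np.linalg.norm(est_resp - true_resp) / np.linalg.norm(true_resp)
print(f"relative recovery error: {err:.2%}")
# The SOA jitter is what makes A well-conditioned: with a constant SOA
# shorter than the response, overlapping segments could not be separated.
```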
Benedetto, Umberto; Altman, Douglas G; Gerry, Stephen; Gray, Alastair; Lees, Belinda; Flather, Marcus; Taggart, David P
2017-09-01
There is still little evidence to support routine dual antiplatelet therapy (DAPT) with P2Y12 antagonists following coronary artery bypass grafting (CABG). The Arterial Revascularization Trial (ART) was designed to compare 10-year survival after bilateral versus single internal thoracic artery grafting. We aimed to gain insight into the effect of DAPT (with clopidogrel) following CABG on 1-year outcomes by performing a post hoc analysis of the ART. Among patients enrolled in the ART (n = 3102), 609 (21%) and 2308 (79%) were discharged on DAPT or aspirin alone, respectively. The primary endpoint was the incidence of major adverse cerebrovascular and cardiac events (MACCE) at 1 year, including cardiac death, myocardial infarction, cerebrovascular accident and reintervention; the safety endpoint was bleeding requiring hospitalization. Propensity score (PS) matching was used to create comparable groups. Among 609 PS-matched pairs, MACCE occurred in 34 (5.6%) and 34 (5.6%) patients in the DAPT and aspirin-alone groups, respectively, with no significant difference between the 2 groups [hazard ratio (HR) 0.97, 95% confidence interval (CI) 0.59-1.59; P = 0.90]. Only 188 (31%) subjects completed 1 year of DAPT, and in this subgroup the MACCE rate was 5.8% (HR 1.11, 95% CI 0.53-2.30; P = 0.78). In the overall sample, the bleeding rate was higher in the DAPT group (2.3% vs 1.1%; P = 0.02), although this difference was no longer significant after matching (2.3% vs 1.8%; P = 0.54). Based on these findings, when compared with aspirin alone, DAPT with clopidogrel prescribed at discharge was not associated with a significant reduction of adverse cardiac and cerebrovascular events at 1 year following CABG. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Access to MISR Aerosol Data and Imagery for the GoMACCS Field Study
NASA Astrophysics Data System (ADS)
Ritchey, N.; Watkinson, T.; Davis, J.; Walter, J.; Protack, S.; Matthews, J.; Smyth, M.; Rheingans, B.; Gaitley, B.; Ferebee, M.; Haberer, S.
2006-12-01
NASA Langley Atmospheric Science Data Center (ASDC) and NASA Jet Propulsion Laboratory (JPL) Multi-angle Imaging SpectroRadiometer (MISR) teams collaborated to provide special data products and images in an innovative approach for the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) field campaign. GoMACCS was an intensive field study focused on providing a better understanding of the sources and atmospheric processes responsible for the formation and distribution of ozone and aerosols in the atmosphere and the influence that these species have on the radiative forcing of regional and global climate, as well as their impact on human health and regional haze. The study area encompassed Texas and the northwestern Gulf of Mexico. Numerous U.S. Government agencies, universities and commercial entities participated in the field campaign, which occurred August through September 2006. Aerosol and meteorological measurements were provided by a network of instruments on land, buoys and ships, by airborne in situ and remote instruments, and by satellite retrievals. MISR's role in GoMACCS was to provide satellite retrievals of aerosols and cloud properties and imagery as quickly as possible after data acquisition. The diverse group of scientific participants created unique opportunities for ASDC and MISR to develop special data products and images that were easily accessible by all participants. Examples of the data products, images and access methods, as well as the data and imagery flow, will be presented. Additional information about ASDC and MISR is available from the following web sites: http://eosweb.larc.nasa.gov and http://www-misr.jpl.nasa.gov/.
Red light regulation of ethylene biosynthesis and gravitropism in etiolated pea stems
NASA Technical Reports Server (NTRS)
Steed, C. L.; Taylor, L. K.; Harrison, M. A.
2004-01-01
During gravitropism, the accumulation of auxin in the lower side of the stem causes increased growth and the subsequent curvature, while the gaseous hormone ethylene plays a modulating role in regulating the kinetics of growth asymmetries. Light also contributes to the control of gravitropic curvature, potentially through its interaction with ethylene biosynthesis. In this study, red-light pulse treatment of etiolated pea epicotyls was evaluated for its effect on ethylene biosynthesis during gravitropic curvature. Ethylene biosynthesis analysis included measurements of ethylene; the ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC); malonyl-conjugated ACC (MACC); and expression levels of pea ACC oxidase (Ps-ACO1) and ACC synthase (Ps-ACS1, Ps-ACS2) genes by reverse transcriptase-polymerase chain reaction analysis. Red-pulsed seedlings were given a 6 min pulse of 11 micromoles m⁻² s⁻¹ red light 15 h prior to horizontal reorientation, for consistency with the timeline of red-light inhibition of ethylene production. Red-pulse treatment significantly reduced ethylene production and MACC levels in epicotyl tissue. However, there was no effect of red-pulse treatment on ACC level, or on expression of ACS or ACO genes. During gravitropic curvature, ethylene production increased from 60 to 120 min after horizontal placement in both control and red-pulsed epicotyls. In red-pulsed tissues, ACC levels increased by 120 min after horizontal reorientation, accompanied by decreased MACC levels in the lower portion of the epicotyl. Overall, our results demonstrate that ethylene production in etiolated epicotyls increases after the initiation of curvature. This ethylene increase may inhibit cell growth in the lower portion of the epicotyl and contribute to the tip straightening and reduced overall curvature observed after the initial 60 min of curvature in etiolated pea epicotyls.
Kitchener, Henry C; Gittins, Matthew; Desai, Mina; Smith, John H F; Cook, Gary; Roberts, Chris; Turnbull, Lesley
2015-03-01
Liquid-based cytology (LBC) for cervical screening would benefit from laboratory practice guidelines that define specimen adequacy for reporting of slides. The evidence base required to define cell adequacy should incorporate both ThinPrep™ (TP; Hologic, Inc., Bedford, MA, USA) and SurePath™ (SP; BD Diagnostics, Burlington, NC, USA), the two LBC systems used in the UK cervical screening programmes. The objectives of this study were to determine (1) current practice for reporting LBC in England, Wales and Scotland, (2) a reproducible method for cell counting, (3) the cellularity of slides classified as inadequate, negative or abnormal and (4) the impact of varying cellularity on the likelihood of detecting cytological abnormalities. The study involved four separate arms to pursue each of the four objectives. (1) A questionnaire survey of laboratories was conducted. (2) A standard counting protocol was developed and used by three experienced cytopathologists to determine a reliable and reproducible cell counting method. (3) Slide sets which included a range of cytological abnormalities were each sent to three laboratories for cell counting to study the correlation between cell counts and reported cytological outcomes. (4) Dilution of LBC samples by fluid only (unmixed) or by dilution with a sample containing normal cells (mixed) was performed to study the impact on reporting of reducing either the total cell count or the relative proportion of abnormal to normal cells. The study was conducted within the cervical screening programmes in England, Wales and Scotland, using routinely obtained cervical screening samples, and in 56 participating NHS cervical cytology laboratories. The study involved only routinely obtained cervical screening samples. There was no clinical intervention. The main outcome measures were (1) reliability of counting method, (2) correlation of reported cytology grades with cellularity and (3) levels of detection of abnormal cells in progressively diluted cervical samples. Laboratory practice varied in terms of threshold of cellular adequacy and of morphological markers of adequacy. While SP laboratories generally used a minimum acceptable cell count (MACC) of 15,000, the MACC employed by TP laboratories varied between 5000 and 15,000. The cell counting study showed that a standard protocol achieved moderate to strong inter-rater reproducibility. Analysis of slide reporting from laboratories revealed that a large proportion of the samples reported as inadequate had cell counts above a threshold of 15,000 for SP, and 5000 and 10,000 for TP. Inter-rater unanimity was greater among more cellular preparations. Dilution studies demonstrated greater detection of abnormalities in slides with counts above the MACC and among slides with more than 25 dyskaryotic cells. Variation in laboratory practice demonstrates a requirement for evidence-based standards for designating a MACC. This study has indicated that a MACC of 15,000 and 5000 for SP and TP, respectively, achieves a balance in terms of maintaining sensitivity and low inadequacy rates. The findings of this study should inform the development of laboratory practice guidelines. The National Institute for Health Research Health Technology Assessment programme.
See, Kimberly A; Liu, Yao-Min; Ha, Yeyoung; Barile, Christopher J; Gewirth, Andrew A
2017-10-18
Magnesium batteries offer an opportunity to use naturally abundant Mg and achieve large volumetric capacities reaching over four times that of conventional Li-based intercalation anodes. High volumetric capacity is enabled by the use of a Mg metal anode in which charge is stored via electrodeposition and stripping processes; however, electrolytes that support efficient Mg electrodeposition and stripping are few and are often prepared from highly reactive compounds. One interesting electrolyte solution that supports Mg deposition and stripping without the use of highly reactive reagents is the magnesium aluminum chloride complex (MACC) electrolyte. The MACC exhibits high Coulombic efficiencies and low deposition overpotentials following an electrolytic conditioning protocol that stabilizes species necessary for such behavior. Here, we discuss the effect of the MgCl₂ and AlCl₃ concentrations on the deposition overpotential, current density, and the conditioning process. Higher concentrations of MACC exhibit enhanced Mg electrodeposition current density and much faster conditioning. An increase in the salt concentrations causes a shift in the complex equilibria involving both cations. The conditioning process is strongly dependent on the concentration, suggesting that the electrolyte is activated through a change in speciation of electrolyte complexes and is not simply due to the annihilation of electrolyte impurities. Additionally, the presence of [Mg₂(μ-Cl)₃·6THF]⁺ in the electrolyte solution is again confirmed through careful analysis of experimental Raman spectra coupled with simulation and direct observation of the complex in sonic spray ionization mass spectrometry. Importantly, we suggest that the ∼210 cm⁻¹ mode commonly observed in the Raman spectra of many Mg electrolytes is indicative of the C₃ᵥ-symmetric [Mg₂(μ-Cl)₃·6THF]⁺. The 210 cm⁻¹ mode is present in many electrolytes containing MgCl₂, so its assignment is of broad interest to the Mg electrolyte community.
Papachristidis, Alexandros; Demarco, Daniela Cassar; Roper, Damian; Tsironis, Ioannis; Papitsas, Michael; Byrne, Jonathan; Alfakih, Khaled; Monaghan, Mark J
2017-01-01
In this study, we assess the clinical effectiveness and cost-effectiveness of stress echocardiography (SE), as well as the place of SE in patients with a high pretest probability (PTP) of coronary artery disease (CAD). We investigated 257 patients with no history of CAD who underwent SE and had a PTP risk score >61% (high PTP). According to the National Institute for Health and Care Excellence guidance (NICE CG95, 2010), these patients should be investigated directly with an invasive coronary angiogram (ICA). We investigated these patients with SE initially and then with ICA when appropriate. Follow-up data with regard to Major Adverse Cardiac and Cerebrovascular Events (MACCE, defined as cardiovascular mortality, cerebrovascular accident (CVA), myocardial infarction (MI) and late revascularisation for acute coronary syndrome/unstable angina) were recorded for a period of 12 months following the SE. The tariffs for SE and ICA are £300 and £1400, respectively. 106 patients had a positive SE (41.2%), and 61 of them (57.5%) had further investigation with ICA. 15 (24.6%) of these patients were revascularised. The average cost of investigations was £654.09 per patient. If the NICE guidance had been followed, the cost would have been significantly higher at £1400 (p<0.001). Overall, 5 MACCE (2.0%) were recorded: 4 (3.8%) in the group with positive SE (2 CVAs and 2 MIs) and 1 (0.7%) in the group with negative SE (1 CVA). There was no MI and no need for revascularisation in the negative SE group. Our approach of investigating patients who present with de novo chest pain and high PTP with SE initially, and subsequently with ICA when appropriate, reduces the cost significantly (a saving of £745.91 per patient) with a very low rate of MACCE. However, this study is underpowered to assess the safety of SE.
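The headline saving can be checked with a short back-of-envelope calculation. The sketch below uses only figures quoted in the abstract (tariffs of £300 and £1400, and the £654.09 average cost under the SE-first strategy); the patient-level cost breakdown is not reported, so nothing beyond the per-patient saving is reconstructed.

```python
# Illustrative reconstruction of the cost comparison in the abstract.
# Tariffs (SE £300, ICA £1400) and the £654.09 average come from the text.
tariff_se, tariff_ica = 300.0, 1400.0

avg_cost_se_first = 654.09      # reported average under the SE-first strategy
cost_nice_pathway = tariff_ica  # direct ICA for every high-PTP patient

saving_per_patient = cost_nice_pathway - avg_cost_se_first
print(f"saving per patient: £{saving_per_patient:.2f}")  # £745.91
```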
Sharma, Sharan P; Dahal, Khagendra; Khatra, Jaspreet; Rosenfeld, Alan; Lee, Juyong
2017-06-01
It is not clear whether percutaneous coronary intervention (PCI) is as effective and safe as coronary artery bypass grafting (CABG) for left main coronary artery disease. We aimed to perform a systematic review and meta-analysis of all randomized controlled trials (RCTs) that compared PCI and CABG in left main coronary disease. We searched PubMed, EMBASE, Cochrane, Scopus and relevant references for RCTs (inception through November 20, 2016, without language restrictions) and performed a meta-analysis using a random-effects model. All-cause mortality, myocardial infarction, revascularization rate, stroke, and major adverse cardiac and cerebrovascular events (MACCE) were the measured outcomes. Six RCTs with a total population of 4700 were analyzed. There was no difference in all-cause mortality at 30-day, one-year, and five-year (1.8% vs 1.1%; OR 0.60; 95% CI: 0.26-1.39; P=.23; I²=9%) follow-up between PCI and CABG. The CABG group had fewer myocardial infarctions (MI) at five-year follow-up than the PCI group (5% vs 2.5%; OR 2.04; CI: 1.30-3.19; P=.002; I²=1%). Revascularization rates favored CABG at one-year (8.6% vs 4.5%; OR 2; CI: 1.46-2.73; P<.0001; I²=45%) and five-year (15.9% vs 9.9%; OR 1.73; CI: 1.36-2.20; P<.0001; I²=0%) follow-up. Although the stroke rate was lower in the PCI group at 1 year, there was no difference at longer follow-up. MACCE at 5 years favored CABG (24% vs 18%; OR 1.45; CI: 1.19-1.76; P=.0001; I²=0%). On subgroup analysis, MACCE did not differ between the two groups in the low-to-intermediate SYNTAX stratum, while they were higher for PCI in the high SYNTAX stratum. Percutaneous coronary intervention could be as safe and effective as CABG in a select group of left main coronary artery disease patients. © 2017 John Wiley & Sons Ltd.
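For readers unfamiliar with the pooling step, the following is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of odds ratios, the generic method implied by "random-effects model" above. The three-study input is hypothetical, not the trial data from this review, and the authors' actual software is not reproduced.

```python
import numpy as np

def dersimonian_laird_or(or_i, lo_i, hi_i):
    """Pool odds ratios with a DerSimonian-Laird random-effects model.

    or_i, lo_i, hi_i: per-study OR and 95% CI bounds. Standard errors are
    recovered from the CI width on the log scale (assumes symmetric Wald
    intervals, which published CIs only approximate).
    """
    y = np.log(or_i)                           # log odds ratios
    se = (np.log(hi_i) - np.log(lo_i)) / (2 * 1.96)
    w = 1.0 / se**2                            # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Hypothetical three-study example (not the trial data from the review):
print(dersimonian_laird_or(np.array([1.8, 2.1, 2.3]),
                           np.array([1.1, 1.2, 1.3]),
                           np.array([2.9, 3.6, 4.1])))
```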
The Air Quality Model Evaluation International Initiative (AQMEII) has now reached its second phase which is dedicated to the evaluation of online coupled chemistry-meteorology models. Sixteen modeling groups from Europe and five from North America have run regional air quality m...
Mid-latitude storm track variability and its influence on atmospheric composition
NASA Astrophysics Data System (ADS)
Knowland, K. E.; Doherty, R. M.; Hodges, K.
2013-12-01
Using the storm tracking algorithm TRACK (Hodges, 1994, 1995, 1999), we have studied the behaviour of storm tracks in the North Atlantic basin, using 850-hPa relative vorticity from the ERA-Interim Re-analysis (Dee et al., 2011). We have correlated surface ozone measurements at rural coastal sites in Europe with the storm track data to explore the role mid-latitude cyclones and their transport of pollutants play in determining surface air quality in Western Europe. To further investigate this relationship, we have used the Monitoring Atmospheric Composition and Climate (MACC) Re-analysis dataset (Inness et al., 2013) in TRACK. The MACC Re-analysis is a 10-year dataset which couples a chemistry transport model (MOZART-3; Stein 2009, 2012) to an extended version of the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecast System (IFS). Storm tracks in the MACC Re-analysis compare well to the storm tracks derived from the ERA-Interim Re-analysis for the same 10-year period, as both are based on the ECMWF IFS. We also compare surface ozone values from MACC to the surface ozone measurements previously studied. Using TRACK, we follow ozone (O3) and carbon monoxide (CO) through the life cycle of storms from North America to Western Europe. Along the storm tracks, we examine the distribution of CO and O3 within 6 degrees of the center of each storm and vertically at different pressure levels in the troposphere. We hope to better understand the mechanisms by which pollution is vented from the boundary layer to the free troposphere, as well as the transport of pollutants to rural areas. Our hope is to give policy makers more detailed information on how climate variability associated with storm tracks between 1979 and 2013 may affect air quality in the Northeast USA and Western Europe.
Monitoring Air Quality over China: Evaluation of the modeling system of the PANDA project
NASA Astrophysics Data System (ADS)
Bouarar, Idir; Katinka Petersen, Anna; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Xuemei; Fan, Qi; Wang, Lili
2015-04-01
Air pollution has become a pressing problem in Asia, and specifically in China, due to the rapid increase in anthropogenic emissions related to the growth of China's economic activity and increasing demand for energy in the past decade. Observed levels of particulate matter and ozone regularly exceed World Health Organization (WHO) air quality guidelines in many parts of the country, leading to increased risk of respiratory illnesses and other health problems. The EU-funded project PANDA aims to establish a team of European and Chinese scientists to monitor air pollution over China and elaborate air quality indicators in support of European and Chinese policies. PANDA combines state-of-the-art air pollution modeling with space and surface observations of chemical species to improve methods for monitoring air quality. The modeling system of the PANDA project follows a downscaling approach: global models such as MOZART and the MACC system provide initial and boundary conditions to regional WRF-Chem and EMEP simulations over East Asia. WRF-Chem simulations at higher resolution (e.g. 20 km) are then performed over a smaller domain covering East China, and initial and boundary conditions from this run are used to perform simulations at a finer resolution (e.g. 5 km) over specific megacities like Shanghai. Here we present results of model simulations for January and July 2010 performed during the first year of the project. We show an intercomparison of the global (MACC, EMEP) and regional (WRF-Chem) simulations and a comprehensive evaluation with satellite measurements (NO2, CO) and in-situ data (O3, CO, NOx, PM10 and PM2.5) at several surface stations. Using the WRF-Chem model, we demonstrate that model performance is influenced not only by the resolution (e.g. 60 km, 20 km) but also by the emission inventories used (MACCity, HTAPv2), their resolution and diurnal variation, and the choice of initial and boundary conditions (e.g. MOZART, MACC analysis).
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
2016-09-09
law enforcement detachment (USCG); LEO = law enforcement operations; LOC = line of communications; MACCS = Marine air command and control system; MAS ... enemy command and control (C2), intelligence, fires, reinforcing units, lines of communications (LOCs), logistics, and other operational- and tactical-level targets ... enemy naval, engineering, and personnel resources to the tasks of repairing and recovering damaged equipment, facilities, and LOCs. It can draw the ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
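A toy illustration of this kind of integrated uncertainty analysis: sample the uncertain inputs, push them through a consequence model, and rank inputs by correlation with the output. The input distributions and the one-line consequence model below are invented stand-ins, not the SOARCA/MELCOR/MACCS2 models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical stand-ins for two of the uncertain inputs named in the
# abstract (the actual SOARCA distributions are not reproduced here).
srv_failure_rate = rng.lognormal(mean=-3.0, sigma=0.5, size=n)
dry_dep_velocity = rng.uniform(0.5e-3, 5e-3, size=n)   # m/s

# Toy consequence model standing in for the MELCOR/MACCS2 chain.
consequence = srv_failure_rate * 1e4 + 2e3 * np.log10(dry_dep_velocity)

# Rank (Spearman) correlation as a simple importance measure.
for name, x in [("SRV failure rate", srv_failure_rate),
                ("dry deposition velocity", dry_dep_velocity)]:
    rho, _ = stats.spearmanr(x, consequence)
    print(f"{name}: rho = {rho:+.2f}")
```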
NASA Astrophysics Data System (ADS)
Gaudel, A.; Clark, H.; Thouret, V.; Eskes, H.; Huijnen, V.; Nedelec, P.
2013-12-01
Tropospheric ozone is one of the most important trace gases in the atmosphere. It plays a major role in the chemistry of the troposphere by exerting a strong influence on the concentrations of oxidants such as the hydroxyl radical (OH), and it is the third most important greenhouse gas after carbon dioxide and methane. Its radiative impact is of particular importance in the Upper Troposphere / Lower Stratosphere (UTLS), the most critical region regarding climate change. Carbon monoxide (CO) is one of the major ozone precursors (originating from all types of combustion) in the troposphere. In the UTLS, it also has implications for stratospheric chemistry and indirect radiative forcing effects (as a chemical precursor of CO2 and O3). Assessing the global distribution (and possibly trends) of O3 and CO in this region of the atmosphere, combining high-resolution in situ data and the most appropriate global 3D model to further quantify the different sources and their origins, is therefore of particular interest. This is one of the objectives of the MOZAIC-IAGOS (http://www.iagos.fr) and MACC-II (http://www.gmes-atmosphere.eu) European programs. The aircraft of the MOZAIC program have collected simultaneous O3 and CO data regularly all over the world since the end of 2001. Most of the data are recorded in northern mid-latitudes, in the UTLS region (as commercial aircraft cruise altitude is between 9 and 12 km). MACC-II aims at providing information services covering air quality, climate forcing, stratospheric ozone, UV radiation and solar-energy resources, using near-real-time analysis and forecasting products, and reanalysis. The validation reports of the MACC models are regularly published (http://www.gmes-atmosphere.eu/services/gac/nrt/ and http://www.gmes-atmosphere.eu/services/gac/reanalysis/). We will present and discuss the performance of the MACC reanalysis, including the ECMWF Integrated Forecasting System (IFS) coupled to the CTM MOZART with 4D-Var data assimilation, in reproducing ozone and CO in the UTLS, as evaluated by the observations of MOZAIC between 2003 and 2008. In the UT, the model tends to overestimate O3 by about 30-40 % in the mid-latitudes and polar regions. This applies broadly to all seasons but is more marked in DJF and MAM. In tropical regions, the model underestimates UT ozone by about 20 % in all seasons, but this is stronger in JJA. Upper-tropospheric CO is globally underestimated by the model in all seasons, by 10-20 %. In the Southern Hemisphere, this is particularly the case in SON in the regions of wildfires in South Africa. In the Northern Hemisphere, the zonal gradient of CO between the US, Europe and Asia is not well captured by the model, especially in MAM.
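A sketch of the bias metric behind statements such as "overestimates O3 by about 30-40 %": a mean relative bias over co-located model and observed values. The sample numbers below are hypothetical, not MOZAIC data.

```python
import numpy as np

def mean_relative_bias(model, obs):
    """Mean relative bias (%) of model vs. observations, illustrating the
    UT O3/CO comparisons quoted in the abstract (illustrative only)."""
    model, obs = np.asarray(model), np.asarray(obs)
    return 100.0 * np.mean((model - obs) / obs)

# Hypothetical UT ozone sample (ppbv): model high by roughly a third.
obs   = np.array([55.0, 60.0, 48.0, 70.0])
model = np.array([74.0, 80.0, 62.0, 95.0])
print(f"{mean_relative_bias(model, obs):.0f}%")   # ~33%
```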
NASA Astrophysics Data System (ADS)
Verstraeten, W. W.; Boersma, K. F.; Douros, J.; Williams, J. E.; Eskes, H.; Delcloo, A. W.
2017-12-01
High nitrogen oxides (NOX = NO + NO2) concentrations near the surface adversely affect humans and ecosystems and play a key role in tropospheric chemistry. NO2 is an important precursor of tropospheric ozone (O3), which in turn affects the production of the hydroxyl radical controlling the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. Combustion from industrial, traffic and household activities in large and densely populated urban areas results in high NOX emissions. Accurate mapping of these emissions is essential but hard to do, since reported emission factors may differ from real-time emissions by an order of magnitude. Modelled NO2 levels and lifetimes also have large associated uncertainties, and overestimation of the chemical lifetime may mask missing NOX chemistry in current chemistry transport models (CTMs). Simultaneous estimation of both the NO2 lifetime and the concentrations, by applying the Exponentially Modified Gaussian (EMG) method to tropospheric NO2 column line densities, should improve surface NOX emission estimates. Here we evaluate whether the EMG methodology applied to the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM can reproduce the NOX emissions used as model input. First we process the modelled tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (wind speeds averaged vertically between the surface and 500 m, from ECMWF, > 2 m s⁻¹), as well as the accompanying OMI (Ozone Monitoring Instrument) data, providing us with real-time observation-based estimates of midday NO2 columns. Then we compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the CTM as input to simulate the NO2 columns. For cities where NOX emissions can be assumed to originate from one large source, good agreement is found between the top-down NOX emissions derived from the CTM and from OMI and the MACC-III inventory. For cities where multiple sources of NOX are observed (e.g. Brussels, London), an adapted methodology is required. For some cities, such as St. Petersburg and Moscow, the top-down NOX estimates from 2013 OMI data are biased low compared with the MACC-III inventory, which uses a 2011 NOX emissions update.
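For illustration, a minimal EMG fit of the kind described above: fit an exponentially modified Gaussian to along-wind NO2 line densities, then convert the fitted e-folding distance to an effective lifetime using the mean wind speed (cf. Beirle et al.-style analyses). The synthetic data and parameter values are invented; this is not the LOTOS-EUROS/OMI processing code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, a, x0, sigma, mu, b):
    """Exponentially modified Gaussian for NO2 line densities along the
    mean wind; x0 is the e-folding distance, whose ratio to wind speed
    gives the effective NO2 lifetime."""
    arg = (sigma / x0 - (x - mu) / sigma) / np.sqrt(2.0)
    return (a / (2.0 * x0)) * np.exp(sigma**2 / (2.0 * x0**2)
                                     - (x - mu) / x0) * erfc(arg) + b

# Synthetic line density (arbitrary units) with known parameters, then refit:
x = np.linspace(-50, 250, 120)                 # km relative to the city centre
true = emg(x, a=5e3, x0=50.0, sigma=20.0, mu=0.0, b=100.0)
rng = np.random.default_rng(1)
y = true + rng.normal(0.0, 5.0, x.size)

p0 = [4e3, 40.0, 15.0, 5.0, 50.0]              # rough initial guesses
popt, _ = curve_fit(emg, x, y, p0=p0)
wind = 5.0                                     # m/s, above the 2 m/s threshold
tau_h = popt[1] * 1e3 / wind / 3600.0          # lifetime in hours
print(f"fitted e-folding distance {popt[1]:.1f} km -> lifetime {tau_h:.1f} h")
```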
This study presents a comparative evaluation of the impact of WRF-NMM and WRF-ARW meteorology on CMAQ simulations of PM2.5, its composition and related precursors over the eastern United States with the intensive observations obtained by aircraft (NOAA WP-3), ship and ...
NASA Astrophysics Data System (ADS)
Posch, J. L.; Witte, A. J.; Engebretson, M. J.; Murr, D.; Lessard, M.; Raita, T.; Singer, H. J.
2010-12-01
Traveling convection vortices (TCVs), which appear in ground magnetometer records at near-cusp latitudes as solitary ~5 mHz pulses, are now known to originate in instabilities in the ion foreshock just upstream of Earth's bow shock. They can also stimulate compressions or relaxations of the dayside magnetosphere (evident in geosynchronous satellite data). These transient compressions can in turn sharply increase the growth rate of electromagnetic ion cyclotron (EMIC) waves, which also appear in ground records at near-cusp latitudes as bursts of Pc 1-2 pulsations. In this study we have identified simultaneous TCV - Pc 1-2 burst events occurring from 2008 through the first 7 months of 2010 in Eastern Arctic Canada and Svalbard, using a combination of fluxgate magnetometers (MACCS and IMAGE) and search coil magnetometers in each region. Magnetometer observations at GOES 10 and 12, at longitudes near the MACCS sites, are also used to characterize the strength of the magnetic perturbations. The amplitudes of TCV and Pc 1-2 wave events are not directly proportional in either region, consistent with the highly variable densities and pitch angle distributions of plasma at ring current / plasma sheet energies in the outer dayside magnetosphere.
NASA Astrophysics Data System (ADS)
Siddans, Richard; Knappett, Diane; Kerridge, Brian; Waterfall, Alison; Hurley, Jane; Latter, Barry; Boesch, Hartmut; Parker, Robert
2017-11-01
This paper describes the global height-resolved methane (CH4) retrieval scheme for the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp, developed at the Rutherford Appleton Laboratory (RAL). The scheme precisely fits measured spectra in the 7.9 micron region to allow information to be retrieved on two independent layers centred in the upper and lower troposphere. It also uses nitrous oxide (N2O) spectral features in the same spectral interval to directly retrieve effective cloud parameters, to mitigate errors in retrieved methane due to residual cloud and other geophysical variables. The scheme has been applied to analyse IASI measurements between 2007 and 2015. Results are compared to model fields from the MACC greenhouse gas inversion and independent measurements from satellite (GOSAT), airborne (HIPPO) and ground (TCCON) sensors. The estimated error on the methane mixing ratio in the lower- and upper-tropospheric layers ranges from 20 to 100 ppbv and from 30 to 40 ppbv, respectively, and the error on the derived column average ranges from 20 to 40 ppbv. Vertical sensitivity extends through the lower troposphere, though it decreases near the surface. Systematic differences with the other datasets are typically < 10 ppbv regionally and < 5 ppbv globally. In the Southern Hemisphere, a bias of around 20 ppbv is found with respect to MACC, which is not explained by vertical sensitivity or found in comparisons of IASI to TCCON. Comparisons to HIPPO and MACC support the assertion that two layers can be independently retrieved and provide confirmation that the estimated random errors on the column- and layer-averaged amounts are realistic. The data have been made publicly available via the Centre for Environmental Data Analysis (CEDA) data archive (Siddans, 2016).
Systematic review of preoperative physical activity and its impact on postcardiac surgical outcomes.
Kehler, D Scott; Stammers, Andrew N; Tangri, Navdeep; Hiebert, Brett; Fransoo, Randy; Schultz, Annette S H; Macdonald, Kerry; Giacomontonio, Nicholas; Hassan, Ansar; Légaré, Jean-Francois; Arora, Rakesh C; Duhamel, Todd A
2017-08-11
The objective of this systematic review was to study the impact of preoperative physical activity levels on adult cardiac surgical patients' postoperative: (1) major adverse cardiac and cerebrovascular events (MACCEs), (2) adverse events within 30 days, (3) hospital length of stay (HLOS), (4) intensive care unit length of stay (ICU LOS), (5) activities of daily living (ADLs), (6) quality of life, (7) cardiac rehabilitation attendance and (8) physical activity behaviour. A systematic search of MEDLINE, Embase, AgeLine and Cochrane library for cohort studies was conducted. Eleven studies (n=5733 patients) met the inclusion criteria. Only self-reported physical activity tools were used. Few studies used multivariate analyses to compare active versus inactive patients prior to surgery. When comparing patients who were active versus inactive preoperatively, there were mixed findings for MACCE, 30 day adverse events, HLOS and ICU LOS. Of the studies that adjusted for confounding variables, five studies found a protective, independent association between physical activity and MACCE (n=1), 30-day postoperative events (n=2), HLOS (n=1) and ICU LOS (n=1), but two studies found no protective association for 30-day postoperative events (n=1) and postoperative ADLs (n=1). No studies investigated if activity status before surgery impacted quality of life or cardiac rehabilitation attendance postoperatively. Three studies found that active patients prior to surgery were more likely to be inactive postoperatively. Due to the mixed findings, the literature does not presently support that self-reported preoperative physical activity behaviour is associated with postoperative cardiac surgical outcomes. Future studies should objectively measure physical activity, clearly define outcomes and adjust for clinically relevant variables. Trial registration number NCT02219815. PROSPERO number CRD42015023606. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
A users manual for the RAMP2 computer code is provided. The RAMP2 code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. The general structure and operation of RAMP2 are discussed. A user input/output guide for the modified TRAN72 computer code and the RAMP2F code is given. The application and use of the BLIMPJ module are considered. Sample problems involving the space shuttle main engine and motor are included.
Guo, Yutao; Apostalakis, Stavros; Blann, Andrew D; Lip, Gregory Y H
2014-01-01
There is growing evidence that chemokines are potentially important mediators of the pathogenesis of atherosclerotic disease. Major atherothrombotic complications, such as stroke and myocardial infarction, are common among atrial fibrillation (AF) patients. This increase in risk of adverse events may be predicted by a score based on the presence of certain clinical features: chronic heart failure, hypertension, age 75 years or greater, diabetes and stroke (the CHADS2 score). Our objective was to assess the prognostic value of the plasma chemokines CCL2, CXCL4 and CX3CL1, and their relationship with the CHADS2 score, in AF patients. Plasma CCL2, CXCL4 and CX3CL1 were measured in 441 patients (59% male, mean age 75 years, 12% paroxysmal, 99% on warfarin) with AF. Baseline clinical and demographic factors were used to define each subject's CHADS2 score. Patients were followed up for a mean of 2.1 years, and major adverse cardiovascular and cerebrovascular events (MACCE) were sought, defined as the combination of cardiovascular death, acute coronary events, stroke and systemic embolism. Fifty-five of the AF patients suffered a MACCE (6% per year). Those in the lowest CX3CL1 quartile (≤ 0.24 ng/ml) had the fewest MACCE (p = 0.02). In the Cox regression analysis, CX3CL1 levels >0.24 ng/ml (hazard ratio 2.8, 95% CI 1.02-8.2, p = 0.045) and age (p = 0.042) were independently linked with adverse outcomes. CX3CL1 levels rose directly with the CHADS2 risk score (p = 0.009). The addition of CX3CL1 did not significantly increase the discriminatory ability of the CHADS2 clinical factor-based risk stratification (c-index 0.60 for CHADS2 alone versus 0.67 for CHADS2 plus CX3CL1 >0.24 ng/ml, p = 0.1). Aspirin use was associated with lower levels of CX3CL1 (p = 0.0002) and diabetes with higher levels (p = 0.031). There was no association between CXCL4 and CCL2 plasma levels and outcomes. There is an independent association between low plasma CX3CL1 levels and a low risk of major cardiovascular events in AF patients, as well as a linear association between CX3CL1 plasma levels and CHADS2-defined cardiovascular risk. The potential for CX3CL1 in refining risk stratification of AF patients merits consideration. © 2014 S. Karger AG, Basel.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The station is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
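QA codes of this kind typically combine range checks and rate-of-change checks on the archived time series. A minimal sketch follows; the limits and function below are invented for demonstration and are not the HMS criteria.

```python
import numpy as np

# Illustrative QA checks of the kind described for the HMS data bases;
# the limits here are invented for demonstration, not the HMS values.
RANGE_LIMITS = {"temperature_C": (-35.0, 50.0), "wind_speed_mps": (0.0, 60.0)}

def qa_flags(name, series, max_step=None):
    """Flag out-of-range values and implausible jumps between samples."""
    x = np.asarray(series, dtype=float)
    lo, hi = RANGE_LIMITS[name]
    bad = (x < lo) | (x > hi)
    if max_step is not None:
        jump = np.abs(np.diff(x, prepend=x[0])) > max_step
        bad |= jump
    return np.flatnonzero(bad)

temps = [12.1, 12.3, 45.2, 12.6, -80.0]   # hourly samples, two suspect spikes
print(qa_flags("temperature_C", temps, max_step=10.0))  # indices 2, 3, 4
```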
Stochastic Modeling of Radioactive Material Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason; Pope, Chad
2015-09-01
Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines, or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user interface. SODA can be installed on both Windows and Mac computers and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The work was funded through a grant from the DOE Nuclear Safety Research and Development Program.
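The core loop SODA implements is easy to sketch: draw each input from its chosen distribution (or hold it at a point value) and accumulate the resulting dose distribution. The sketch below uses a generic inhalation-pathway chain with invented distributions; SODA's actual equations and MATLAB implementation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions; a SODA-style tool lets the user pick
# these per variable, and fixed point values are allowed where data are
# lacking (breathing rate is held fixed here as an example).
source_term_Bq = rng.lognormal(mean=np.log(1e9), sigma=0.8, size=n)
chi_over_q     = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n)  # s/m^3
breathing_rate = 3.3e-4                        # m^3/s, single point value
dcf_Sv_per_Bq  = rng.uniform(1e-9, 5e-9, size=n)

# Generic inhalation-pathway dose chain, evaluated once per sample.
dose_Sv = source_term_Bq * chi_over_q * breathing_rate * dcf_Sv_per_Bq

print(f"mean  {dose_Sv.mean():.2e} Sv")
print(f"95th  {np.percentile(dose_Sv, 95):.2e} Sv")
```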
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
Walsh, Simon J; Hanratty, Colm G; Watkins, Stuart; Oldroyd, Keith G; Mulvihill, Niall T; Hensey, Mark; Chase, Alex; Smith, Dave; Cruden, Nick; Spratt, James C; Mylotte, Darren; Johnson, Tom; Hill, Jonathan; Hussein, Hafiz M; Bogaerts, Kris; Morice, Marie-Claude; Foley, David P
2018-05-24
The aim of this study was to provide contemporary outcome data for patients with de novo coronary disease and Medina 1,1,1 lesions who were treated with a culotte two-stent technique, and to compare the performance of two modern-generation drug-eluting stent (DES) platforms, the 3-connector XIENCE and the 2-connector SYNERGY. Patients with Medina 1,1,1 bifurcation lesions who had disease that was amenable to culotte stenting were randomised 1:1 to treatment with XIENCE or SYNERGY DES. A total of 170 patients were included. Technical success and final kissing balloon inflation occurred in >96% of cases. Major adverse cardiovascular or cerebrovascular events (MACCE: a composite of death, myocardial infarction [MI], cerebrovascular accident [CVA] and target vessel revascularisation [TVR]) occurred in 5.9% of patients by nine months. The primary endpoint was a composite of death, MI, CVA, target vessel failure (TVF), stent thrombosis and binary angiographic restenosis. At nine months, the primary endpoint occurred in 19% of XIENCE patients and 16% of SYNERGY patients (p=0.003 for non-inferiority for platform performance). MACCE rates for culotte stenting using contemporary everolimus-eluting DES are low at nine months. The XIENCE and SYNERGY stents demonstrated comparable performance for the primary endpoint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare Western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); these computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with imitation of control and protection system (CPS) controls movement in a core.
Green Infrastructure Barriers and Opportunities in the Macatawa Watershed, Michigan
The project supports MACC outreach and implementation efforts of the watershed management plan by facilitating communication with local municipal staff and educating local decision makers about green infrastructure.
Methodology for Air Quality Forecast Downscaling from Regional- to Street-Scale
NASA Astrophysics Data System (ADS)
Baklanov, Alexander; Nuterman, Roman; Mahura, Alexander; Amstrup, Bjarne; Hansen Saas, Bent; Havskov Sørensen, Jens; Lorenzen, Thomas; Weismann, Jakob
2010-05-01
The most serious air pollution events occur in cities, where there is a combination of high population density and air pollution, e.g. from vehicles. The pollutants can lead to serious human health problems, including asthma, irritation of the lungs, bronchitis, pneumonia, decreased resistance to respiratory infections, and premature death. In particular, air pollution is associated with increases in cardiovascular disease and lung cancer. In 2000, the WHO estimated that between 2.5% and 11% of total annual deaths are caused by exposure to air pollution. However, European-scale air quality models are not suited for local forecasts, as their grid cell is typically of the order of 5 to 10 km and they generally lack detailed representation of urban effects. Two suites are used in the framework of the EC FP7 project MACC (Monitoring of Atmosphere Composition and Climate) to demonstrate how downscaling from the European MACC ensemble to local-scale air quality forecasts will be carried out: one illustrates capabilities for the city of Copenhagen (Denmark); the second focuses on the city of Bucharest (Romania). This work is devoted to the first suite, addressing the methodological aspects of downscaling from the regional (European/Denmark) to the urban scale (Copenhagen), and from the urban down to the street scale. The first results of downscaling according to the proposed methodology are presented. The potential for downscaling European air quality forecasts by operating urban and street-level forecast models is evaluated. This will bring strong support for the continuous improvement of the regional forecast modelling systems for air quality in Europe, and underline clear perspectives for future regional air quality core and downstream services for end-users. At the end of the MACC project, requirements on how to do downscaling of European air-quality forecasts to the city and street levels with different approaches will be formulated.
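The central mechanical step in such downscaling is regridding a coarse regional field onto a finer urban grid to supply initial and boundary conditions. A minimal sketch with synthetic data and invented grid spacings follows; a real chain also downscales the meteorology and rescales the emissions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse "regional-scale" pollutant field on a ~10 km grid (synthetic values).
lat_c = np.arange(54.0, 57.0, 0.10)
lon_c = np.arange(11.0, 14.0, 0.10)
field_c = np.random.default_rng(3).random((lat_c.size, lon_c.size))

interp = RegularGridInterpolator((lat_c, lon_c), field_c)

# Fine "urban-scale" grid over Copenhagen (~1 km spacing, illustrative).
lat_f = np.arange(55.5, 55.8, 0.01)
lon_f = np.arange(12.4, 12.7, 0.01)
pts = np.array(np.meshgrid(lat_f, lon_f, indexing="ij")).reshape(2, -1).T
field_f = interp(pts).reshape(lat_f.size, lon_f.size)
print(field_f.shape)   # (30, 30) urban boundary-condition field
```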
Samim, Mariam; van der Worp, Bart; Agostoni, Pierfrancesco; Hendrikse, Jeroen; Budde, Ricardo P J; Nijhoff, Freek; Ramjankhan, Faiz; Doevendans, Pieter A; Stella, Pieter R
2017-02-15
This study aims to evaluate the safety and performance of the new embolic deflection device TriGuard™HDH in patients undergoing TAVR. Transcatheter aortic valve replacement (TAVR) is associated with a high incidence of new cerebral ischemic lesions. The use of an embolic protection device may reduce the frequency of TAVR-related embolic events. This prospective, single-arm feasibility pilot study included 14 patients with severe symptomatic aortic stenosis scheduled for TAVR. Cerebral diffusion weighted magnetic resonance imaging (DWI) was planned in all patients one day before and at day 4 (±2) after the procedure. Major adverse cerebral and cardiac events (MACCEs) were recorded for all patients. Primary endpoints of this study were (I) device performance success, defined as coverage of the aortic arch takeoffs throughout the entire TAVR procedure, and (II) MACCE occurrence. Secondary endpoints included the number and the volume of new cerebral ischemic lesions on DWI. Thirteen patients underwent transfemoral TAVR and one patient a transapical procedure. The Edwards SAPIEN valve prosthesis was implanted in 8 (57%) patients and the Medtronic CoreValve prosthesis in the remaining 6 (43%). Predefined performance success of the TriGuard™HDH device was achieved in 9 (64%) patients. The composite endpoint MACCE occurred in none of the patients. Post-procedural DWI was performed in 11 patients. Comparing the DWI of these patients to a historical control group showed no reduction in lesion number (median 5.5 vs. 5.0, P = 0.857); however, there was a significant reduction in mean lesion volume per patient (median 13.8 vs. 25.1, P = 0.049). This study showed the feasibility and safety of using the TriGuard™HDH for cerebral protection during TAVR. This device did not decrease the number of post-procedural new cerebral DWI lesions; however, its use showed decreased lesion volume as compared to unprotected TAVR. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
CFD Modeling of Free-Piston Stirling Engines
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.
2001-01-01
NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) a realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package; this code package includes one- and two-dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes; (2) techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system; (3) the MSFC master data libraries were updated.
Computer Description of the Field Artillery Ammunition Supply Vehicle
1983-04-01
Combinatorial Geometry (COM-GEOM); GIFT Computer Code; Computer Target Description. A Combinatorial Geometry (COM-GEOM) target description is input to the GIFT computer code to generate target vulnerability data. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description ...
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
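The calibration idea can be illustrated with a tiny linear cost model fitted to benchmark runs and then applied to a new problem. The features, data, and model form below are invented; SCOPE's actual resource model is not reproduced.

```python
import numpy as np

# Hedged sketch of calibration-based resource prediction in the spirit of
# SCOPE: fit cost coefficients on benchmark runs, then predict new problems.

# Benchmark runs: [n_elements, n_dof] -> measured CPU seconds (invented data)
X = np.array([[100, 600], [400, 2400], [900, 5400], [1600, 9600]], float)
cpu = np.array([2.1, 8.7, 21.0, 38.5])

# Least-squares fit of a linear cost model: cpu ~ c0 + c1*elems + c2*dof
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, cpu, rcond=None)

# Predict resources for a new, unseen problem size.
new_problem = np.array([1.0, 1200.0, 7200.0])
print(f"predicted CPU: {new_problem @ coef:.1f} s")
```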
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.
User manual for semi-circular compact range reflector code: Version 2
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1987-01-01
A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
Preliminary risks associated with postulated tritium release from production reactor operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Kula, K.R.; Horton, W.H.
1988-01-01
The Probabilistic Risk Assessment (PRA) of Savannah River Plant (SRP) reactor operation is assessing the off-site risk due to tritium releases during postulated full or partial loss of heavy water moderator accidents. Other sources of tritium in the reactor are less likely to contribute to off-site risk in non-fuel-melting accident scenarios. Preliminary determination of the frequency of average partial moderator loss (including incidents with leaks as small as 0.5 kg) yields an estimate of approximately 1 per reactor-year. The full moderator loss frequency is conservatively chosen as 5 × 10⁻³ per reactor-year. Conditional consequences, determined with a version of the MACCS code modified to handle tritium, are found to be insignificant. The 95th percentile individual cancer risk is 4 × 10⁻⁸ per reactor-year within 16 km of the release point. The full moderator loss accident contributes about 75% of the evaluated risks. 13 refs., 4 figs., 5 tabs.
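The quoted figures combine as simple expected-value arithmetic; the sketch below only decomposes the published numbers (the 75% share and the 4 × 10⁻⁸ per reactor-year risk) and is otherwise illustrative.

```python
# Back-of-envelope decomposition of the quoted 95th-percentile individual
# cancer risk; the 75% split and frequencies are from the abstract, the
# per-event conditional risk is derived arithmetic, not a published value.
total_risk = 4e-8                    # per reactor-year, within 16 km
full_loss_share = 0.75

full_loss_risk = full_loss_share * total_risk          # 3e-8
partial_loss_risk = total_risk - full_loss_risk        # 1e-8

freq_full = 5e-3                     # per reactor-year (conservative value)
print(f"conditional risk given full loss: {full_loss_risk / freq_full:.1e}")
```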
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64³ are contained within core of a 2-million-word (64-bit) memory. Computer speed timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
Operations analysis (study 2.1). Program listing for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1974-01-01
A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.
NASA Technical Reports Server (NTRS)
Harper, Warren
1989-01-01
Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The existing codes and certain supplementary software were updated, and the codes were installed on a computer that will be delivered to the customer, to provide a capability for graphic display of the data computed by the codes and to assist the customer in the solution of specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure; Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Variations of trace gases over the Bay of Bengal during the summer monsoon
NASA Astrophysics Data System (ADS)
Girach, I. A.; Ojha, Narendra; Nair, Prabha R.; Tiwari, Yogesh K.; Kumar, K. Ravi
2018-02-01
In situ measurements of near-surface ozone (O3), carbon monoxide (CO), and methane (CH4) were carried out over the Bay of Bengal (BoB) as a part of the Continental Tropical Convergence Zone (CTCZ) campaign during the summer monsoon season of 2009. O3, CO and CH4 mixing ratios varied in the ranges of 8-54 ppbv, 50-200 ppbv and 1.57-2.15 ppmv, respectively, during 16 July-17 August 2009. The spatial distribution of mean tropospheric O3 from satellite retrievals is found to be similar to that in surface O3 observations, with higher levels over coastal and northern BoB as compared to the central BoB. The comparison of in situ measurements with the Monitoring Atmospheric Composition and Climate (MACC) global reanalysis shows that the MACC simulations reproduce the observations with small mean biases of 1.6 ppbv, -2.6 ppbv and 0.07 ppmv for O3, CO and CH4, respectively. The analysis of the diurnal variation of O3, based on observations and simulations from the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) at a stationary point over the BoB, did not show a net photochemical build-up during daytime. Satellite retrievals show limitations in capturing CH4 variations as measured by in situ sample analysis, highlighting the need for more shipborne in situ measurements of trace gases over this region during the monsoon.
2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries
ERIC Educational Resources Information Center
Colby, Jennifer
2015-01-01
This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…
NASA Astrophysics Data System (ADS)
Verstraeten, Willem W.; Folkert Boersma, K.; Douros, John; Williams, Jason E.; Eskes, Henk H.; Delcloo, Andy
2017-04-01
High nitrogen oxides concentrations at the surface (NOX = NO + NO2) adversely affect humans and ecosystems and play a key role in tropospheric chemistry. Surface NOX emissions drive major processes in regional and global chemistry transport models (CTMs). NOX contributes to the formation of acid rain, acts as an aerosol precursor and is an important trace gas for the formation of tropospheric ozone (O3). Via tropospheric O3, NOX indirectly affects the production of the hydroxyl radical, which controls the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. High NOX emissions are mainly observed in polluted regions, produced by anthropogenic combustion from industrial, traffic and household activities typically found in large and densely populated urban areas. Accurate NOX inventories are essential, but state-of-the-art emission databases may vary substantially, and uncertainties are high since reported emission factors may differ by an order of magnitude or more. To date, modelled NO2 concentrations and lifetimes have large associated uncertainties due to the highly non-linear small-scale chemistry that occurs in urban areas, uncertainties in the reaction rate data, missing nitrogen (N) species and volatile organic compound (VOC) emissions, and incomplete knowledge of nitrogen oxides chemistry. Any overestimation of the chemical lifetime may mask missing NOX chemistry in current CTMs. By simultaneously estimating both the NO2 lifetime and concentration, for instance by using the Exponentially Modified Gaussian (EMG) method, a better surface NOX emission flux estimate can be obtained. Here we evaluate whether the EMG methodology can reproduce the emissions input from the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM. We apply the EMG methodology to LOTOS-EUROS-simulated tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (surface wind speeds > 3 m s⁻¹). We then compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the LOTOS-EUROS model as input to simulate the NO2 columns. We also apply the EMG methodology to OMI (Ozone Monitoring Instrument) tropospheric NO2 column data, providing us with real-time observation-based estimates of midday NO2 lifetimes and NOX emissions over 21 European cities in 2013. Results indicate that the top-down derived NOX emissions from LOTOS-EUROS (respectively OMI) are comparable with the MACC-III inventory, with an R² of 0.99 (respectively R² = 0.79). For St. Petersburg and Moscow the top-down NOX estimates from 2013 OMI data are biased low compared with the MACC-III inventory, which uses a 2011 NOX emissions update.
The development of a classification system for maternity models of care.
Donnolley, Natasha; Butler-Henderson, Kerryn; Chapman, Michael; Sullivan, Elizabeth
2016-08-01
A lack of standard terminology or means to identify and define models of maternity care in Australia has prevented accurate evaluations of outcomes for mothers and babies in different models of maternity care. As part of the Commonwealth-funded National Maternity Data Development Project, a classification system was developed utilising a data set specification that defines characteristics of models of maternity care. The Maternity Care Classification System or MaCCS was developed using a participatory action research design that built upon the published and grey literature. The study identified the characteristics that differentiate models of care and classifies models into eleven different Major Model Categories. The MaCCS will enable individual health services, local health districts (networks), jurisdictional and national health authorities to make better informed decisions for planning, policy development and delivery of maternity services in Australia. © The Author(s) 2016.
Microgravity computing codes. User's guide
NASA Astrophysics Data System (ADS)
1982-01-01
Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple 2. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any Apple 2-48K standard system with single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.
MIADS2 ... an alphanumeric map information assembly and display system for a large computer
Elliot L. Amidon
1966-01-01
A major improvement and extension of the Map Information Assembly and Display System (MIADS) developed in 1964 is described. Basic principles remain unchanged, but the computer programs have been expanded and rewritten for a large computer, in Fortran IV and MAP languages. The code system is extended from 99 integers to about 2,200 alphanumeric 2-character codes. Hand-...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology, code structure, and the mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Use of the RenalGuard system to prevent contrast-induced AKI: A meta-analysis.
Mattathil, Stephanie; Ghumman, Saad; Weinerman, Jonathan; Prasad, Anand
2017-10-01
Contrast-induced kidney injury (CI-AKI) following cardiovascular interventions results in increased morbidity and mortality. RenalGuard (RG) is a novel, closed-loop system which balances volume administration with forced diuresis to maintain a high urine output. We performed a meta-analysis of the existing data comparing use of RG to conventional volume expansion. Ten studies were found eligible, of which four were randomized controlled trials. Of an aggregate sample size (N) of 1585 patients, 698 were enrolled in the four RCTs and 887 belonged to the remaining registries included in this meta-analysis. Primary outcomes included CI-AKI incidence and relative risk. Mortality, dialysis, and major adverse cardiac and cerebrovascular events (MACCE) were secondary outcomes. A random effects model was used and data were evaluated for publication bias. RG was associated with a significant risk reduction in CI-AKI compared to control (RR: 0.30, 95% CI: 0.18-0.50, P < 0.01). CI-AKI incidence in the RG group was 7.7% versus 23.6% in the control group (P < 0.01). Use of RG was associated with decreased mortality (RR: 0.43, 95% CI: 0.18-0.99, P = 0.05), dialysis (RR: 0.20, 95% CI: 0.06-0.61, P = 0.01), and MACCE (RR: 0.42, 95% CI: 0.27-0.65, P < 0.01) compared to control. RG significantly reduces rates of CI-AKI compared to standard volume expansion and is also associated with decreased rates of death, dialysis, and MACCE. © 2017, Wiley Periodicals, Inc.
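The random-effects pooling reported in abstracts like this one can be sketched with the DerSimonian-Laird estimator. In the Python sketch below the per-study event counts are hypothetical placeholders, not the trial data; only the estimator itself is standard.

```python
import numpy as np

# hypothetical 2x2 counts per study: events/total in treatment and control arms
ev_t = np.array([8, 10, 5, 12]);   n_t = np.array([100, 146, 87, 155])
ev_c = np.array([25, 28, 18, 30]); n_c = np.array([102, 146, 89, 146])

y = np.log((ev_t / n_t) / (ev_c / n_c))             # per-study log risk ratio
v = 1/ev_t - 1/n_t + 1/ev_c - 1/n_c                 # large-sample variance of log RR

w = 1 / v                                           # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)  # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (v + tau2)                             # random-effects weights
mu = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
lo, hi = np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)
print(f"pooled RR = {np.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.3f}")
```

The between-study variance tau^2 widens the confidence interval relative to a fixed-effect model, which matters when registries and RCTs are pooled together as here.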
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien
2012-12-01
With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
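The probability calibration MACC emphasizes can be illustrated generically. The sketch below, assuming scikit-learn, wraps a random forest in isotonic calibration and compares reliability before and after; the classifier, features, and data are stand-ins, not the paper's actual pipeline (which classifies variable stars from light-curve features).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
cal = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200, random_state=0),
                             method="isotonic", cv=3).fit(X_tr, y_tr)

for name, clf in [("raw", raw), ("calibrated", cal)]:
    # a well-calibrated classifier keeps frac_pos close to mean_pred in every bin
    frac_pos, mean_pred = calibration_curve(y_te, clf.predict_proba(X_te)[:, 1],
                                            n_bins=10)
    print(name, "mean reliability gap:",
          np.round(np.abs(frac_pos - mean_pred).mean(), 3))
```

Isotonic calibration typically needs a few thousand labelled examples to be stable; for smaller training sets, sigmoid (Platt) scaling is the usual fallback.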
User's manual for a two-dimensional, ground-water flow code on the Octopus computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naymik, T.G.
1978-08-30
A ground-water hydrology computer code, programmed by R.L. Taylor (in Proc. American Society of Civil Engineers, Journal of Hydraulics Division, 93(HY2), pp. 25-33 (1967)), has been adapted to the Octopus computer system at Lawrence Livermore Laboratory. Using an example problem, this manual details the input, output, and execution options of the code.
Unveiling the High Energy Obscured Universe: Hunting Collapsed Objects Physics
NASA Technical Reports Server (NTRS)
Ubertini, P.; Bazzano, A.; Cocchi, M.; Natalucci, L.; Bassani, L.; Caroli, E.; Stephen, J. B.; Caraveo, P.; Mereghetti, S.; Villa, G.
2005-01-01
A large part of the energy from space comes from collapsing stars (SNe, hypernovae) and collapsed stars (black holes, neutron stars and white dwarfs). The peak of their energy release is in the hard X-ray and gamma-ray wavelengths, where photons are insensitive to absorption and can travel from the edge of the Universe or the central core of the Galaxy without losing the primordial information of energy, time signature and polarization. The most efficient process to produce energetic photons is gravitational accretion of matter from a "normal" star onto a collapsed companion ($L_G \propto G M_{\rm coll}\,\dot{M}_{\rm acc}/R_{\rm disc} \lesssim \dot{M}_{\rm acc} c^2$), exceeding by far the nuclear reaction capability to generate high energy quanta. Thus our natural laboratories for "in situ" investigations are collapsed objects in which matter and radiation co-exist in extreme conditions of temperature and density due to gravitationally bent geometry and magnetic fields. This is a unique opportunity to study the physics of accretion flows in stellar-mass and super-massive black holes (SMBHs), plasmoids generated in relativistic jets in galactic microQSOs and AGNs, ionised plasma interacting at the touching point of a weakly magnetized NS surface, the GRB/supernova connection, and the mysterious origins of "dark" GRBs and X-ray flashes.
Orbital-Dependent Density Functionals for Chemical Catalysis
2014-10-17
... noncollinear density functional theory to show that the low-spin state of Mn3 in a model of the oxygen-evolving complex of photosystem II avoids ... DK, which denotes the cc-pV5Z-DK basis set for 3d metals and hydrogen and the ma-cc-pV5Z-DK basis set for oxygen) and to nonrelativistic all- ... cc-pV5Z basis set for oxygen). As compared to NCBS-DK results, all ECP calculations perform worse than def2-TZVP all-electron relativistic ...
Nagappa, Mahesh; Ho, George; Patra, Jayadeep; Wong, Jean; Singh, Mandeep; Kaw, Roop; Cheng, Davy; Chung, Frances
2017-12-01
Obstructive sleep apnea (OSA) is a common comorbidity in patients undergoing cardiac surgery and may predispose patients to postoperative complications. The purpose of this meta-analysis is to determine the evidence of postoperative complications associated with OSA patients undergoing cardiac surgery. A literature search of the Cochrane Database of Systematic Reviews, Medline, Medline In-Process, Web of Science, Scopus, EMBASE, Cochrane Central Register of Controlled Trials, and CINAHL until October 2016 was performed. The search was constrained to studies in adult cardiac surgical patients with diagnosed or suspected OSA. All included studies had to report at least 1 postoperative complication. The primary outcome is major adverse cardiac or cerebrovascular events (MACCEs) up to 30 days after surgery, which includes death from all-cause mortality, myocardial infarction, myocardial injury, nonfatal cardiac arrest, revascularization, pulmonary embolism, deep venous thrombosis, newly documented postoperative atrial fibrillation (POAF), stroke, and congestive heart failure. The secondary outcome is newly documented POAF. The other exploratory outcomes include the following: (1) postoperative tracheal intubation and mechanical ventilation; (2) infection and/or sepsis; (3) unplanned intensive care unit (ICU) admission; and (4) duration of stay in hospital and ICU. Meta-analysis and meta-regression were conducted using Cochrane Review Manager 5.3 (Cochrane, London, UK) and OpenBUGS v3.0, respectively. Eleven comparative studies were included (n = 1801 patients; OSA versus non-OSA: 688 vs 1113, respectively). The odds of MACCEs were 33.3% higher in OSA versus non-OSA patients (31% vs 10.6%; odds ratio [OR], 2.4; 95% confidence interval [CI], 1.38-4.2; P = .002). The odds of newly documented POAF (OSA versus non-OSA: 31% vs 21%; OR, 1.94; 95% CI, 1.13-3.33; P = .02) were also higher in OSA compared to non-OSA. Even though rates of postoperative tracheal intubation and mechanical ventilation (OSA versus non-OSA: 13% vs 5.4%; OR, 2.67; 95% CI, 1.03-6.89; P = .04) were significantly higher in OSA patients, the length of ICU stay and hospital stay were not significantly prolonged in patients with OSA compared to non-OSA. The majority of OSA patients were not treated with continuous positive airway pressure therapy. Meta-regression and sensitivity analysis of the subgroups did not impact the OR of postoperative complications for OSA versus non-OSA groups. Our meta-analysis demonstrates that after cardiac surgery, the odds of MACCEs and newly documented POAF were 33.3% and 18.1% higher, respectively, in OSA versus non-OSA patients.
Efficacy of multiple arterial coronary bypass grafting in patients with diabetes mellitus.
Yamaguchi, Atsushi; Kimura, Naoyuki; Itoh, Satoshi; Adachi, Koichi; Yuri, Koichi; Okamura, Homare; Adachi, Hideo
2016-09-01
Use of the left internal mammary artery in patients with diabetes mellitus and multivessel coronary artery disease is known to improve survival after coronary artery bypass grafting (CABG); however, the survival benefit of multiple arterial grafts (MAGs) in diabetic patients is debated. We investigated the efficacy of CABG performed with MAGs in diabetic patients. The overall patient group comprised 2618 consecutive patients who underwent isolated CABG at our hospital between 1990 and 2014. Perioperative characteristics, in-hospital outcomes and long-term outcomes were compared between diabetic (n = 1110) and non-diabetic patients (n = 1508). The long-term outcomes of diabetic and non-diabetic patients were analysed between those who received a single arterial graft (SAG) and those who received MAGs. Both full unmatched patient population and propensity-matched patient population analyses (diabetic cohort = 431 pairs, non-diabetic cohort = 577 pairs) were performed. Preoperative comorbidities were much more common in the diabetic patients than in the non-diabetic patients; however, this was not reflected in the in-hospital outcomes (diabetes versus non-diabetes group, in-hospital mortality: 2.2 vs 1.5%; deep sternal wound infection: 2.2 vs 1.8%, P > 0.05). Although survival and freedom from major adverse cardiac and cerebrovascular events (MACCEs) at 15 years were lower in the diabetes group than in the non-diabetes group (survival: 48.6 vs 55.0%, P = 0.019; MACCE-free survival: 40.8 vs 46.1%, P = 0.02), cardiac death-free survival at 15 years was similar (81.7 vs 83.9%, P = 0.24). Overall, 12-year survival was higher in both diabetic and non-diabetic patients treated with MAGs than in those treated with an SAG (64.9 vs 56.8%, P = 0.006, and 71.9 vs 60.5%, P < 0.001). Propensity-matched patient cohort analysis revealed improved 12-year survival with MAGs versus SAG in both the diabetes group (64.9 vs 58.8%, P = 0.041) and the non-diabetes group (71.4 vs 63.8%, P = 0.014). Similarly, MACCE-free survival was improved in both groups. A long-term survival advantage, with no increase in perioperative morbidity, is conferred by the use of multiple arterial bypass grafts not only in non-diabetic patients but also in diabetic patients. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
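The propensity-matched analyses mentioned above can be sketched generically: estimate a propensity score by logistic regression, then greedily match each treated patient to an unused control within a caliper on the logit scale. The covariates, data, and caliper rule below are illustrative assumptions, not the study's actual matching protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                           # hypothetical covariates
treated = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
logit = np.log(ps / (1 - ps))
caliper = 0.2 * logit.std()                           # a common caliper choice

t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
used, pairs = set(), []
for i in t_idx[np.argsort(-ps[t_idx])]:               # match hardest cases first
    d = np.abs(logit[c_idx] - logit[i])
    for j in np.argsort(d):                           # nearest unused control
        if d[j] > caliper:
            break
        if c_idx[j] not in used:
            used.add(c_idx[j])
            pairs.append((i, c_idx[j]))
            break
print(f"{len(pairs)} matched pairs out of {treated.sum()} treated")
```

Outcome contrasts (e.g. survival) are then computed within the matched pairs, which is what yields the paired 12-year survival comparisons quoted in the abstract.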
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Scott; Bixler, Nathan E.; McFadden, Katherine Letizia
In 1973 the U.S. Environmental Protection Agency (EPA) developed SecPop to calculate population estimates to support a study on air quality. The Nuclear Regulatory Commission (NRC) adopted this program to support siting reviews for nuclear power plant construction and license applications. Currently SecPop is used to prepare site data input files for offsite consequence calculations with the MELCOR Accident Consequence Code System (MACCS). SecPop enables the use of site-specific population, land use, and economic data for a polar grid defined by the user. Updated versions of SecPop have been released to use U.S. decennial census population data. SECPOP90 was released in 1997 to use 1990 population and economic data. SECPOP2000 was released in 2003 to use 2000 population data and 1997 economic data. This report describes the current code version, SecPop version 4.3, which uses 2010 population data and both 2007 and 2012 economic data. It is also compatible with 2000 census and 2002 economic data. At the time of this writing, the current version of SecPop is 4.3.0, and that version is described herein. This report contains guidance for the installation and use of the code as well as a description of the theory, models, and algorithms involved. This report contains appendices which describe the development of the 2010 census file, 2007 county file, and 2012 county file. Finally, an appendix is included that describes the validation assessments performed.
Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne
2003-01-01
A vortex lattice code, CAMRAD II, and a Reynolds-averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal-axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.
Monte Carlo simulation of Ising models by multispin coding on a vector computer
NASA Astrophysics Data System (ADS)
Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus
1984-11-01
Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
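Multispin coding stores one lattice site of many independent replicas in the bits of a single machine word, so one bitwise operation updates them all. The NumPy sketch below is a minimal illustration of that idea (64 replicas per uint64 word, checkerboard Metropolis updates, and one shared random number per site across bit planes, a classic multispin simplification that correlates the replicas); it is not Rebbi's exact algorithm or the paper's Cyber 205 vector code.

```python
import numpy as np

L, J, T = 64, 1.0, 2.5
beta = 1.0 / T
rng = np.random.default_rng(0)
ones = np.uint64(0xFFFFFFFFFFFFFFFF)

# bit 0/1 encodes spin +1/-1; each of the 64 bits is an independent replica
spins = rng.integers(0, np.iinfo(np.uint64).max, size=(L, L),
                     dtype=np.uint64, endpoint=True)

p4 = np.exp(-4.0 * beta * J)   # accept prob. for dE = +4J (1 antiparallel nbr)
p8 = np.exp(-8.0 * beta * J)   # accept prob. for dE = +8J (0 antiparallel nbrs)
parity_grid = np.add.outer(np.arange(L), np.arange(L)) % 2

def sweep(spins):
    for parity in (0, 1):      # checkerboard: neighbours never update together
        x1 = spins ^ np.roll(spins, 1, 0)   # XOR marks antiparallel neighbours
        x2 = spins ^ np.roll(spins, -1, 0)
        x3 = spins ^ np.roll(spins, 1, 1)
        x4 = spins ^ np.roll(spins, -1, 1)
        # bitwise full adder: per-bit count n (0..4) of antiparallel neighbours
        a, b = x1 & x2, x1 ^ x2
        c, d = x3 & x4, x3 ^ x4
        low, carry = b ^ d, b & d
        twos = a ^ c ^ carry
        fours = (a & c) | (a & carry) | (c & carry)
        # Metropolis: dE = 2J(4 - 2n), so n >= 2 is always accepted
        r = rng.random(spins.shape)          # one uniform per site, shared by
        m4 = np.where(r < p4, ones, np.uint64(0))   # all 64 bit planes
        m8 = np.where(r < p8, ones, np.uint64(0))
        accept = (twos | fours) | ((low & ~twos & ~fours) & m4) \
                 | (~(low | twos | fours) & m8)
        mask = np.where(parity_grid == parity, ones, np.uint64(0))
        spins ^= accept & mask               # flip accepted spins, all replicas
    return spins

for _ in range(200):
    spins = sweep(spins)
mags = [1.0 - 2.0 * float(((spins >> np.uint64(k)) & np.uint64(1)).mean())
        for k in range(64)]
print(f"mean |m| across replicas at T={T}: {np.mean(np.abs(mags)):.3f}")
```

The speedup comes from replacing 64 floating-point spin updates with a handful of word-wide logical operations, exactly the trade the abstract's vectorized implementation exploits.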
Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabha, H.; Marleau, G.
2012-07-01
For computations of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed using this code. For testing, the results are compared where possible with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module using MATLAB; these plots are also presented here. Results are presented with increasing numbers of lines to show the convergence of the results. We have numerically computed volumes, surface areas and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. Accuracy in the computation of fluxes up to the second decimal is achieved with fewer lines. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason P.; Pope, Chad; Toston, Mary
2016-12-01
Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
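The kind of stochastic dose calculation SODA performs can be illustrated with the five-factor source-term formulation common in DOE safety analysis (source term = MAR x DR x ARF x RF x LPF; inhalation dose = source term x chi/Q x BR x DCF). The sketch below samples assumed distributions and reports dose percentiles; every distribution and parameter value here is an illustrative placeholder, not SODA's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                       # Monte Carlo samples

MAR  = rng.triangular(50.0, 100.0, 150.0, N)      # material at risk (g), assumed
DR   = 1.0                                        # damage ratio (point value)
ARF  = rng.uniform(1e-4, 1e-2, N)                 # airborne release fraction
RF   = rng.uniform(0.1, 1.0, N)                   # respirable fraction
LPF  = 0.1                                        # leak path factor (point value)
chiQ = rng.lognormal(np.log(1e-4), 0.8, N)        # dispersion factor (s/m^3)
BR   = 3.3e-4                                     # breathing rate (m^3/s)
DCF  = 1.0e2                                      # dose coefficient (rem/g), assumed

ST = MAR * DR * ARF * RF * LPF                    # respirable source term (g)
dose = ST * chiQ * BR * DCF                       # inhalation dose (rem)
print("50th/95th/99th percentile dose (rem):",
      np.round(np.percentile(dose, [50, 95, 99]), 3))
```

Reporting percentiles of the resulting distribution, rather than a single bounding point value, is precisely the "deeper understanding of the dose potential" the abstract describes.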
Comparison of FDNS liquid rocket engine plume computations with SPF/2
NASA Technical Reports Server (NTRS)
Kumar, G. N.; Griffith, D. O., II; Warsi, S. A.; Seaford, C. M.
1993-01-01
Prediction of a plume's shape and structure is essential to the evaluation of base region environments. The JANNAF standard plume flowfield analysis code SPF/2 predicts plumes well, but cannot analyze base regions. Full Navier-Stokes CFD codes can calculate both zones; however, before they can be used, they must be validated. The CFD code FDNS3D (Finite Difference Navier-Stokes Solver) was used to analyze the single plume of a Space Transportation Main Engine (STME) and comparisons were made with SPF/2 computations. Both frozen and finite rate chemistry models were employed as well as two turbulence models in SPF/2. The results indicate that FDNS3D plume computations agree well with SPF/2 predictions for liquid rocket engine plumes.
Performance evaluation of CESM in simulating the dust cycle
NASA Astrophysics Data System (ADS)
Parajuli, S. P.; Yang, Z. L.; Kocurek, G.; Lawrence, D. M.
2014-12-01
Mineral dust in the atmosphere has implications for Earth's radiation budget, biogeochemical cycles, hydrological cycles, human health and visibility. Mineral dust is injected into the atmosphere during dust storms, when the surface winds are sufficiently strong and the land surface conditions are favorable. Dust storms are very common in specific regions of the world, including the Middle East and North Africa (MENA) region, which contains more than 50% of the global dust sources. In this work, we present simulations of the dust cycle under the framework of CESM1.2.2 and evaluate how well the model captures the spatio-temporal characteristics of dust sources, transport and deposition at the global scale, especially in dust source regions. We conducted our simulations using two existing erodibility maps (geomorphic and topographic) and a new erodibility map based on the correlation between observed wind and dust. We compare the simulated results with MODIS satellite data, MACC reanalysis data, and AERONET station data. Comparison with MODIS satellite data and MACC reanalysis data shows that all three erodibility maps generally reproduce the spatio-temporal characteristics of dust optical depth globally. However, comparison with AERONET station data shows that the simulated dust optical depth is generally overestimated for all erodibility maps. Results vary greatly by region and by the scale of the observational data. Our results also show that the simulations forced by reanalysis meteorology capture the overall dust cycle more realistically than the simulations done using online meteorology.
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
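The burst-error behaviour of Viterbi decoding, which motivates pairing it with an outer Reed-Solomon code, can be reproduced with a toy decoder. The sketch below implements hard-decision Viterbi decoding for a constraint-length-3, rate-1/2 convolutional code (generators 7, 5 octal) rather than the DSN's K=7 or K=10 codes; the channel errors and test message are synthetic.

```python
import random

G, K = [0b111, 0b101], 3             # toy (7,5) octal rate-1/2 code
NSTATES = 1 << (K - 1)

def outputs(state, bit):
    reg = (bit << (K - 1)) | state   # newest bit in the high position
    out = [bin(reg & g).count("1") & 1 for g in G]
    return out, reg >> 1             # code symbols and next state

def encode(bits):
    state, out = 0, []
    for b in bits:
        sym, state = outputs(state, b)
        out += sym
    return out

def viterbi(rx):
    INF = 10**9
    pm = [0] + [INF] * (NSTATES - 1)             # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for t in range(len(rx) // 2):
        r = rx[2 * t: 2 * t + 2]
        npm, npaths = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if pm[s] >= INF:
                continue
            for b in (0, 1):
                sym, ns = outputs(s, b)
                m = pm[s] + sum(a != c for a, c in zip(sym, r))  # Hamming metric
                if m < npm[ns]:
                    npm[ns], npaths[ns] = m, paths[s] + [b]
        pm, paths = npm, npaths
    return paths[min(range(NSTATES), key=pm.__getitem__)]

random.seed(0)
msg = [random.randint(0, 1) for _ in range(40)]
rx = [b ^ (random.random() < 0.05) for b in encode(msg)]   # 5% channel errors
errs = sum(a != b for a, b in zip(viterbi(rx), msg))
print(f"residual information-bit errors: {errs} (they tend to cluster in bursts)")
```

Because Viterbi failures emerge as short bursts of wrong information bits, a symbol-oriented outer code like the (255, 223) Reed-Solomon code absorbs them efficiently, which is the source of the concatenation gain the abstract quantifies.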
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Schultz, Martin; Brötz, Björn; Rauthe-Schöch, Armin; Thouret, Valérie
2015-04-01
IAGOS (In-service Aircraft for a Global Observing System) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled under an open access policy based on the submission of research requests, which are reviewed by the PIs. The IAGOS database (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data centre Ether (CNES and CNRS). In the framework of the IGAS project (IAGOS for the Copernicus Atmospheric Service), interoperability with international portals and other databases is implemented in order to improve IAGOS data discovery. The IGAS data network is composed of three data centres: the IAGOS database in Toulouse, including IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data since January 2015; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the MACC data centre in Jülich (http://join.iek.fz-juelich.de). The MACC (Monitoring Atmospheric Composition and Climate) project is a prominent user of the IGAS data network. In June 2015 a new version of the IAGOS database will be released, providing improved services such as download in NetCDF or NASA Ames formats, graphical tools (maps, scatter plots, etc.), standardized metadata (ISO 19115) and better user management. The link with the MACC data centre, through JOIN (Jülich OWS Interface), will allow model outputs to be combined with IAGOS data for intercomparison. The interoperability within the IGAS data network, implemented through many web services, will improve the functionalities of the web interfaces of each data centre.
Wang, Wei-Ting; You, Li-Kai; Chiang, Chern-En; Sung, Shih-Hsien; Chuang, Shao-Yuan; Cheng, Hao-Min; Chen, Chen-Huan
2016-01-01
Hypertension is the most important risk factor for stroke and stroke recurrence. However, the preferred blood pressure (BP)-lowering drug class for patients who have suffered a stroke has yet to be determined. To investigate the relative effects of BP-lowering therapies [angiotensin-converting enzyme inhibitors (ACEI), angiotensin receptor blockers (ARB), β blockers, calcium channel blockers (CCBs), diuretics, and combinations of these drugs] in patients with a prior stroke history, we performed a systematic review and meta-analysis using both traditional frequentist and Bayesian random-effects models and meta-regression of randomized controlled trials (RCTs) on the outcomes of recurrent stroke, coronary heart disease (CHD), and any major adverse cardiac and cerebrovascular events (MACCE). Trials were identified from searches of published hypertension guidelines, electronic databases, and previous systematic reviews. Fifteen RCTs comprising 39,329 participants with previous stroke were identified. Compared with placebo, only ACEI along with diuretics significantly reduced recurrent stroke events [odds ratio (OR) = 0.54, 95% credibility interval (95% CI) 0.33-0.90]. On the basis of the distribution of posterior probabilities, the treatment ranking consistently identified ACEI along with diuretics as the preferred BP-lowering strategy for the reduction of recurrent stroke and CHD (31% and 35%, respectively). For preventing MACCE, diuretics appeared to be the preferred agent for stroke survivors (34%). Moreover, the meta-regression analysis failed to demonstrate a statistically significant association between BP reduction and any of the outcomes (P = 0.1618 for total stroke, 0.4933 for CHD, and 0.2411 for MACCE). Evidence from RCTs supports the use of diuretics-based treatment, especially when combined with ACEI, for the secondary prevention of recurrent stroke and any vascular events in patients who have suffered a stroke. PMID:27082571
A new method for assessing surface solar irradiance: Heliosat-4
NASA Astrophysics Data System (ADS)
Qu, Z.; Oumbe, A.; Blanc, P.; Lefèvre, M.; Wald, L.; Schroedter-Homscheidt, M.; Gesell, G.
2012-04-01
Downwelling shortwave irradiance at the surface (SSI) is more and more often assessed by means of satellite-derived estimates of the optical properties of the atmosphere. Performance is judged satisfactory for the time being, but there is an increasing need for assessment of the direct and diffuse components of the SSI. MINES ParisTech and the German Aerospace Center (DLR) are currently developing the Heliosat-4 method to assess the SSI and its components in a more accurate way than current practice. This method is composed of two parts: a clear-sky module based on the radiative transfer model libRadtran, and a cloud-ground module using two-stream and delta-Eddington approximations for clouds and a database of ground albedo. Advanced products derived from geostationary satellites and recent Earth Observation missions are the inputs of the Heliosat-4 method. Such products are: cloud optical depth, cloud phase, cloud type and cloud coverage from APOLLO of DLR; aerosol optical depth, aerosol type, water vapour in clear sky, and ozone from MACC products (FP7); and ground albedo from MODIS of NASA. In this communication, we briefly present Heliosat-4 and focus on its performance. The results of Heliosat-4 for the period 2004-2010 will be compared to measurements made at five stations within the Baseline Surface Radiation Network. Extensive statistical analyses as well as case studies are performed in order to better understand Heliosat-4, obtain an in-depth view of its performance, understand its advantages compared to existing methods, and identify its shortcomings for future improvements. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 218793 (MACC project) and no. 283576 (MACC-II project).
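Heliosat-4's clear-sky module relies on a full radiative transfer model (libRadtran); purely as a conceptual aid, the direct component of clear-sky SSI can be sketched as Beer-Lambert attenuation of the extraterrestrial irradiance by the same inputs listed above (aerosol optical depth, water vapour, ozone). The optical depths below are invented placeholders, not MACC products, and the plane-parallel air mass is a crude approximation.

```python
import numpy as np

E0 = 1361.0                                  # extraterrestrial irradiance (W/m^2)
sza = np.radians(40.0)                       # solar zenith angle
m = 1.0 / np.cos(sza)                        # plane-parallel air mass approximation

# illustrative broadband optical depths of the clear-sky attenuators
tau = {"rayleigh": 0.10, "aerosol": 0.25, "ozone": 0.02, "water_vapour": 0.08}

dni = E0 * np.exp(-m * sum(tau.values()))    # direct normal irradiance
bhi = dni * np.cos(sza)                      # direct horizontal component of SSI
print(f"DNI ~ {dni:.0f} W/m^2, direct horizontal ~ {bhi:.0f} W/m^2")
```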
Force user's manual: A portable, parallel FORTRAN
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.
1990-01-01
The use of Force, a parallel, portable FORTRAN on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, Alliant computers on which it is installed.
NASA Astrophysics Data System (ADS)
Massart, S.; Agusti-Panareda, A.; Aben, I.; Butz, A.; Chevallier, F.; Crevosier, C.; Engelen, R.; Frankenberg, C.; Hasekamp, O.
2014-06-01
The Monitoring Atmospheric Composition and Climate Interim Implementation (MACC-II) delayed-mode (DM) system has been producing an atmospheric methane (CH4) analysis 6 months behind real time since June 2009. This analysis used to rely on the assimilation of the CH4 product from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) instrument onboard Envisat. Recently the Laboratoire de Météorologie Dynamique (LMD) CH4 products from the Infrared Atmospheric Sounding Interferometer (IASI) and the SRON Netherlands Institute for Space Research CH4 products from the Thermal And Near-infrared Sensor for carbon Observation (TANSO) were added to the DM system. With the loss of Envisat in April 2012, the DM system now has to rely on the assimilation of methane data from TANSO and IASI. This paper documents the impact of this change in the observing system on the methane tropospheric analysis. It is based on four experiments: one free run and three analyses from, respectively, the assimilation of SCIAMACHY, TANSO, and a combination of TANSO and IASI CH4 products in the MACC-II system. The period between December 2010 and April 2012 is studied. The SCIAMACHY experiment globally underestimates the tropospheric methane by 35 parts per billion (ppb) compared to the HIAPER Pole-to-Pole Observations (HIPPO) data and by 28 ppb compared to the Total Carbon Column Observing Network (TCCON) data, while the free run presents an underestimation of 5 ppb and 1 ppb against the same HIPPO and TCCON data, respectively. The assimilated TANSO product changed in October 2011 from version v.1 to version v.2.0. The analysis of version v.1 globally underestimates the tropospheric methane by 18 ppb compared to the HIPPO data and by 15 ppb compared to the TCCON data. In contrast, the analysis of version v.2.0 globally overestimates the column by 3 ppb. When the high-density IASI data are added in the tropical region between 30° N and 30° S, their impact is mainly positive, but more pronounced and effective when combined with version v.2.0 of the TANSO products. The resulting analysis globally underestimates the column-averaged dry-air mole fraction of methane (xCH4) by just under 1 ppb on average compared to the TCCON data, whereas in the tropics it overestimates xCH4 by about 3 ppb. The random error is estimated to be less than 7 ppb when compared to TCCON data.
Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.L.; Wilson, J.H.; Arwood, P.C.
The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, "pretest" calculations (for code-to-code comparisons) and "posttest" calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.
MISR Regional GoMACCS Map Projection
Atmospheric Science Data Center
2017-03-29
... is needed if you are doing high precision work. The packages mentioned above (HDF-EOS library, GCTP, and IDL) all convert to and ...
A performance comparison of the Cray-2 and the Cray X-MP
NASA Technical Reports Server (NTRS)
Schmickley, Ronald; Bailey, David H.
1986-01-01
A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly computational fluid dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.
Scudiero, Fernando; Zocchi, Chiara; De Vito, Elena; Tarantini, Giuseppe; Marcucci, Rossella; Valenti, Renato; Migliorini, Angela; Antoniucci, David; Marchionni, Niccolò; Parodi, Guido
2018-07-01
The CHA2DS2-VASc score predicts stroke risk in patients with atrial fibrillation, but recently has been reported to have a prognostic role even in patients with ACS. We sought to assess the ability of the CHA2DS2-VASc score to predict the severity of coronary artery disease, high residual platelet reactivity and long-term outcomes in patients with acute coronary syndrome (ACS). Overall, 1729 consecutive patients with ACS undergoing invasive management were included in this prospective registry. We assessed platelet reactivity via light transmittance aggregometry after clopidogrel loading. Patients were divided according to the CHA2DS2-VASc score: group A = 0, B = 1, C = 2, D = 3, E = 4 and F ≥ 5. Patients with a higher CHA2DS2-VASc score were more likely to have a higher rate of multivessel CAD (37%, 47%, 55%, 62%, 67% and 75% in groups A, B, C, D, E and F; p < 0.001); moreover, the CHA2DS2-VASc score correlated linearly with residual platelet reactivity (R = 0.77; p < 0.001). At long-term follow-up, estimated adverse event rates (MACCE: cardiac death, MI, stroke or any urgent coronary revascularization) were 3%, 8%, 10%, 14%, 19% and 24% in groups A, B, C, D, E and F; p < 0.001. Multivariable analysis demonstrated CHA2DS2-VASc to be an independent predictor of the severity of coronary artery disease, of high residual platelet reactivity and of MACCE. In a cohort of patients with ACS, the CHA2DS2-VASc score correlated with coronary disease severity and residual platelet reactivity, and therefore it predicted the risk of long-term adverse events. Copyright © 2018 Elsevier B.V. All rights reserved.
A Guide to Axial-Flow Turbine Off-Design Computer Program AXOD2
NASA Technical Reports Server (NTRS)
Chen, Shu-Cheng S.
2014-01-01
A user's guide for the axial-flow turbine off-design computer program AXOD2 is presented in this paper. This guide is supplementary to the original user's manual for AXOD. Three notable contributions of AXOD2 relative to its predecessor AXOD, both in the content of the guide and in the functionality of the code, are described and discussed at length. These are: 1) a rational representation of the mathematical principles applied, with concise descriptions of the formulas implemented in the actual coding and a discussion of their physical implications; 2) the creation and documentation of an addendum listing of input namelist parameters unique to AXOD2, which differ from or are in addition to the original input namelists given in the manual of AXOD, together with their usage; and 3) the institution of proper stoppages of code execution, with termination messages and error messages encoded into AXOD2. These measures safeguard the integrity of the code execution, such that a failure mode encountered during a case study will not plunge the code execution into an indefinite loop or cause a blow-out of the program execution. Details on these are discussed and illustrated in this paper. Moreover, this computer program has since been reconstructed substantially. Standard FORTRAN language was instituted, and the code was formatted in double precision (REAL*8). As a result, the code is now suited for use in a local desktop computer environment, is perfectly portable to any operating system, and can be executed by any compiler equivalent to a FORTRAN 90/95 compiler. AXOD2 will be available through the NASA Glenn Research Center (GRC) Software Repository.
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
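The trajectory codes described above integrate the water drop equations of motion using experimental drag relations and gravity settling. As a hedged illustration, the sketch below integrates a single drop through a uniform free stream, substituting the Schiller-Naumann drag correlation for the report's experimental drag curves; the drop size, velocities, and time step are invented for the example.

```python
import numpy as np

rho_a, mu_a = 1.225, 1.81e-5        # air density (kg/m^3) and viscosity (Pa s)
rho_w = 1000.0                      # water density (kg/m^3)
g = np.array([0.0, -9.81])          # gravity (m/s^2)

def drag_coeff(Re):
    # Schiller-Naumann correlation (valid to Re ~ 800), a stand-in for
    # the report's experimental water drop drag relations
    Re = max(Re, 1e-12)
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687)

def accel(v, u, d):
    vr = u - v                      # velocity of the air relative to the drop
    s = np.linalg.norm(vr)
    Re = rho_a * s * d / mu_a
    mass = rho_w * np.pi * d**3 / 6.0
    area = np.pi * d**2 / 4.0
    f_drag = 0.5 * rho_a * drag_coeff(Re) * area * s * vr
    return g + f_drag / mass        # drag plus gravity settling

d = 100e-6                          # 100 micron drop (assumed)
u_inf = np.array([80.0, 0.0])       # uniform free stream (m/s), assumed
x, v, dt = np.zeros(2), u_inf.copy(), 1e-4
for _ in range(2000):               # forward-Euler integration over 0.2 s
    v = v + dt * accel(v, u_inf, d)
    x = x + dt * v
print("position (m):", x, " velocity (m/s):", v)
```

In the actual codes the uniform stream is replaced by the potential flow field computed around the body, and trajectories are marched until impact or escape; the integration step above is the common core.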
Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen
NASA Technical Reports Server (NTRS)
Blackwell, H. E.
1991-01-01
An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions, and vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc jet N2 and air spectral data were reviewed, and NO emission characteristics were studied. A review of the operational procedures of the DSMC code was made. Information on other appropriate codes and modifications, including ionization, was gathered, and the applicability of the reviewed codes to the task requirements was determined. A review was also made of the computational procedures used in the CFD codes of Li and other codes on JSC computers. An analysis was made of problems associated with integrating task-specific chemical kinetics into CFD codes.
Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.
1991-01-01
The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.
A fast technique for computing syndromes of BCH and RS codes. [deep space network
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Miller, R. L.
1979-01-01
A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2 to the m power). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
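For context, the conventional baseline the paper improves upon is direct evaluation of the received polynomial at consecutive powers of a primitive element α, i.e. S_j = r(α^j). The sketch below does this over GF(16) (primitive polynomial x^4 + x + 1) for an RS(15,11)-style setup with four syndromes; it illustrates the direct method only, not the Chinese Remainder/Winograd transform scheme itself.

```python
# GF(16) arithmetic via exp/log tables, primitive polynomial x^4 + x + 1
PRIM = 0b10011
exp = [0] * 30
log = [0] * 16
x = 1
for i in range(15):                  # alpha has order 15 in GF(16)
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):              # duplicate so log sums never wrap
    exp[i] = exp[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]

def syndromes(r, n_syn):
    # S_j = r(alpha^j), evaluated by Horner's rule (highest-order coeff first)
    out = []
    for j in range(1, n_syn + 1):
        a, s = exp[j % 15], 0
        for c in r:
            s = gf_mul(s, a) ^ c
        out.append(s)
    return out

codeword = [0] * 15                  # the all-zero word is a valid codeword
print(syndromes(codeword, 4))        # -> [0, 0, 0, 0]
codeword[4] ^= 0b0110                # inject a single symbol error
print(syndromes(codeword, 4))        # nonzero syndromes expose the error
```

Each syndrome costs n multiplications this way; the transform-based scheme in the abstract shares work across the syndromes to cut that count substantially.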
Raptor: An Enterprise Knowledge Discovery Engine Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2011-08-31
The Raptor Version 2.0 computer code uses a set of documents as seed documents to recommend documents of interest from a large, target set of documents. The computer code provides results that show the recommended documents with the highest similarity to the seed documents. Version 2.0 was specifically developed to work with SharePoint 2007 and MS SQL server.
Off-pump compared to minimal extracorporeal circulation surgery in coronary artery bypass grafting.
Reuthebuch, Oliver; Koechlin, Luca; Gahl, Brigitta; Matt, Peter; Schurr, Ulrich; Grapow, Martin; Eckstein, Friedrich
2014-01-01
Coronary artery bypass grafting (CABG) using extracorporeal circulation (ECC) is still the gold standard. However, alternative techniques have been developed to avoid ECC and its potential adverse effects. These encompass minimal extracorporeal circulation (MECC) and off-pump coronary artery bypass grafting (OPCAB). However, the potential benefits of MECC compared with OPCAB are not yet clearly established. In this retrospective study we investigated the potential benefits of MECC and OPCAB in 697 patients undergoing CABG. Of these, 555 patients had been operated on with MECC and 142 off-pump. The primary endpoint was the Troponin T level as an indicator of myocardial damage. The study groups were generally not significantly different. However, patients undergoing OPCAB were significantly older (65.01 years ± 9.5 vs 69.39 years ± 9.5; p < 0.001), with a higher logistic EuroSCORE I (4.92% ± 6.5 vs 5.88% ± 6.8; p = 0.017). Operating off-pump significantly reduced the need for intra-operative blood products (0.7% vs 8.6%; p < 0.001) and the length of stay in the intensive care unit (ICU) (2.04 days ± 2.63 vs 2.76 days ± 2.79; p < 0.001). Regarding other blood values, a significant difference could not be found in the adjusted calculations. The combined secondary endpoint, major adverse cardiac or cerebrovascular events (MACCE), was equal in both groups as well. Coronary artery bypass grafting using MECC or OPCAB are two comparable techniques, with advantages for OPCAB regarding the reduced need for intra-operative blood products and a shorter length of stay in the ICU. However, serological values and the combined endpoint MACCE did not differ significantly between the groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
Five-year outcomes of staged percutaneous coronary intervention in the SYNTAX study.
Watkins, Stuart; Oldroyd, Keith G; Preda, Istvan; Holmes, David R; Colombo, Antonio; Morice, Marie-Claude; Leadley, Katrin; Dawkins, Keith D; Mohr, Friedrich W; Serruys, Patrick W; Feldman, Ted E
2015-04-01
The SYNTAX study compared PCI with TAXUS Express stents to CABG for the treatment of de novo 3-vessel and/or left main coronary disease. This study aimed to determine patient characteristics and five-year outcomes after a staged PCI strategy compared to single-session PCI. In the SYNTAX trial, staged procedures were discouraged but were allowed within 72 hours or, if renal insufficiency or contrast-induced nephropathy occurred, within 14 days (mean 9.8±18.1 days post initial procedure). A total of 125 (14%) patients underwent staged PCI. These patients had greater disease severity and/or required a more complex procedure. MACCE was significantly increased in staged patients (48.1% vs. 35.5%, p=0.004), as was the composite of death/stroke/MI (32.2% vs. 19%, p=0.0007). Individually, cardiac death and stroke occurred more frequently in the staged PCI group (p=0.03). Repeat revascularisation was significantly higher in staged patients (32.8% vs 24.8%, p=0.035), as was stent thrombosis (10.9% vs. 4.7%, p=0.005). There is a higher incidence of MACCE in patients undergoing staged compared to single-session PCI for 3-vessel and/or left main disease over the first five years of follow-up. However, these patients had more comorbidities and more diffuse disease.
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.
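As one concrete piece of what such a code does, the strain-rosette step can be written out for a rectangular (0/45/90-degree) three-gage rosette: principal strains, their orientation, and plane-stress magnitudes via Hooke's law. This is a generic textbook sketch, not the Warpinski-Teufel viscoelastic model or ASR4's four-gage formulation; the elastic constants and recovered strains below are hypothetical.

```python
import math

def rectangular_rosette(ea, eb, ec, E=30e9, nu=0.25):
    """Principal strains/stresses and orientation from 0/45/90-deg gage strains."""
    avg = 0.5 * (ea + ec)
    r = math.sqrt((ea - eb) ** 2 + (eb - ec) ** 2) / math.sqrt(2.0)
    e1, e2 = avg + r, avg - r                              # principal strains
    theta = 0.5 * math.atan2(2 * eb - ea - ec, ea - ec)    # angle of e1 from gage a
    s1 = E / (1 - nu ** 2) * (e1 + nu * e2)                # plane-stress Hooke's law
    s2 = E / (1 - nu ** 2) * (e2 + nu * e1)
    return e1, e2, math.degrees(theta), s1, s2

# hypothetical recovered strains (converted from microstrain)
e1, e2, theta_deg, s1, s2 = rectangular_rosette(250e-6, 180e-6, 90e-6)
print(f"e1={e1:.2e}, e2={e2:.2e}, theta={theta_deg:.1f} deg, "
      f"s1={s1/1e6:.1f} MPa, s2={s2/1e6:.1f} MPa")
```

In ASR work the same geometry logic is applied to the recovered (anelastic) strains rather than instantaneous elastic ones, which is why the viscoelastic fit precedes the orientation step.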
NASA Technical Reports Server (NTRS)
Hanson, Donald B.
1994-01-01
A two dimensional linear aeroacoustic theory for rotor/stator interaction with unsteady coupling was derived and explored in Volume 1 of this report. Computer program CUP2D has been written in FORTRAN embodying the theoretical equations. This volume (Volume 2) describes the structure of the code, installation and running, preparation of the input file, and interpretation of the output. A sample case is provided with printouts of the input and output. The source code is included with comments linking it closely to the theoretical equations in Volume 1.
NASA Technical Reports Server (NTRS)
Rodal, J. J. A.; French, S. E.; Witmer, E. A.; Stagliano, T. R.
1979-01-01
The CIVM-JET 4C computer program for the 'finite strain' analysis of two-dimensional transient structural responses of complete or partial rings and beams subjected to fragment impact is stored on tape as a series of individual files. The subroutines found in each of these files are described in detail. All references to the CIVM-JET 4C program assume that the user has a copy of NASA CR-134907 (ASRL TR 154-9), which serves as a user's guide to (1) the CIVM-JET 4B computer code and (2) the CIVM-JET 4C computer code 'with the use of the modified input instructions' attached hereto.
Marginal abatement cost curves for NOx incorporating both controls and alternative measures
A marginal abatement cost curve (MACC) traces out the efficient marginal abatement cost level for any aggregate emissions target when a least cost approach is implemented. In order for it to represent the efficient MAC level, all abatement opportunities across all sectors and loc...
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons made against experimental data and solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined-cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
A supersonic three-dimensional code for flow over blunt bodies: Program documentation and test cases
NASA Technical Reports Server (NTRS)
Chaussee, D. S.; Mcmillan, O. J.
1980-01-01
The use of a computer code for the calculation of steady, supersonic, three dimensional, inviscid flow over blunt bodies is illustrated. Input and output are given and explained for two cases: a pointed cone of 20 deg half angle at 15 deg angle of attack in a free stream with M∞ = 7, and a cone-ogive-cylinder at 10 deg angle of attack with M∞ = 2.86. A source listing of the computer code is provided.
NASA Astrophysics Data System (ADS)
Chubarova, Nataly; Poliukhov, Alexei; Shatunova, Marina; Rivin, Gdali; Becker, Ralf; Muskatel, Harel; Blahak, Ulrich; Kinne, Stefan; Tarasova, Tatiana
2017-04-01
We use the operational Russian COSMO-Ru weather forecast model (Ritter and Geleyn, 1992) with different aerosol input data for the evaluation of radiative and temperature effects of aerosol in different atmospheric conditions. Various aerosol datasets were utilized, including the Tegen climatology (Tegen et al., 1997), the updated MACv2 climatology (Kinne et al., 2013), the Tanre climatology (Tanre et al., 1984), as well as the MACC data (Morcrette et al., 2009). For clear sky conditions we compare the radiative effects from the COSMO-Ru model over Moscow (55.7N, 37.5E) and the Lindenberg/Falkenberg sites (52.2N, 14.1E) with the results obtained using long-term aerosol measurements. Additional tests of the COSMO RT code were performed against the (FC05)-SW model (Tarasova and Fomin, 2007); an overestimation of about 5-8% by the COSMO RT code was obtained. The study of the aerosol effect on temperature at 2 meters revealed a sensitivity of about 0.7-1.1 degrees C per 100 W/m2 change in shortwave net radiation due to aerosol variations. We also discuss the radiative impact of urban aerosol properties according to the long-term AERONET measurements in Moscow and a Moscow suburb, as well as long-term aerosol trends over Moscow from the measurements and the MACv2 dataset. References: Kinne, S., O'Donnel, D., Stier, P., et al., J. Adv. Model. Earth Syst., 5, 704-740, 2013. Morcrette, J.-J., Boucher, O., Jones, L., et al., J. Geophys. Res., 114, D06206, doi:10.1029/2008JD011235, 2009. Ritter, B. and Geleyn, J., Monthly Weather Review, 120, 303-325, 1992. Tanre, D., Geleyn, J., and Slingo, J., A. Deepak Publ., Hampton, Virginia, 133-177, 1984. Tarasova, T. and Fomin, B., Journal of Atmospheric and Oceanic Technology, 24, 1157-1162, 2007. Tegen, I., Hollrig, P., Chin, M., et al., Journal of Geophysical Research-Atmospheres, 102, 23895-23915, 1997.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
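LOKI's phase-space discretization is described as fully 4th-order accurate and conservative. As a hedged illustration of that scheme class (not LOKI's actual implementation), the sketch below applies a 4th-order central, conservative flux difference to a 1D advection term on a periodic grid; because the update is written as a difference of interface fluxes, the grid sum of the right-hand side telescopes to zero.

```python
import numpy as np

def conservative_4th_order_rhs(f, u, dx):
    """-d(u f)/dx via a 4th-order central, conservative flux difference on a
    periodic grid -- illustrative of the scheme class, not LOKI's own code."""
    g = u * f                                            # pointwise flux
    gm1, gp1, gp2 = np.roll(g, 1), np.roll(g, -1), np.roll(g, -2)
    # Interface flux at i+1/2: (7(g_i + g_{i+1}) - (g_{i-1} + g_{i+2})) / 12
    G = (7.0 * (g + gp1) - (gm1 + gp2)) / 12.0
    # Flux divergence (G_{i+1/2} - G_{i-1/2}) / dx
    return -(G - np.roll(G, 1)) / dx

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
rhs = conservative_4th_order_rhs(np.exp(np.sin(x)), u=1.0, dx=x[1] - x[0])
print(abs(rhs.sum()))    # ~1e-14: discrete conservation
```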
Experimental and analytical comparison of flowfields in a 110 N (25 lbf) H2/O2 rocket
NASA Technical Reports Server (NTRS)
Reed, Brian D.; Penko, Paul F.; Schneider, Steven J.; Kim, Suk C.
1991-01-01
A gaseous hydrogen/gaseous oxygen 110 N (25 lbf) rocket was examined through the RPLUS code using the full Navier-Stokes equations with finite rate chemistry. Performance tests were conducted on the rocket in an altitude test facility. Preliminary parametric analyses were performed for a range of mixture ratios and fuel film cooling percentages. It is shown that the computed values of specific impulse and characteristic exhaust velocity follow the trend of the experimental data. Specific impulse computed by the code is lower than the comparable test values by about two to three percent. The computed characteristic exhaust velocity values are lower than the comparable test values by three to four percent. Thrust coefficients computed by the code are found to be within two percent of the measured values. It is concluded that the discrepancy between computed and experimental performance values could not be attributed to experimental uncertainty.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code to the target description data that can be used in the GIFT computer code.
Procedures for the computation of unsteady transonic flows including viscous effects
NASA Technical Reports Server (NTRS)
Rizzetta, D. P.
1982-01-01
Modifications of the code LTRAN2, developed by Ballhaus and Goorjian, which account for viscous effects in the computation of planar unsteady transonic flows are presented. Two models are considered, and their theoretical development and numerical implementation are discussed. Computational examples employing both models are compared with inviscid solutions and with experimental data. Use of the modified code is described.
NASA Astrophysics Data System (ADS)
Yahya, Khairunnisa; He, Jian; Zhang, Yang
2015-12-01
Multiyear applications of an online-coupled meteorology-chemistry model allow an assessment of the variation trends in simulated meteorology, air quality, and their interactions to changes in emissions and meteorology, as well as the impacts of initial and boundary conditions (ICONs/BCONs) on simulated aerosol-cloud-radiation interactions over a period of time. In this work, the Weather Research and Forecasting model with Chemistry version 3.4.1 (WRF/Chem v. 3.4.1) with the 2005 Carbon Bond mechanism coupled with the Volatility Basis Set module for secondary organic aerosol formation (WRF/Chem-CB05-VBS) is applied for multiple years (2001, 2006, and 2010) over the continental U.S. This work also examines the changes in simulated air quality and meteorology due to changes in emissions and meteorology and the model's capability in reproducing the observed variation trends in species concentrations from 2001 to 2010. In addition, the impacts of the chemical ICONs/BCONs on model predictions are analyzed. ICONs/BCONs are downscaled from two global models, the modified Community Earth System Model/Community Atmosphere model version 5.1 (CESM/CAM v5.1) and the Monitoring Atmospheric Composition and Climate model (MACC). The evaluation of WRF/Chem-CB05-VBS simulations with the CESM ICONs/BCONs for 2001, 2006, and 2010 shows that temperature at 2 m (T2) is underpredicted for all three years, likely due to inaccuracies in soil moisture and soil temperature, resulting in biases in surface relative humidity, wind speed, and precipitation. With the exception of cloud fraction, other aerosol-cloud variables including aerosol optical depth, cloud droplet number concentration, and cloud optical thickness are underpredicted for all three years, resulting in overpredictions of radiation variables. The model performs well for O3 and particulate matter with diameter less than or equal to 2.5 μm (PM2.5) for all three years, comparable to other studies from the literature. The model is able to reproduce observed annual average trends in O3 and PM2.5 concentrations from 2001 to 2006 and from 2006 to 2010 but is less skillful in simulating their observed seasonal trends. The 2006 and 2010 results using CESM and MACC ICONs/BCONs are compared to analyze the impact of ICONs/BCONs on model performance and their feedbacks to aerosol, clouds, and radiation. Compared to the simulations with MACC ICONs/BCONs, the simulations with the CESM ICONs/BCONs improve the performance of O3 mixing ratios (e.g., the normalized mean bias for maximum 8 h O3 is reduced from -17% to -1% in 2010), PM2.5 in 2010, and sulfate in 2006 (despite a slightly larger normalized mean bias for PM2.5 in 2006). The impacts of different ICONs/BCONs on simulated aerosol-cloud-radiation variables are not negligible, with larger impacts in 2006 compared to 2010.
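The skill metric quoted in this abstract, the normalized mean bias, is simple to compute. A minimal sketch follows; the observation and model values are hypothetical placeholders, not data from the study.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs), a standard air-quality metric."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()

obs = [62.0, 55.0, 71.0, 48.0]      # hypothetical max 8-h O3 observations, ppb
model = [51.0, 49.0, 60.0, 42.0]    # hypothetical model predictions, ppb
print(f"NMB = {100.0 * normalized_mean_bias(model, obs):+.1f}%")
```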
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in the THC-MP. We designed the distributed data structure and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of the THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computing with those from sequential computing (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of the THC-MP on parallel computing facilities.
Ansel, Gary M; Hopkins, L Nelson; Jaff, Michael R; Rubino, Paolo; Bacharach, J Michael; Scheinert, Dierk; Myla, Subbarao; Das, Tony; Cremonesi, Alberto
2010-07-01
The multicenter ARMOUR (ProximAl PRotection with the MO.MA Device DUring CaRotid Stenting) trial evaluated the 30-day safety and effectiveness of the MO.MA Proximal Cerebral Protection Device (Invatec, Roncadelle, Italy) utilized to treat high surgical risk patients undergoing carotid artery stenting (CAS). Distal embolic protection devices (EPD) have been traditionally utilized during CAS. The MO.MA device acts as a balloon occlusion "endovascular clamping" system to achieve cerebral protection prior to crossing the carotid stenosis. This prospective registry enrolled 262 subjects, 37 roll-in and 225 pivotal subjects evaluated with intention to treat (ITT) from September 2007 to February 2009. Subjects underwent CAS using the MO.MA device. The primary endpoint, myocardial infarction, stroke, or death through 30 days (30-day major adverse cardiac and cerebrovascular events [MACCE]), was compared to a performance goal of 13% derived from trials utilizing distal EPD. For the ITT population, the mean age was 74.7 years with 66.7% of the cohort being male. Symptomatic patients comprised 15.1% and 28.9% were octogenarians. Device success was 98.2% and procedural success was 93.2%. The 30-day MACCE rate was 2.7% [95% CI (1.0-5.8%)] with a 30-day major stroke rate of 0.9%. No symptomatic patient suffered a stroke during this trial. The ARMOUR trial demonstrated that the MO.MA® Proximal Cerebral Protection Device is safe and effective for high surgical risk patients undergoing CAS. The absence of stroke in symptomatic patients is the lowest rate reported in any independently adjudicated prospective multicenter registry trial to date. © 2010 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, R. V.; Chen, Y.; Wan, W.
2013-10-15
A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes gyro [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and gs2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code gem [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of gem with gyro and gs2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube, with 128 megabytes of memory in a 32-node configuration, was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Error threshold for color codes and random three-body Ising models.
Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A
2009-08-28
We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
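To make the statistical-mechanical mapping concrete, the toy Monte Carlo sketch below simulates a random three-body Ising model with Metropolis updates on triangular plaquettes of a periodic square lattice. It is an illustration of the model class only: the lattice geometry, coupling disorder, and parameter values are simplified assumptions, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, beta, p = 16, 0.5, 0.10        # lattice size, inverse temperature, disorder rate
s = rng.choice([-1, 1], size=(L, L))
# Quenched +/-1 couplings: J = -1 with probability p, one per triangle;
# each unit cell carries two triangle types (A and B)
JA = np.where(rng.random((L, L)) < p, -1, 1)
JB = np.where(rng.random((L, L)) < p, -1, 1)

def local_field(a, b):
    """Sum of J * (product of the other two spins) over the six triangles
    containing site (a, b), with periodic boundaries."""
    am, bm, ap, bp = (a - 1) % L, (b - 1) % L, (a + 1) % L, (b + 1) % L
    return (JA[a, b] * s[ap, b] * s[a, bp]
            + JA[am, b] * s[am, b] * s[am, bp]
            + JA[a, bm] * s[a, bm] * s[ap, bm]
            + JB[am, b] * s[am, bp] * s[a, bp]
            + JB[a, bm] * s[ap, bm] * s[ap, b]
            + JB[am, bm] * s[a, bm] * s[am, b])

for _ in range(200):                          # Metropolis sweeps
    for a in range(L):
        for b in range(L):
            dE = 2.0 * s[a, b] * local_field(a, b)
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                s[a, b] = -s[a, b]

# Each triangle appears three times in the site sum, hence the factor 1/3
E = -sum(s[a, b] * local_field(a, b) for a in range(L) for b in range(L)) / 3.0
print("energy per spin:", E / L**2)
```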
Photoionization and High Density Gas
NASA Technical Reports Server (NTRS)
Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.
Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold
1997-01-01
The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
NASA Technical Reports Server (NTRS)
Nguyen, H. L.; Ying, S.-J.
1990-01-01
Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as the first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. The researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three different detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, developed through the study of the chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, these detailed reactions occupy too much computer memory and CPU time for the computation. Therefore, the full mechanism serves only as a benchmark case by which to evaluate other simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 for the same conditions as used with the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program code of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.
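The combustion model named here is Arrhenius kinetic-controlled, so each global step carries a rate coefficient of the Arrhenius form. The sketch below shows only that rate form; the pre-exponential factor and activation energy are hypothetical illustration values, not KIVA-2's.

```python
import numpy as np

R = 8.314                        # universal gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Arrhenius rate coefficient k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

# Hypothetical global-step parameters, for illustration only (not KIVA-2's):
A, Ea = 8.6e11, 1.256e5          # pre-exponential factor; activation energy, J/mol
for T in (900.0, 1200.0, 1500.0):
    print(f"T = {T:6.0f} K   k = {arrhenius(A, Ea, T):.3e}")
```

The strong temperature dependence of k is what makes shock-tube ignition delay times, which vary roughly inversely with the controlling rate, a sensitive benchmark for reduced mechanisms.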
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
NASA Technical Reports Server (NTRS)
Liu, D. D.; Kao, Y. F.; Fung, K. Y.
1989-01-01
A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2; namely, the chordwise mean flow correction and the spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data. Otherwise, it does not require airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including those with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established by comparison with those obtained from the XTRAN3S code, Isogai's full-potential code, and data measured by NLR and RAE. In comparison with these methods, the TES method has achieved considerable savings in computer time and reasonable accuracy, which suggests immediate industrial applications.
Capodanno, Davide; Caggegi, Anna; Capranzano, Piera; Cincotta, Glauco; Miano, Marco; Barrano, Gionbattista; Monaco, Sergio; Calvo, Francesco; Tamburino, Corrado
2011-06-01
The aim of this study is to verify the study hypothesis of the EXCEL trial by comparing percutaneous coronary intervention (PCI) and coronary artery bypass graft (CABG) in an EXCEL-like population of patients. The upcoming EXCEL trial will test the hypothesis that left main patients with SYNTAX score ≤ 32 experience similar rates of 3-year death, myocardial infarction (MI), or cerebrovascular accidents (CVA) following revascularization by PCI or CABG. We compared the 3-year rates of death/MI/CVA and death/MI/CVA/target vessel revascularization (MACCE) in 556 patients with left main disease and SYNTAX score ≤ 32 undergoing PCI (n = 285) or CABG (n = 271). To account for confounders, outcome parameters underwent extensive statistical adjustment. The unadjusted incidence of death/MI/CVA was similar between PCI and CABG (12.7% vs. 8.4%, P = 0.892), while MACCE were higher in the PCI group compared to the CABG group (27.0% vs. 11.8%, P < 0.001). After propensity score matching, PCI was not associated with a significant increase in the rate of death/MI/CVA (11.8% vs. 10.7%, P = 0.948), while MACCE were more frequently noted among patients treated with PCI (28.8% vs. 14.1%, P = 0.002). Adjustment by means of SYNTAX score and EUROSCORE, covariates with and without propensity score, and propensity score alone did not change significantly these findings. In an EXCEL-like cohort of patients with left main disease, there seems to be a clinical equipoise between PCI and CABG in terms of death/MI/CVA. However, even in patients with SYNTAX score ≤ 32, CABG is superior to PCI when target vessel revascularization is included in the combined endpoint. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Knowland, K. E.; Doherty, R. M.; Hodges, K.
2015-12-01
The influence of the North Atlantic Oscillation (NAO) on the tropospheric distributions of ozone (O3) and carbon monoxide (CO) has been quantified. The Monitoring Atmospheric Composition and Climate (MACC) Reanalysis, a combined meteorology and composition dataset for the period 2003-2012 (Inness et al., 2013), is used to investigate the composition of the troposphere and lower stratosphere in relation to the location of the storm track as well as other meteorological parameters over the North Atlantic associated with the different NAO phases. Cyclone tracks in the MACC Reanalysis compare well to the cyclone tracks in the widely-used ERA-Interim Reanalysis for the same 10-year period (cyclone tracking performed using the tracking algorithm of Hodges (1995, 1999)), as both are based on the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecast System (IFS). A seasonal analysis is performed whereby the MACC reanalysis meteorological fields, O3 and CO mixing ratios are weighted by the monthly NAO index values. The location of the main storm track, which tilts towards high latitudes (toward the Arctic) during positive NAO phases to a more zonal location in the mid-latitudes (toward Europe) during negative NAO phases, impacts the location of both horizontal and vertical transport across the North Atlantic and into the Arctic. During positive NAO seasons, the persistence of cyclones over the North Atlantic coupled with a stronger Azores High promotes strong horizontal transport across the North Atlantic throughout the troposphere. In all seasons, significantly more intense cyclones occur at higher latitudes (north of ~50°N) during the positive phase of the NAO and in the southern mid-latitudes during the negative NAO phase. This impacts the location of stratospheric intrusions within the descending dry airstream behind the associated cold front of the extratropical cyclone and the venting of low-level pollution up into the free troposphere within the warm conveyor belt airstream which rises ahead of the cold front.
Mammo, Dalia F; Cheng, Chin-I; Ragina, Neli P; Alani, Firas
This study seeks to identify factors associated with periprocedural complications of carotid artery stenting (CAS) to best understand CAS complication rates and optimize patient outcomes. Periprocedural complications include major adverse cardiovascular and cerebrovascular events (MACCE), comprising myocardial infarction (MI), stroke, or death. We retrospectively analyzed 181 patients from Northern Michigan who underwent CAS. Rates of stroke, MI, and death occurring within 30 days post-procedure were examined. Associations of open vs. closed cell stent type, demographics, comorbidities, and symptomatic carotid stenosis were compared to determine significance. All patients had three NIH Stroke Scale (NIHSS) exams: at baseline, 24 h post-procedure, and at the one-month visit. Cardiac enzymes were measured twice in all patients, within 24 h post-procedure. All patients were treated with dual anti-platelet therapy for at least 6 months post-procedure. Three patients (1.66%) experienced a major complication within one month post-procedure. These complications included one MI (0.55%), one stroke (0.55%), and one death (0.55%). The following factors were not associated with the occurrence of MACCE complications within 30 days post-procedure: stent design (open vs. closed cell) (p=1.000), age ≥80 (p=0.559), smoking history (p=0.569), hypertension (p=1.000), diabetes (p=1.000), and symptomatic carotid stenosis (p=0.254). Age of 80 years or above, symptomatic carotid stenosis, open-cell stent design, and history of diabetes, smoking, or hypertension were not found to have an association with MACCE within 1 month after CAS. Future studies using a greater sample size will be beneficial to better assess periprocedural complication risks of CAS, while also considering the effect of operator experience and technological advancements on decreasing periprocedural complication rates. Copyright © 2017 Elsevier Inc. All rights reserved.
Torres, Carolina A; Sepúlveda, Gloria; Kahlaoui, Besma
2017-01-01
Sun-related physiological disorders such as sun damage on apples (Malus domestica Borkh) are caused by cumulative photooxidative and heat stress during their growing season, triggering morphological, physiological, and biochemical changes in fruit tissues not only while the fruit is on the tree but also after it has been harvested. The objective of the work was to establish the interaction of auxin (indole-3-acetic acid; IAA), abscisic acid (ABA), jasmonic acid (JA), salicylic acid (SA), and ethylene (ET) and its precursor ACC (free and conjugated, MACC) during development of sun-injury-related disorders pre- and post-harvest on apples. Peel tissue was extracted from fruit growing under different sun exposures (Non-exposed, NE; Exposed, EX) and with sun injury symptoms (Moderate, Mod). Sampling was carried out every 15 days from 75 days after full bloom (DAFB) until 120 days post-harvest in cold storage (1°C, >90% RH). Concentrations of IAA, ABA, JA, and SA were determined using UHPLC mass spectrometry, and ET and ACC (free and conjugated MACC) using gas chromatography. IAA was found not to be related directly to sun injury development, but it decreased 60% in sun-exposed tissue and during fruit development. ABA, JA, SA, and ethylene concentrations were significantly higher (P ≤ 0.05) in Mod tissue, but their concentrations, except for ethylene, were not affected by sun exposure. ACC and MACC concentrations increased until 105 DAFB in all sun exposure categories. During post-harvest, the ethylene climacteric peak was delayed in EX compared to Mod tissue. ABA and SA concentrations remained stable throughout storage in both tissues. JA dramatically increased post-harvest in both EX and Mod tissues and orchards, confirming its role in low temperature tolerance. The results suggest that ABA, JA, and SA together with ethylene are modulating some of the abiotic stress defense responses on sun-exposed fruit during photooxidative and heat stress on the tree.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamics and aeroacoustics properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
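The parallel strategy described, adding MPI message-passing calls to a domain-decomposed serial solver, typically reduces to exchanging ghost (halo) cells between neighboring processors each iteration. The mpi4py sketch below illustrates that pattern under simplified assumptions (a 1D slab decomposition and a hypothetical file name); it is not TURNS code.

```python
# Run with: mpiexec -n 4 python halo_exchange.py   (hypothetical file name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one slab of the 1D domain plus a ghost cell on each side
n_local = 8
u = np.full(n_local + 2, float(rank))     # interior values tagged by rank

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Fill ghost cells from neighbors; Sendrecv pairs the calls to avoid deadlock
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)

print(rank, "ghost cells:", u[0], u[-1])
```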
Variation in clinical coding lists in UK general practice: a barrier to consistent data entry?
Tai, Tracy Waize; Anandarajah, Sobanna; Dhoul, Neil; de Lusignan, Simon
2007-01-01
Routinely collected general practice computer data are used for quality improvement; poor data quality including inconsistent coding can reduce their usefulness. To document the diversity of data entry systems currently in use in UK general practice and highlight possible implications for data quality. General practice volunteers provided screen shots of the clinical coding screen they would use to code a diagnosis or problem title in the clinical consultation. The six clinical conditions examined were: depression, cystitis, type 2 diabetes mellitus, sore throat, tired all the time, and myocardial infarction. We looked at the picking lists generated for these problem titles in EMIS, IPS, GPASS and iSOFT general practice clinical computer systems, using the Triset browser as a gold standard for comparison. A mean of 19.3 codes is offered in the picking list after entering a diagnosis or problem title. EMIS produced the longest picking lists and GPASS the shortest, with a mean number of choices of 35.2 and 12.7, respectively. Approximately three-quarters (73.5%) of codes are diagnoses, one-eighth (12.5%) symptom codes, and the remainder come from a range of Read chapters. There was no readily detectable consistent order in which codes were displayed. Velocity coding, whereby commonly-used codes are placed higher in the picking list, results in variation between practices even where they have the same brand of computer system. Current systems for clinical coding promote diversity rather than consistency of clinical coding. As the UK moves towards an integrated health IT system consistency of coding will become more important. A standardised, limited list of codes for primary care might help address this need.
Getting Started in Classroom Computing.
ERIC Educational Resources Information Center
Ahl, David H.
Written for secondary students, this booklet provides an introduction to several computer-related concepts through a set of six classroom games, most of which can be played with little more than a sheet of paper and a pencil. The games are: 1) SECRET CODES--introduction to binary coding, punched cards, and paper tape; 2) GUESS--efficient methods…
A Computational Model for Observation in Quantum Mechanics.
1987-03-16
Contents include: the interferometer experiment; the EPR paradox experiment; the computational model, an overview; implementation; code for the EPR paradox experiment; code for the double slit interferometer experiment; and conclusions. ...particle run counter to fact. The EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden
A Combinatorial Geometry Computer Description of the XR311 Vehicle
1978-04-01
cards or magnetic tape. The shot line output of the GRID subroutine of the GIFT code is also stored on magnetic tape for future vulnerability...descriptions as processed by the Geometric Information For Targets (GIFT) computer code. This report documents the COM-GEOM target description for all...72, March 1974. L.W. Bains and M.J. Reisinger, "The GIFT Code User Manual, Vol. 1, Introduction and Input Requirements," Ballistic Research
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
Experimental aerothermodynamic research of hypersonic aircraft
NASA Technical Reports Server (NTRS)
Cleary, Joseph W.
1987-01-01
The 2-D and 3-D advanced computer codes being developed for use in the design of such hypersonic aircraft as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide the hypersonic experimental data base required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics necessary for these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5 foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
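The first acceleration technique named, replacing linear searches with binary versions, is easy to illustrate. The sketch below contrasts the two on a sorted grid; the energy-bin lookup context is a hypothetical stand-in for the profiled ITS subroutines.

```python
import bisect

energies = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0]   # sorted grid; hypothetical values

def find_bin_linear(grid, e):
    """O(n) scan, as in the original routines."""
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    raise ValueError("value outside grid")

def find_bin_binary(grid, e):
    """O(log n) replacement via bisection."""
    i = bisect.bisect_right(grid, e) - 1
    if not 0 <= i < len(grid) - 1:
        raise ValueError("value outside grid")
    return i

assert find_bin_linear(energies, 1.3) == find_bin_binary(energies, 1.3) == 3
```

For lookups buried in a particle-tracking inner loop, the change from O(n) to O(log n) per call is the kind of modification that contributes to the reported speed-up factors.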
BRYNTRN: A baryon transport model
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.
1989-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
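As a hedged illustration of the straight-ahead approximation marched in coarse spatial increments (the abstract's 1 cm steps in H2O), the sketch below integrates a pure-removal attenuation equation depth by depth. The cross section is hypothetical, and the real interaction data base and secondary-particle source terms of BRYNTRN are omitted.

```python
import numpy as np

# Minimal straight-ahead march: d(phi)/dx = -sigma * phi, in 1 cm steps
sigma = 0.05              # macroscopic removal cross section, 1/cm (hypothetical)
dx, depth = 1.0, 30.0     # step size and total depth, cm
x = np.arange(0.0, depth + dx, dx)
phi = np.empty_like(x)
phi[0] = 1.0              # unit boundary flux
for i in range(1, len(x)):
    phi[i] = phi[i - 1] * np.exp(-sigma * dx)   # exact step for pure removal

print(phi[-1], np.exp(-sigma * depth))          # marching matches the closed form
```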
An Analysis of CPA Firm Recruiters' Perceptions of Online Masters of Accounting Degrees
ERIC Educational Resources Information Center
Metrejean, Eddie; Noland, Thomas G.
2011-01-01
Online education continues to grow at a rapid pace. Assessment of the effectiveness of online programs is needed to differentiate legitimate programs from diploma mills. The authors examined the perceptions of CPA firm recruiters on whether an online Master of Accounting (MACC) matters in the hiring decision. Results show that recruiters do not…
PsyScript: a Macintosh application for scripting experiments.
Bates, Timothy C; D'Oliveiro, Lawrence
2003-11-01
PsyScript is a scriptable application allowing users to describe experiments in Apple's compiled high-level object-oriented AppleScript language, while still supporting millisecond or better within-trial event timing (delays can be in milliseconds or refresh-based, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling and stimulus randomization and sampling, as well as more specialized tasks, such as adaptive testing. Advanced features include support for the BBox serial port button box, as well as a low-cost USB-based digital I/O card for millisecond timing, recording of any number and types of responses within a trial, novel responses, such as graphics tablet drawing, and use of the Macintosh sound facilities to provide an accurate voice key, saving voice responses to disk, scriptable image creation, support for flicker-free animation, and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/~tim/psyscript/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
NASA Astrophysics Data System (ADS)
Gilman, Jessica B.; Kuster, William C.; Goldan, Paul D.; Herndon, Scott C.; Zahniser, Mark S.; Tucker, Sara C.; Brewer, W. Alan; Lerner, Brian M.; Williams, Eric J.; Harley, Robert A.; Fehsenfeld, Fred C.; Warneke, Carsten; de Gouw, Joost A.
2009-04-01
An extensive set of volatile organic compounds (VOCs) and other gas phase species were measured in situ aboard the NOAA R/V Ronald H. Brown as the ship sailed in the Gulf of Mexico and the Houston and Galveston Bay (HGB) area as part of the Texas Air Quality (TexAQS)/Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) conducted from July-September 2006. The magnitudes of the reactivities of CH4, CO, VOCs, and NO2 with the hydroxyl radical, OH, were determined in order to quantify the contributions of these compounds to potential ozone formation. The average total OH reactivity (ROH,TOTAL) increased from 1.01 s-1 in the central gulf to 10.1 s-1 in the HGB area as a result of the substantial increase in the contribution from VOCs and NO2. The increase in the measured concentrations of reactive VOCs in the HGB area compared to the central gulf was explained by the impact of industrial emissions, the regional distribution of VOCs, and the effects of local meteorology. By compensating for the effects of boundary layer mixing, the diurnal profiles of the OH reactivity were used to characterize the source signatures and relative magnitudes of biogenic, anthropogenic (urban + industrial), and oxygenated VOCs as a function of the time of day. The source of reactive oxygenated VOCs (e.g., formaldehyde) was determined to be almost entirely from secondary production. The secondary formation of oxygenated VOCs, in addition to the continued emissions of reactive anthropogenic VOCs, served to sustain elevated levels of OH reactivity throughout the time of peak ozone production.
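The total OH reactivity quoted here is, by definition, the sum over species of each OH rate coefficient times the species concentration. A minimal sketch of that bookkeeping follows; the rate constants and mixing ratios are representative textbook-scale values, not the campaign measurements.

```python
# OH reactivity of species X: R_X = k_(OH+X) * [X], in s^-1
AIR = 2.46e19                     # molecules cm^-3 at ~298 K and 1 atm

# (k in cm^3 molecule^-1 s^-1, mixing ratio in ppb) -- representative values only
species = {
    "CO":       (2.4e-13, 150.0),
    "CH4":      (6.4e-15, 1800.0),
    "NO2":      (1.1e-11, 5.0),
    "isoprene": (1.0e-10, 1.0),
}

r_total = 0.0
for name, (k, ppb) in species.items():
    conc = ppb * 1e-9 * AIR       # convert ppb to molecules cm^-3
    r = k * conc
    r_total += r
    print(f"{name:9s} {r:6.3f} s^-1")
print(f"total     {r_total:6.3f} s^-1")
```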
Marine aerosol distribution and variability over the pristine Southern Indian Ocean
NASA Astrophysics Data System (ADS)
Mallet, Paul-Étienne; Pujol, Olivier; Brioude, Jérôme; Evan, Stéphanie; Jensen, Andrew
2018-06-01
This paper presents an 8-year (2005-2012 inclusive) study of the marine aerosol distribution and variability over the Southern Indian Ocean, precisely in the area {10°S-40°S; 50°E-110°E}, which has been identified as one of the most pristine regions of the globe. A large dataset consisting of satellite data (POLDER, CALIOP), AERONET measurements at Saint-Denis (French Réunion Island), and model reanalysis (MACC) has been used. In spite of a positive bias of about 0.05 between the AOD (aerosol optical depth) given by POLDER and MACC on one hand and the AOD measured by AERONET on the other, consistent results for aerosol distribution and variability over the area considered have been obtained. First, aerosols are mainly confined below 2 km asl (above sea level) and are dominated by sea salt, especially in the center of the area of interest, with AOD ≤ 0.1. This zone is the most pristine and is associated with the position of the Mascarene anticyclone. There, the direct radiative effect is assessed at around -9 W m-2 at the top of the atmosphere, and probability density functions of the AODs are leptokurtic lognormal functions without any significant seasonal variation. It is also suggested that the Madden-Julian oscillation impacts sea salt emissions in the northern part of the area considered by modifying the state of the ocean surface. Finally, this area is surrounded in the northeast and the southwest by seasonal Australian and South African intrusions (AOD > 0.1); throughout the year, the ITCZ seems to limit continental contaminations from Asia. Due to the long period of time considered (almost a decade), this paper completes and strengthens results of studies based on observations performed during previous specific field campaigns.
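The distributional claim, leptokurtic lognormal AOD probability density functions, can be checked mechanically on any AOD sample. The sketch below does so on synthetic draws: it recovers maximum-likelihood lognormal parameters and computes the excess kurtosis; all numbers are synthetic, not POLDER, MACC, or AERONET values.

```python
import numpy as np

rng = np.random.default_rng(0)
aod = rng.lognormal(mean=np.log(0.08), sigma=0.35, size=5000)  # synthetic AODs

# Maximum-likelihood lognormal parameters: mean and std of log(AOD)
mu, sigma = np.log(aod).mean(), np.log(aod).std()
print(f"median AOD = {np.exp(mu):.3f}, geometric std = {np.exp(sigma):.2f}")

# Excess kurtosis of the sample (> 0 indicates a leptokurtic distribution)
z = (aod - aod.mean()) / aod.std()
print(f"excess kurtosis = {(z**4).mean() - 3.0:+.2f}")
```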
Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model
2014-09-19
…the numerical code OpenFOAM®. 1 Introduction: Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in… Simulations were performed using the OpenFOAM-2.2.2 computational code, utilizing a cell-center-based finite-volume method on an unstructured numerical grid. …the mean Courant number is always below 0.4. The turbulence models used were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM…
Application of CARS to scramjet combustion
NASA Technical Reports Server (NTRS)
Antcliff, R. R.
1987-01-01
A coherent anti-Stokes Raman spectroscopy (CARS) instrument has been developed for simultaneously measuring temperature and N2-O2 species concentrations in hostile flame environments. A folded BOXCARS arrangement was employed to obtain high spatial resolution. Polarization discrimination against the nonresonant background decreased the lower limits of O2 detectivity. The instrument has been primarily employed for validation of computational fluid dynamics computer-model codes. Comparisons have been made to both the CHARNAL and TEACH codes on a hydrogen diffusion flame, with good results.
Laser Signature Prediction Using The VALUE Computer Program
NASA Astrophysics Data System (ADS)
Akerman, Alexander; Hoffman, George A.; Patton, Ronald
1989-09-01
A variety of enhancements are being made to the 1976-vintage LASERX computer code. These include: surface characterization with BRDF tabular data; specular reflection from transparent surfaces; generation of glint direction maps; generation of relative range imagery; an interface to the LOWTRAN atmospheric transmission code; an interface to the LEOPS laser sensor code; and user-friendly menu prompting for easy setup. Versions of VALUE have been written for both VAX/VMS and PC/DOS computer environments. Outputs have also been revised to be user-friendly and include tables, plots, and images for (1) intensity, (2) cross section, (3) reflectance, (4) relative range, (5) region type, and (6) silhouette.
NASA Technical Reports Server (NTRS)
Biringen, S. H.; Mcmillan, O. J.
1980-01-01
The use of a computer code for the calculation of two-dimensional inlet flow fields in a supersonic free stream and a nonorthogonal mesh-generation code are illustrated by specific examples. Input, output, and program operation and use are given and explained for the case of supercritical inlet operation at a subdesign Mach number (free-stream Mach number M = 2.09) for an isentropic-compression, drooped-cowl inlet. Source listings of the computer codes are also provided.
1985-10-01
Keywords: regions, Com-Geom region identification, GIFT, materials. …the technique of combinatorial geometry (Com-Geom). The Com-Geom data is used as input to the Geometric Information for Targets (GIFT) computer code… This report documents the combinatorial geometry (Com-Geom) target description data, which is the input data for the GIFT computer code.
Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
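A minimal sketch of the radial return algorithm for J2 plasticity with linear isotropic hardening (material constants and names are illustrative, not from the report):

```python
# Sketch: radial return (predictor-corrector) for J2 plasticity with linear
# isotropic hardening. All names and material constants are illustrative.
import numpy as np

G, H, sigma_y0 = 80e9, 1e9, 250e6  # shear modulus, hardening modulus, yield stress

def radial_return(s_trial_dev, eps_p_bar):
    """Map a trial deviatoric stress (3x3) back to the yield surface.

    Returns the corrected deviatoric stress and the updated equivalent
    plastic strain."""
    q_trial = np.sqrt(1.5 * np.tensordot(s_trial_dev, s_trial_dev))
    f_trial = q_trial - (sigma_y0 + H * eps_p_bar)   # yield function
    if f_trial <= 0.0:                               # elastic step
        return s_trial_dev, eps_p_bar
    dgamma = f_trial / (3.0 * G + H)                 # plastic multiplier
    n = 1.5 * s_trial_dev / q_trial                  # flow direction
    s_dev = s_trial_dev - 2.0 * G * dgamma * n       # return along the radius
    return s_dev, eps_p_bar + dgamma
```

The consistency condition q_trial - 3G*dgamma = sigma_y0 + H*(eps_p_bar + dgamma) gives the closed-form plastic multiplier used above, which is what makes the radial return so inexpensive.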
Development and application of GASP 2.0
NASA Technical Reports Server (NTRS)
Mcgrory, W. D.; Huebner, L. D.; Slack, D. C.; Walters, R. W.
1992-01-01
GASP 2.0 represents a major new release of the computational fluid dynamics code in wide use by the aerospace community. The authors have spent the last two years analyzing the strengths and weaknesses of the previous version of the finite-rate chemistry, Navier-Stokes solution algorithm. The result is a completely redesigned computer code that offers two to four times the performance of previous versions while requiring as little as one quarter of the memory. In addition to the improvements in efficiency over the original code, Version 2.0 contains many new features. A brief discussion of the improvements made to GASP and an application using GASP 2.0 that demonstrates some of the new features are presented.
Sentinel-2: presentation of the CAL/VAL commissioning phase
NASA Astrophysics Data System (ADS)
Trémas, Thierry L.; Déchoz, Cécile; Lacherade, Sophie; Nosavan, Julien; Petrucci, Beatrice
2015-10-01
In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) has developed the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbits. Sentinel-2 offers a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). The first satellite, Sentinel-2A, was launched on June 22, 2015, from Kourou, French Guiana. In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to ensure the CAL/VAL commissioning phase for image quality aspects. This paper first provides an overview of the Sentinel-2 system after launch. The article then focuses on the means implemented and activated at CNES to perform the in-orbit commissioning, and on the availability and performance of the different devices involved in the ground segment: the GPP, in charge of producing the Level 1 files; the "radiometric unit", which processes sensitivity parameters; the "geometric unit", in charge of fitting the images onto a reference map; MACCS, which produces Level 2A files (computing reflectances at the bottom of the atmosphere); and the TEC-S2, which coordinates all the previous software and drives a database gathering the incoming Level 0 files and the processed Level 1 files.
Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.
2011-01-01
This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST on a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows operating system but are extensible to Linux.
An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 2: FEMNAS user guide
NASA Technical Reports Server (NTRS)
Manhardt, Paul D.; Orzechowski, J. A.; Baker, A. J.
1992-01-01
This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.
NASA Technical Reports Server (NTRS)
Sozen, Mehmet
2003-01-01
In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under a chemical equilibrium assumption is described, along with the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Barigye, S J; Marrero-Ponce, Y; Martínez López, Y; Martínez Santiago, O; Torrens, F; García Domenech, R; Galvez, J
2013-01-01
Versatile event-based approaches for the definition of novel information theory-based indices (IFIs) are presented. An event in this context is the criterion followed in the "discovery" of molecular substructures, which in turn serve as the basis for the construction of the generalized incidence and relations frequency matrices, Q and F, respectively. From the resultant F, Shannon's, mutual, conditional and joint entropy-based IFIs are computed. In previous reports, an event named connected subgraphs was presented. The present study is an extension of this notion, in which we introduce other events, namely: terminal paths, vertex path incidence, quantum subgraphs, walks of length k, Sachs subgraphs, MACCS, E-state and substructure fingerprints and, finally, Ghose and Crippen atom types for hydrophobicity and refractivity. Moreover, we define magnitude-based IFIs, introducing the use of the magnitude criterion in the definition of mutual, conditional and joint entropy-based IFIs. We also discuss the use of information-theoretic parameters as a measure of the dissimilarity of codified structural information of molecules. Finally, a comparison of the statistics for QSPR models obtained with the proposed IFIs and DRAGON's molecular descriptors for two physicochemical properties, log P and log K, of 34 derivatives of 2-furylethylenes demonstrates similar or better predictive ability than the latter.
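The entropy-based indices named above reduce to standard Shannon-type definitions over the relations frequency matrix F; a minimal sketch (a random matrix stands in for a real F):

```python
# Sketch: Shannon, joint, conditional, and mutual entropies computed from a
# relations frequency matrix F; the matrix here is a random stand-in.
import numpy as np

F = np.random.default_rng(1).integers(1, 10, size=(4, 6)).astype(float)
P = F / F.sum()                        # joint probabilities p(x, y)
px, py = P.sum(axis=1), P.sum(axis=0)  # marginal distributions

H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy, bits
H_xy = H(P.ravel())            # joint entropy H(X,Y)
H_x, H_y = H(px), H(py)        # marginal (Shannon) entropies
H_y_given_x = H_xy - H_x       # conditional entropy H(Y|X)
I_xy = H_x + H_y - H_xy        # mutual information I(X;Y)
print(H_x, H_y, H_xy, H_y_given_x, I_xy)
```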
Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas
NASA Astrophysics Data System (ADS)
Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
2004-01-01
The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows the division of complex flow domains to permit optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC-based, using an OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have proven to be a valuable resource for seal development for future air-breathing and space propulsion engines.
Sentinel 2: implementation of the means and methods for the CAL/VAL commissioning phase
NASA Astrophysics Data System (ADS)
Trémas, Thierry L.; Déchoz, Cécile; Lacherade, Sophie; Nosavan, Julien; Petrucci, Beatrice; Martimort, P.; Isola, Claudia
2013-10-01
In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbits. Sentinel-2 will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). The first satellite is planned to be launched in late 2014. In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to ensure the CAL/VAL commissioning phase. This paper first provides an overview of the Sentinel-2 system and the image products delivered by the ground processing. The paper then presents the ground segment, presently under preparation at CNES, and the various devices that compose it: the GPP, in charge of producing the Level 1 files; the "radiometric unit", which processes sensitivity parameters; the "geometric unit", in charge of fitting the images onto a reference map; MACCS, which will produce Level 2A files (computing reflectances at the bottom of the atmosphere); and the TEC-S2, which will coordinate all the previous software and drive a database gathering the incoming Level 0 files and the processed Level 1 files.
NASA Technical Reports Server (NTRS)
Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A computer code is presented for the prediction of off-design axial-flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage-stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The code has options for performance estimation with (1) gas mixtures and (2) gas-water droplet mixtures, and can therefore take into account the humidity present in ambient conditions. A test case illustrates the method of using the code. The code follows closely the methodology and architecture of the NASA-STGSTK code for the estimation of axial-flow compressor performance with air flow.
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not restricted to one passive THz device: it can be applied to other such devices and to active THz imaging systems as well. We applied the code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to the various spatial filters. The computer code allows the number of pixels of processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm in imagery produced by passive THz devices that captured objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and it is a very promising solution for the security problem.
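Generic spatial filtering of a noisy image can be sketched as follows; the scipy filters here are stand-ins for the authors' proprietary spatial filters, and the test image is synthetic:

```python
# Sketch: denoise a small synthetic "passive THz" image with two generic
# spatial filters. These scipy filters are stand-ins for the custom filters
# described in the paper.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
scene = np.zeros((64, 96))
scene[20:40, 30:50] = 1.0                      # hidden-object silhouette
noisy = scene + rng.normal(0.0, 0.5, scene.shape)

median = ndimage.median_filter(noisy, size=5)  # robust to speckle noise
smooth = ndimage.gaussian_filter(median, sigma=1.5)
print(f"mean abs error: noisy {np.abs(noisy - scene).mean():.3f}"
      f" -> filtered {np.abs(smooth - scene).mean():.3f}")
```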
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
NASA Technical Reports Server (NTRS)
Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank
1992-01-01
A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.
A Computer Program for Flow-Log Analysis of Single Holes (FLASH)
Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.
2011-01-01
A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
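Under Thiem-type steady radial flow assumptions, each layer's inflow changes in proportion to its transmissivity when the borehole head changes, so differencing ambient and stressed profiles yields relative transmissivities. A sketch of that idea (illustrative numbers, not the exact FLASH equations):

```python
# Sketch: estimate relative layer transmissivities from ambient and stressed
# borehole flow profiles, assuming steady Thiem-type radial flow per layer:
#   Q_i = alpha * T_i * (h_i - h_w), so dQ_i = alpha * T_i * s for drawdown s.
# Numbers are illustrative, not from the FLASH documentation.
import numpy as np

q_ambient = np.array([0.5, -0.2, -0.3])      # layer inflows, L/min (ambient)
q_stressed = np.array([2.9, 1.0, 2.1])       # layer inflows, L/min (pumping)
drawdown = 3.0                               # borehole drawdown, m

t_rel = (q_stressed - q_ambient) / drawdown  # proportional to T_i
t_frac = t_rel / t_rel.sum()                 # fraction of total transmissivity
print("transmissivity fractions:", np.round(t_frac, 2))
```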
Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals
NASA Technical Reports Server (NTRS)
Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.
1991-01-01
The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.
Evolvix BEST Names for semantic reproducibility across code2brain interfaces
Scheuer, Katherine S.; Keel, Seth A.; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C.; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G.; Moog, Cecilia L.; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist‐Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda‐Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L.; Freiberg, Erika; Waters, Noah P.; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M.; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha
2016-01-01
Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general‐purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long‐term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder‐brains to reader‐brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. PMID:27918836
Report from Hawai'i: The Rising Tide of Arts Education in the Islands
ERIC Educational Resources Information Center
Wood, Paul
2005-01-01
The establishment of Maui Arts & Cultural Center (MACC), a community arts facility that prioritizes education at the top of its mission, has been a significant factor in the growth of arts education in Hawai'i. This article describes the role such a facility can play in the kind of educational reform that people envision, and the author's…
In this paper, impact of meteorology derived from the Weather, Research and Forecasting (WRF)– Non–hydrostatic Mesoscale Model (NMM) and WRF–Advanced Research WRF (ARW) meteorological models on the Community Multiscale Air Quality (CMAQ) simulations for ozone and its related prec...
Turbofan noise generation. Volume 2: Computer programs
NASA Technical Reports Server (NTRS)
Ventres, C. S.; Theobald, M. A.; Mark, W. D.
1982-01-01
The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise-source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.
Turbofan noise generation. Volume 2: Computer programs
NASA Astrophysics Data System (ADS)
Ventres, C. S.; Theobald, M. A.; Mark, W. D.
1982-07-01
The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise-source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.
Towards Reproducibility in Computational Hydrology
NASA Astrophysics Data System (ADS)
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit
2017-04-01
Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, to evolve (or reject) hypotheses and models of how environmental systems function, and to move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produce published results are not regularly made available, and even if they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al. 2016 [1], we argue that a cultural change is required in the computational hydrological community in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as the example application area, we believe that our conclusions are of value to the wider environmental and geoscience community as far as the use of code and models for scientific advancement is concerned. References: [1] Hutton, C., T. Wagener, J. Freer, D. Han, C. Duffy, and B. Arheimer (2016), Most computational hydrology is not reproducible, so is it really science?, Water Resour. Res., 52, 7548-7555, doi:10.1002/2016WR019285. [2] Ceola, S., et al. (2015), Virtual laboratories: New opportunities for collaborative water science, Hydrol. Earth Syst. Sci. Discuss., 11(12), 13443-13478, doi:10.5194/hessd-11-13443-2014.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest-descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield-thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
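The structure of such a procedure can be sketched as a generic constrained minimization; scipy's SLSQP stands in for DOPEX's steepest-descent scheme, and all densities, attenuation lengths, and dose constants below are invented:

```python
# Sketch: minimize shield weight subject to dose constraints, using the
# exponential dose-thickness relation D(t) = D0 * exp(-t / lam) per direction.
# scipy's SLSQP stands in for DOPEX's steepest-descent scheme; all numbers
# are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rho = np.array([7.8, 1.0])     # layer densities (arbitrary units)
D0 = np.array([100.0, 40.0])   # unattenuated dose per principal direction
lam = np.array([3.0, 8.0])     # attenuation lengths
D_limit = 1.0                  # allowed dose in each direction

weight = lambda t: rho @ t
dose_margin = lambda t: D_limit - D0 * np.exp(-t / lam)  # >= 0 when feasible

res = minimize(weight, x0=np.array([5.0, 5.0]),
               constraints={"type": "ineq", "fun": dose_margin},
               bounds=[(0, None)] * 2, method="SLSQP")
print("optimal thicknesses:", np.round(res.x, 2))
```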
A Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
Turbofan forced mixer-nozzle internal flowfield. Volume 2: Computational fluid dynamic predictions
NASA Technical Reports Server (NTRS)
Werle, M. J.; Vasta, V. N.
1982-01-01
A general program was conducted to develop and assess a computational method for predicting the flow properties in a turbofan forced-mixer duct. The detailed assessment of the resulting computer code is presented. It was found that the code provided excellent predictions of the kinematics of the mixing process throughout the entire length of the mixer nozzle. The thermal mixing process between the hot core and cold fan flows was found to be well represented in the low-speed portion of the flowfield.
McKenzie, Kirsten; Walker, Sue; Tong, Shilu
It remains unclear whether the change from a manual to an automated coding system (ACS) for deaths has significantly affected the consistency of Australian mortality data. The underlying causes of 34,000 deaths registered in 1997 in Australia were dual coded: in ICD-9 manually, and by using an automated computer coding program. The diseases most affected by the change from manual coding to ACS were senile/presenile dementia and pneumonia. The most common disease to which a manually assigned underlying cause of senile dementia was coded with ACS was unspecified psychoses (37.2%). Only 12.5% of codes assigned by ACS as senile dementia were coded the same by manual coders. This study indicates some important differences in mortality rates when comparing mortality data that have been coded manually with those coded using an automated computer coding program. These differences may be related both to the different interpretation of ICD coding rules between manual and automated coding, and to different co-morbidities or co-existing conditions among demographic groups.
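Consistency between manual and automated coding can be quantified with percent agreement and Cohen's kappa; a minimal sketch on toy labels:

```python
# Sketch: percent agreement and Cohen's kappa between manual and automated
# cause-of-death codes; the label lists below are toy data.
from collections import Counter

manual = ["dementia", "pneumonia", "dementia", "mi", "pneumonia", "mi"]
auto   = ["psychosis", "pneumonia", "dementia", "mi", "mi", "mi"]

n = len(manual)
p_obs = sum(m == a for m, a in zip(manual, auto)) / n   # observed agreement
cm, ca = Counter(manual), Counter(auto)
p_exp = sum(cm[k] * ca[k] for k in cm) / n**2           # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"agreement={p_obs:.2f}, kappa={kappa:.2f}")
```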
NASA Technical Reports Server (NTRS)
Chaderjian, N. M.
1986-01-01
A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.
Parallel community climate model: Description and user`s guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed, along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
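The decomposition strategy reduces to tiling the latitude-longitude grid into patches and assigning each processor a distinct subset; a pure-Python stand-in for the message-passing implementation (sizes are illustrative):

```python
# Sketch: decompose a lat-lon grid into rectangular patches and assign each
# processor a distinct patch, as in the strategy described above. Patch
# physics then needs only processor-local data. Sizes are illustrative.
NLAT, NLON, P = 64, 128, 16   # grid size and processor count
PLAT, PLON = 4, 4             # patch tiling (PLAT * PLON == P here)

def patch(rank):
    """Return the (lat, lon) index ranges owned by one processor."""
    i, j = divmod(rank, PLON)
    dlat, dlon = NLAT // PLAT, NLON // PLON
    return (i * dlat, (i + 1) * dlat), (j * dlon, (j + 1) * dlon)

for r in range(P):
    (la0, la1), (lo0, lo1) = patch(r)
    print(f"rank {r:2d}: lats {la0}:{la1}, lons {lo0}:{lo1}")
```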
Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual
NASA Technical Reports Server (NTRS)
Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.
1986-01-01
The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced-technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing the aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described, as well as user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.
NASA Technical Reports Server (NTRS)
Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.
1993-01-01
This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3), the BDM Corporation. Final Technical Report, February 1981 - July 1983. …the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by…
Lee, Michael S; Shlofmitz, Evan; Mansourian, Pejman; Sethi, Sanjum; Shlofmitz, Richard A
2016-11-01
We evaluated the relationship between gender and angiographic and clinical outcomes in patients with severely calcified lesions who underwent orbital atherectomy. Female gender is associated with increased risk of adverse clinical events after percutaneous coronary intervention (PCI). Severe coronary artery calcification increases the complexity of PCI and the risk of adverse cardiac events. Orbital atherectomy is effective in plaque modification, which facilitates stent delivery and expansion. Whether gender differences exist after orbital atherectomy is unclear. We retrospectively analyzed 458 consecutive real-world patients (314 males and 144 females) from three centers who underwent orbital atherectomy. The primary endpoint was the major adverse cardiac and cerebrovascular event (MACCE) rate, defined as the composite of death, myocardial infarction (MI), target-vessel revascularization (TVR), and stroke, at 30 days. The primary endpoint of MACCE was low and similar in females and males (0.7% vs 2.9%; P=.14). The individual endpoints of death (0.7% vs 1.6%; P=.43), MI (0.7% vs 1.3%; P=.58), TVR (0% vs 0%; P>.99), and stroke (0% vs 0.3%; P=.50) were low in both groups and did not differ. Angiographic complications were low: perforation (0.8% vs 0.7%; P>.90), dissection (0.8% vs 1.1%; P=.80), and no-reflow (0.8% vs 0.7%; P>.90). Plaque modification with orbital atherectomy was safe and provided similar angiographic and clinical outcomes in females and males. Randomized trials with longer-term follow-up are needed to support our results.
Breuckmann, Frank; Hochadel, Matthias; Darius, Harald; Giannitsis, Evangelos; Münzel, Thomas; Maier, Lars S; Schmitt, Claus; Schumacher, Burghard; Heusch, Gerd; Voigtländer, Thomas; Mudra, Harald; Senges, Jochen
2015-08-01
We investigated the current management of unstable angina pectoris (UAP) in certified chest pain units (CPUs) in Germany and focused on European Society of Cardiology (ESC) guideline adherence in the timing of invasive strategies or the choice of conservative treatment options. More specifically, we analyzed differences in clinical outcome with respect to guideline adherence. Prospective data from 1400 UAP patients were collected. Analyses of high-risk criteria with indication for invasive management and 3-month clinical outcome data were performed. Guideline adherence was tested for a primarily conservative strategy as well as for percutaneous coronary intervention (PCI) within <24 h and <72 h after admission. Overall, guideline-conforming management was performed in 38.2%. In UAP patients at risk, undertreatment caused by insufficient consideration of risk criteria was evident in 78%. Reciprocally, overtreatment in the absence of adequate risk markers was performed in 27%, whereas a guideline-conforming primarily conservative strategy was chosen in 73% of the low-risk patients. Overall, the 3-month rate of major adverse coronary and cerebrovascular events (MACCE) was low (3.6%). Nonetheless, guideline-conforming treatment was associated with significantly lower MACCE rates (1.6% vs. 4.0%, p<0.05). The data suggest inadequate adherence to ESC guidelines in nearly two thirds of the patients, particularly in those at high to intermediate risk with secondary risk factors, emphasizing the need for further attention to consistent risk profiling in the CPU and its certification process. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations
NASA Astrophysics Data System (ADS)
Ding, Jianmin; Lyczkowski, R. W.; Burge, S. W.
1993-02-01
A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B&W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinates computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description is given of the quality control used to assess the accuracy of the new code, and of the validation using measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC). It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to help reduce or eliminate the environmental and economic barriers that limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.
Performance of a parallel code for the Euler equations on hypercube computers
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.
1990-01-01
The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
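A sketch of what such an execution-time model might look like (the functional form and all numbers are invented; the report's actual model is not reproduced here):

```python
# Sketch: fit a simple execution-time model T(n,p) = a*n/p + b*log2(p) + c
# to measured run times; the form and data are invented for illustration.
import numpy as np

runs = [  # (grid points n, processors p, measured seconds)
    (16384, 4, 42.1), (16384, 16, 11.8), (65536, 16, 43.0), (65536, 64, 12.5),
]
A = np.array([[n / p, np.log2(p), 1.0] for n, p, _ in runs])
t = np.array([s for *_, s in runs])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)  # least-squares fit
a, b, c = coef
print(f"T(n,p) ~ {a:.2e}*n/p + {b:.2f}*log2(p) + {c:.2f}")
```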
NASA Technical Reports Server (NTRS)
Hicks, Raymond M.; Cliff, Susan E.
1991-01-01
Full-potential, Euler, and Navier-Stokes computational fluid dynamics (CFD) codes were evaluated for use in analyzing the flow field about airfoils sections operating at Mach numbers from 0.20 to 0.60 and Reynolds numbers from 500,000 to 2,000,000. The potential code (LBAUER) includes weakly coupled integral boundary layer equations for laminar and turbulent flow with simple transition and separation models. The Navier-Stokes code (ARC2D) uses the thin-layer formulation of the Reynolds-averaged equations with an algebraic turbulence model. The Euler code (ISES) includes strongly coupled integral boundary layer equations and advanced transition and separation calculations with the capability to model laminar separation bubbles and limited zones of turbulent separation. The best experiment/CFD correlation was obtained with the Euler code because its boundary layer equations model the physics of the flow better than the other two codes. An unusual reversal of boundary layer separation with increasing angle of attack, following initial shock formation on the upper surface of the airfoil, was found in the experiment data. This phenomenon was not predicted by the CFD codes evaluated.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.
Computational study of duct and pipe flows using the method of pseudocompressibility
NASA Technical Reports Server (NTRS)
Williams, Robert W.
1991-01-01
A viscous, three-dimensional, incompressible, Navier-Stokes Computational Fluid Dynamics code employing pseudocompressibility is used for the prediction of laminar primary and secondary flows in two 90-degree bends of constant cross section. Under study are a square cross section duct bend with 2.3 radius ratio and a round cross section pipe bend with 2.8 radius ratio. Sensitivity of predicted primary and secondary flow to inlet boundary conditions, grid resolution, and code convergence is investigated. Contour and velocity versus spanwise coordinate plots comparing prediction to experimental data flow components are shown at several streamwise stations before, within, and after the duct and pipe bends. Discussion includes secondary flow physics, computational method, computational requirements, grid dependence, and convergence rates.
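For reference, the pseudocompressibility (artificial compressibility) method replaces the incompressible continuity equation with a pseudo-time pressure evolution equation; in the standard Chorin-type formulation, with β the artificial compressibility parameter:

```latex
% Artificial compressibility: continuity gains a pseudo-time pressure term,
% which vanishes as the solution converges in pseudo-time tau.
\frac{\partial p}{\partial \tau} + \beta\, \nabla \cdot \mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial \tau}
  + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\nabla p + \nu \nabla^{2} \mathbf{u}
```

As the pseudo-time derivative of pressure vanishes at convergence, the divergence-free constraint is recovered, which is what allows compressible-flow marching schemes to be reused for incompressible problems.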
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computer (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short-range force, and the direct-summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 and find excellent agreement.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, though we have not been able to compile the code with the present version of HPF compilers.
Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
This Quarterly Report deals with the deployment maneuver of a single-axis, vertical constellation with three masses. A new, easy to handle, computer code that simulates the two-dimensional dynamics of the constellation has been implemented. This computer code is used for designing control laws for the deployment maneuver that minimizes the acceleration level of the low-g platform during the maneuver.
Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components
NASA Technical Reports Server (NTRS)
Tew, Roy C., Jr.; Ibrahim, Mounir B.
2001-01-01
A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.
NASA Technical Reports Server (NTRS)
Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.
1992-01-01
How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means of satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially marching, parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed while calculating reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is determined analytically from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar viscous sublayer. Consequently, a substantially larger computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are verified using: (1) Eckert reference method solutions, (2) the experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
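Barnwell's defect wall-function formulation is more elaborate than can be shown here, but the underlying idea, obtaining the wall shear stress analytically from a wall model instead of resolving the sublayer, can be sketched with the classical incompressible log-law. A minimal example, with the log-law constants as assumed values:

    # Minimal sketch of the wall-function idea (not Barnwell's full defect
    # formulation): given the velocity U at the first grid point a distance y
    # off the wall, solve the log-law  U/u_tau = (1/kappa) ln(y u_tau / nu) + B
    # for the friction velocity u_tau by Newton iteration, then form the wall
    # shear stress  tau_w = rho * u_tau**2  analytically.
    import math

    KAPPA, B = 0.41, 5.0          # standard log-law constants (assumed)

    def friction_velocity(U, y, nu, u0=None, tol=1e-10, itmax=50):
        u = u0 or 0.05 * U        # initial guess
        for _ in range(itmax):
            yplus = y * u / nu
            f  = u * (math.log(yplus) / KAPPA + B) - U   # residual of log-law
            df = (math.log(yplus) + 1.0) / KAPPA + B     # d(residual)/d(u_tau)
            du = f / df
            u -= du
            if abs(du) < tol * u:
                break
        return u

    # Example: air-like flow, U = 30 m/s at y = 1 mm, nu = 1.5e-5 m^2/s
    u_tau = friction_velocity(30.0, 1.0e-3, 1.5e-5)
    tau_w = 1.2 * u_tau**2        # rho = 1.2 kg/m^3
    print(u_tau, tau_w)

Because tau_w comes from this algebraic solve rather than from differencing a resolved sublayer profile, the first grid point can sit in the log region, which is what permits the larger integration step sizes described above.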
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, R. W.; Petrov, Yu. V.
2013-12-03
Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. A proof-of-principle demonstration of this goal was achieved in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes were coupled: the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the US DOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego; the NSTX spherical tokamak at Princeton, New Jersey; and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion and will increase design capabilities for achieving the goals of the International Thermonuclear Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
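The Proteus implementation for the Euler and Navier-Stokes equations is far more involved than can be shown here, but the multigrid concept the report studies can be illustrated with a generic two-grid correction scheme for a 1D Poisson problem (a self-contained sketch, not Proteus code):

    # Two-grid correction cycle for u'' = f on [0,1] with Dirichlet
    # boundaries: damped-Jacobi smoothing, injection restriction, and
    # linear-interpolation prolongation.
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
        # damped-Jacobi sweeps (vectorized, so each sweep uses old values)
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        u = smooth(u, f, h)                         # pre-smoothing
        rc = residual(u, f, h)[::2].copy()          # restrict residual to coarse grid
        ec = smooth(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # coarse-grid "solve"
        e = np.zeros_like(u)                        # prolongate the correction
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)                  # post-smoothing

    n = 129
    x = np.linspace(0.0, 1.0, n)
    f = -np.pi**2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
    h = x[1] - x[0]
    u = np.zeros(n)
    for _ in range(20):
        u = v_cycle(u, f, h)
    print("max residual:", np.abs(residual(u, f, h)).max())

The coarse-grid correction removes the smooth error components that Jacobi damps slowly, which is the mechanism behind the reduced iteration counts reported above; the restriction/prolongation work is the overhead the report quantifies.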
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
NASA Astrophysics Data System (ADS)
López-Aparicio, Susana; Guevara, Marc; Thunis, Philippe; Cuvelier, Kees; Tarrasón, Leonor
2017-04-01
This study shows the capabilities of a benchmarking system to identify inconsistencies in emission inventories and to evaluate the reasons behind discrepancies as a means to improve both bottom-up and downscaled emission inventories. Fine-scale bottom-up emission inventories for seven urban areas in Norway are compared with three regional emission inventories, EC4MACS, TNO_MACC-II and TNO_MACC-III, downscaled to the same areas. The comparison shows discrepancies in nitrogen oxides (NOx) and particulate matter (PM2.5 and PM10) when evaluating both total and sectorial emissions. The three regional emission inventories underestimate NOx and PM10 traffic emissions by approximately 20-80% and 50-90%, respectively. The main reasons for the underestimation of PM10 emissions from traffic in the regional inventories are related to non-exhaust emissions due to resuspension, which are included in the bottom-up emission inventories but are missing in the official national emissions, and therefore in the downscaled regional inventories. The benchmarking indicates that the most probable reason behind the underestimation of NOx traffic emissions by the regional inventories is the activity data. The fine-scale NOx traffic emissions from bottom-up inventories are based on the actual traffic volume at the road link and are much higher than the NOx emissions downscaled from national estimates based on fuel sales and on population for the urban areas. We have identified important discrepancies in PM2.5 emissions from wood burning for residential heating among all the inventories. These discrepancies are associated with the assumptions made for the allocation of emissions. In the EC4MACS inventory, such assumptions imply a large underestimation of PM2.5 emissions from the residential combustion sector in urban areas, ranging from 40 to 90% compared with the bottom-up inventories. The study shows that in three of the seven Norwegian cities there is a need for further improvement of the emission inventories.
Baschin, M; Selleng, S; Hummel, A; Diedrich, S; Schroeder, H W; Kohlmann, T; Westphal, A; Greinacher, A; Thiele, T
2018-04-01
Essentials An increasing number of patients requiring surgery receive antiplatelet therapy (APT). We analyzed 181 patients receiving presurgery platelet transfusions to reverse APT. No coronary thrombosis occurred after platelet transfusion. This justifies a prospective trial to test preoperative platelet transfusions to reverse APT. Background Patients receiving antiplatelet therapy (APT) have an increased risk of perioperative bleeding and cardiac adverse events (CAE). Preoperative platelet transfusions may reduce the bleeding risk but may also increase the risk of CAE, particularly coronary thrombosis in patients after recent stent implantation. Objectives To analyze the incidence of perioperative CAE and bleeding in patients undergoing non-cardiac surgery using a standardized management of transfusing two platelet concentrates preoperatively and restarting APT within 24-72 h after surgery. Methods A cohort of consecutive patients on APT treated with two platelet concentrates before non-cardiac surgery between January 2012 and December 2014 was retrospectively identified. Patients were stratified by the risk of major adverse cardiac and cerebrovascular events (MACCE). The primary objective was the incidence of CAE (myocardial infarction, acute heart failure and cardiac troponin T increase). Secondary objectives were the incidences of other thromboembolic events, bleeding, transfusions and mortality. Results Among 181 patients, 88 received aspirin, 21 clopidogrel and 72 dual APT. MACCE risk was high in 63, moderate in 103 and low in 15 patients; 67 had cardiac stents. Ten patients (5.5%; 95% CI, 3.0-9.9%) developed a CAE (three myocardial infarctions, four cardiac failures and three troponin T increases). None was caused by coronary thrombosis. Surgery-related bleeding occurred in 22 patients (12.2%; 95% CI, 8.2-17.7%), making 12 re-interventions necessary (6.6%; 95% CI, 3.8-11.2%). Conclusion Preoperative platelet transfusions and early restart of APT allowed urgent surgery and did not cause coronary thromboses, but non-thrombotic CAEs and re-bleeding occurred. Randomized trials are warranted to test platelet transfusion against other management strategies. © 2018 International Society on Thrombosis and Haemostasis.
Wang, Heyang; Liang, Zhenyang; Li, Yi; Li, Bin; Liu, Junming; Hong, Xueyi; Lu, Xin; Wu, Jiansheng; Zhao, Wei; Liu, Qiang; An, Jian; Li, Linfeng; Pu, Fanli; Ming, Qiang; Han, Yaling
2017-06-01
This study aimed to evaluate the effect of prolonged full-dose bivalirudin infusion in a real-world population with ST-elevation myocardial infarction (STEMI). Subgroup data as well as meta-analyses from randomized clinical trials have shown the potency of postprocedural full-dose infusion (1.75 mg/kg/h) of bivalirudin in attenuating acute stent thrombosis (ST) after primary percutaneous coronary intervention (PCI). In this multicenter retrospective observational study, 2047 consecutive STEMI patients treated with bivalirudin during primary PCI were enrolled in 65 Chinese centers between July 2013 and May 2016. The primary outcome was acute ST, defined as ARC definite/probable within 24 hours after the index procedure, and the secondary endpoints included total ST, major adverse cardiac or cerebral events (MACCE, defined as death, reinfarction, stroke, and target vessel revascularization), and any bleeding at 30 days. Among the 2047 STEMI patients, 1123 (54.9%) were treated with postprocedural bivalirudin full-dose infusion (median 120 minutes) while the other 924 (45.1%) received low-dose (0.25 mg/kg/h) or no postprocedural infusion. A total of three acute STs (0.3%) occurred in STEMI patients with no or low-dose prolonged infusion of bivalirudin, but none was observed in those treated with post-PCI full-dose infusion (0.3% vs 0.0%, P=.092). Outcomes for MACCE (2.1% vs 2.7%, P=.402) and total bleeding (2.1% vs 1.4%, P=.217) at 30 days showed no significant difference between the two groups, and no subacute ST was observed. Post-PCI full-dose bivalirudin infusion is safe and showed a trend toward protection against acute ST in STEMI patients undergoing primary PCI in real-world settings. © 2017 John Wiley & Sons Ltd.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like graphics processing units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved substantial speedups over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieving parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
ACDOS2: an improved neutron-induced dose rate code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagache, J.C.
1981-06-01
To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown, a computer code, ACDOS2, was written that utilizes up-to-date libraries of cross sections and radioisotope decay data. ACDOS2 is written in ANSI FORTRAN IV in order to make it readily adaptable elsewhere.
Corruption in Myanmar - Holding a Country and its People from Economic Prosperity
2014-10-30
censorship laws and freedom of information by banning independent newspapers, thereby repressing efforts towards democracy even further. The SPP... censorship laws, insisting state officials return embezzled funds, signing and ratifying the United Nations Convention against Corruption (UNCAC), and... instill a culture of change. For example, in Malaysia, the government formed the Malaysian Anti-Corruption Commission (MACC), an independent watchdog
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...
Validation of NASA Thermal Ice Protection Computer Codes Part 2 - LEWICE/Thermal
DOT National Transportation Integrated Search
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal 1 (electrothermal de-icing and anti-icing), and ANTICE 2 (hot gas and el...
Numerical solution of Space Shuttle Orbiter flow field including real gas effects
NASA Technical Reports Server (NTRS)
Prabhu, D. K.; Tannehill, J. C.
1984-01-01
The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
Lee, Ming-Chung; Shen, Yu-Chih; Wang, Ji-Hung; Li, Yu-Ying; Li, Tzu-Hsien; Chang, En-Ting; Wang, Hsiu-Mei
2017-01-01
Obstructive sleep apnea (OSA) is associated with adverse cardiovascular outcomes and a high prevalence of anxiety and depression. This study investigated the effects of continuous positive airway pressure (CPAP) on the severity of anxiety and depression in OSA patients with or without coronary artery disease (CAD) and on the rate of cardio- and cerebrovascular events in those with OSA and CAD. This prospective study included patients with moderate-to-severe OSA, with or without a recent diagnosis of CAD; all were started on CPAP therapy. Patients completed the Chinese versions of the Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) at baseline and after 6 months of follow-up. The occurrence of major adverse cardiac and cerebrovascular events (MACCE) was assessed every 3 months for up to 1 year. BAI scores decreased from 8.5 ± 8.4 at baseline to 5.4 ± 6.9 at 6 months in CPAP-compliant OSA patients without CAD (P < 0.05). BAI scores also decreased from 20.7 ± 14.9 to 16.1 ± 14.5 in CPAP-compliant OSA patients with CAD. BDI-II scores decreased in CPAP-compliant OSA patients without CAD (from 11.1 ± 10.7 at baseline to 6.6 ± 9.5 at 6 months) and in CPAP-compliant OSA patients with CAD (from 20.4 ± 14.3 to 15.9 ± 7.3). In addition, 6 months of CPAP treatment produced a large effect size (ES) on BAI and BDI scores in OSA patients with CAD, and a large ES in OSA patients under CPAP treatment overall. In OSA patients with CAD, the occurrence of MACCE was significantly lower in CPAP-compliant patients than in CPAP-noncompliant patients (11% in compliant and 50% in noncompliant; P < 0.05). CPAP improved anxiety and depression in OSA patients regardless of CAD. In OSA patients with CAD, CPAP-compliant patients had a lower 1-year rate of MACCE than CPAP-noncompliant patients.
Computer Description of Black Hawk Helicopter
1979-06-01
Keywords: combinatorial geometry models; Black Hawk helicopter; GIFT computer code; geometric description of targets. The description was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code... The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents...
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
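The exact 2SDQ index mapping and its rate-distortion adjustment are defined in the paper; for orientation, here is plain USDQ together with a toy two-step variant that conveys the flavor of using two step sizes (the threshold, step ratio, and reconstruction offset below are illustrative assumptions):

    # Uniform scalar deadzone quantization (USDQ) and a toy two-step variant.
    import numpy as np

    def usdq(coeffs, delta):
        """Deadzone quantizer: coefficients with |c| < delta map to index 0."""
        return np.sign(coeffs) * np.floor(np.abs(coeffs) / delta)

    def dequant(q, delta, r=0.5):
        """Reconstruction with offset r in (0,1); zero indexes stay zero."""
        return np.sign(q) * (np.abs(q) + r) * delta * (q != 0)

    def two_step(coeffs, delta, thresh, big=2.0):
        """Toy two-step rule: low-magnitude coefficients (the dense
        population) use step `delta`; larger ones use `big * delta`,
        so fewer indexes/symbols are spent on them.  A real scheme
        derives the regime from the index itself rather than a side mask."""
        small = np.abs(coeffs) < thresh
        q = np.where(small, usdq(coeffs, delta), usdq(coeffs, big * delta))
        return q, small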
ITER Simulations Using the PEDESTAL Module in the PTRANSP Code
NASA Astrophysics Data System (ADS)
Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.
2006-10-01
PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that the fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide the boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and the width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A.H. Kritz, Plasma Physics and Controlled Fusion, 45, 1939 (2003). [2] T. Onjun, G. Bateman, A.H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).
Study of SOL in DIII-D tokamak with SOLPS suite of codes.
NASA Astrophysics Data System (ADS)
Pankin, Alexei; Bateman, Glenn; Brennan, Dylan; Coster, David; Hogan, John; Kritz, Arnold; Kukushkin, Andrey; Schnack, Dalton; Snyder, Phil
2005-10-01
The scrape-off layer (SOL) region in the DIII-D tokamak is studied with the SOLPS integrated suite of codes. The SOLPS package includes the 3D multi-species Monte Carlo neutral code EIRENE and the 2D multi-fluid code B2, cross-coupled through the B2-EIRENE interface. The results of SOLPS simulations are used in the integrated modeling of the plasma edge in the DIII-D tokamak with the ASTRA transport code. Parameterized dependences for neutral particle fluxes computed with the SOLPS code are implemented in a model for the H-mode pedestal and ELMs [1] in the ASTRA code. The effects of neutrals on the H-mode pedestal and ELMs are studied in this report. [1] A. Y. Pankin, I. Voitsekhovitch, G. Bateman, et al., Plasma Phys. Control. Fusion 47, 483 (2005).
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
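Approach (a) from the abstract, symbolic differentiation plus code generation, is easy to sketch with SymPy. The example below uses a simple interatomic distance rather than the paper's local-curvature CV, purely to keep it short; the workflow (define the CV symbolically, differentiate with respect to the coordinates, emit code) is the same:

    # Derive a CV and its analytical gradient with SymPy, then generate
    # C expressions (sympy.ccode) or a fast numerical callable (lambdify).
    import sympy as sp

    x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2', real=True)
    coords = (x1, y1, z1, x2, y2, z2)

    cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)   # the CV itself
    grad = [sp.simplify(sp.diff(cv, c)) for c in coords]        # d(CV)/d(coords)

    # Emit compilable C expressions for each derivative component:
    for c, g in zip(coords, grad):
        print(f"dcv_d{c} = {sp.ccode(g)};")

    # Or build a numerical callable for testing against finite differences:
    cv_and_grad = sp.lambdify(coords, [cv] + grad, 'numpy')

This removes the error-prone hand-coding of derivatives that the abstract identifies as the bottleneck when adding new CVs.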
Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacon, L.; Cappello, S.
2008-11-01
Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite volume code PIXIE3D [1,2], which is characterized by many attractive features, notably the generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility to include in the computation the energy transport equation and the extended MHD version of Ohm's law. In addition, the parallel version of the code features excellent scalability properties. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially, and spectral formulation in the other coordinates [3]. Both single and multi-mode simulations are benchmarked, regarding both reversed field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) & references therein.
Computations of the Magnus effect for slender bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1980-01-01
A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.
A 2D electrostatic PIC code for the Mark III Hypercube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.
We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity are advanced in time using a leapfrog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
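A one-dimensional analogue of the PIC cycle described above (the GCPIC domain decomposition and the 2D field solve are omitted) can be written compactly; this is an illustrative serial sketch in normalized units, not the hypercube code:

    # One PIC step: deposit charge on a grid, solve Poisson's equation in
    # Fourier space, gather the field to the particles, leapfrog push.
    # Periodic domain of length L with ng grid cells.
    import numpy as np

    def pic_step(x, v, q_over_m, L, ng, dt):
        dx = L / ng
        # -- charge deposition (nearest-grid-point, for brevity) --
        idx = np.floor(x / dx).astype(int) % ng
        rho = np.bincount(idx, minlength=ng).astype(float)
        rho = rho / rho.mean() - 1.0            # neutralizing background
        # -- field solve: d^2 phi / dx^2 = -rho, in Fourier space --
        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        rho_k = np.fft.fft(rho)
        phi_k = np.zeros_like(rho_k)
        phi_k[1:] = rho_k[1:] / k[1:]**2        # k = 0 mode set to zero
        E = np.fft.ifft(-1j * k * phi_k).real   # E = -dphi/dx
        # -- gather field to particles and leapfrog push --
        v += q_over_m * E[idx] * dt
        x = (x + v * dt) % L
        return x, v

In the GCPIC approach these same stages are distributed: particles are partitioned across processors by spatial subdomain, and the grid solve and particle push communicate only at subdomain boundaries.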
Numerical simulation of turbulent jet noise, part 2
NASA Technical Reports Server (NTRS)
Metcalfe, R. W.; Orszag, S. A.
1976-01-01
Results on the numerical simulation of jet flow fields were used to study the radiated sound field, and in addition, to extend and test the capabilities of the turbulent jet simulation codes. The principal result of the investigation was the computation of the radiated sound field from a turbulent jet. In addition, the computer codes were extended to account for the effects of compressibility and eddy viscosity, and the treatment of the nonlinear terms of the Navier-Stokes equations was modified so that they can be computed in a semi-implicit way. A summary of the flow model and a description of the numerical methods used for its solution are presented. Calculations of the radiated sound field are reported. In addition, the extensions that were made to the fundamental dynamical codes are described. Finally, the current state-of-the-art for computer simulation of turbulent jet noise is summarized.
Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor
NASA Technical Reports Server (NTRS)
Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.
1993-01-01
A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
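At its core, the response/resistance combination described above amounts to evaluating the probability that resistance exceeds response. NESSUS itself uses fast probability integration methods rather than brute-force sampling, but the quantity being computed can be illustrated with a Monte Carlo sketch (the distributions below are assumed for illustration):

    # Component reliability = P(resistance > response), by Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    response   = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)  # stress, MPa
    resistance = rng.normal(loc=420.0, scale=30.0, size=n)              # strength, MPa

    reliability = np.mean(resistance > response)
    print(f"estimated reliability = {reliability:.5f}")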
ERIC Educational Resources Information Center
Clyde, Anne
1999-01-01
Discussion of the Year 2000 (Y2K) problem, the computer-code problem that affects computer programs or computer chips, focuses on the impact on teacher-librarians. Topics include automated library systems, access to online information services, library computers and software, and other electronic equipment such as photocopiers and fax machines.…
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
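Steps (3)-(5) of the procedure can be illustrated with a small sketch: assign distributions to uncertain input parameters, sample them, and propagate the samples through the dose model. The pathway model below is deliberately trivial and made up; RESRAD/RESRAD-BUILD use their own pathway models and stratified (Latin hypercube) sampling:

    # Propagate assumed parameter distributions through a toy dose model.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 10_000

    # assumed parameter distributions (illustrative only)
    soil_conc   = stats.lognorm(s=0.5, scale=1.0).rvs(size=n, random_state=rng)   # pCi/g
    intake_rate = stats.triang(c=0.5, loc=50, scale=100).rvs(size=n, random_state=rng)
    dose_factor = 3.0e-4     # fixed conversion factor, hypothetical units

    dose = soil_conc * intake_rate * dose_factor
    print("mean dose:", dose.mean())
    print("95th percentile:", np.percentile(dose, 95))

The resulting dose distribution, rather than a single deterministic value, is what the probabilistic code versions deliver.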
Computer program BL2D for solving two-dimensional and axisymmetric boundary layers
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1995-01-01
This report presents the formulation, validation, and user's manual for the computer program BL2D. The program is a fourth-order-accurate solution scheme for solving two-dimensional or axisymmetric boundary layers in speed regimes that range from low subsonic to hypersonic Mach numbers. A basic implementation of the transition zone and turbulence modeling is also included. The code is a result of many improvements made to the program VGBLP, which is described in NASA TM-83207 (February 1982), and can effectively supersede it. The code BL2D is designed to be modular, user-friendly, and portable to any machine with a standard fortran77 compiler. The report contains the new formulation adopted and the details of its implementation. Five validation cases are presented. A detailed user's manual with the input format description and instructions for running the code is included. Adequate information is presented in the report to enable the user to modify or customize the code for specific applications.
Sensitivity analysis of the Gupta and Park chemical models on the heat flux by DSMC and CFD codes
NASA Astrophysics Data System (ADS)
Morsa, Luigi; Festa, Giandomenico; Zuppardi, Gennaro
2012-11-01
The present study is the logical continuation of a former paper by the first author in which the influence of the chemical models of Gupta and of Park on the computation of heat flux on the Orion and EXPERT capsules was evaluated. Tests were carried out with the direct simulation Monte Carlo code DS2V and with the computational fluid dynamics (CFD) code H3NS. DS2V implements the Gupta model, while H3NS implements the Park model. In order to compare the effects of the chemical models, the Park model was also implemented in DS2V. The results showed that DS2V and H3NS compute a different composition both in the flow field and on the surface, even when using the same chemical model (Park). Furthermore, DS2V computes, with the two chemical models, different compositions in the flow field but the same composition on the surface, and therefore the same heat flux. In the present study, in order to evaluate the influence of these chemical models also in a CFD code, the Gupta and Park models have been implemented in FLUENT. Tests by DS2V and by FLUENT have been carried out for the EXPERT capsule at an altitude of 70 km and a velocity of 5000 m/s. The capsule experiences a hypersonic, continuum, low-density regime. Owing to the energy level of the flow, the vibration equation, lacking in the original version of FLUENT, has been implemented. The results of the heat flux computation verify that FLUENT is quite sensitive to the Gupta and Park chemical models: at the stagnation point, the percentage difference between the models is about 13%. By contrast, the DS2V results from the two models are practically equivalent.
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
All of the elements used in the Reacting and Multi-Phase (RAMP2) computer code are described in detail. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields.
Code White: A Signed Code Protection Mechanism for Smartphones
2010-09-01
analogous to computer security is the use of antivirus (AV) software. AV software is a brute-force approach to security. The software... these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not...
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
User's manual for semi-circular compact range reflector code
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1986-01-01
A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
Efficient full wave code for the coupling of large multirow multijunction LH grills
NASA Astrophysics Data System (ADS)
Preinhaelter, Josef; Hillairet, Julien; Milanesio, Daniele; Maggiora, Riccardo; Urban, Jakub; Vahala, Linda; Vahala, George
2017-11-01
The full-wave code OLGA, for determining the coupling of a single-row lower hybrid launcher (waveguide grill) to the plasma, is extended to handle multirow multijunction active-passive structures (like the C3 and C4 launchers on TORE SUPRA) by implementing the scattering matrix formalism. The extended code is still computationally fast because of the use of (i) 2D splines of the plasma surface admittance in the accessibility region of the k-space, (ii) high-order Gaussian quadrature rules for the integration of the coupling elements, and (iii) the symmetries of the coupling elements in the multiperiodic structures. The extended OLGA code is benchmarked against the ALOHA-1D, ALOHA-2D and TOPLHA codes for the coupling of the C3 and C4 TORE SUPRA launchers for several plasma configurations derived from reflectometry and interferometry. Unlike nearly all such codes (except the ALOHA-1D code), OLGA does not require large computational resources and can be used routinely in planning experimental runs. In particular, it is shown that the OLGA code correctly handles the coupling of the C3 and C4 launchers over a very wide range of plasma densities in front of the grill.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.
1999-01-01
The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes -Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block and discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(−w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
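The structure of a static kinetics problem with sensitivity analysis can be conveyed with a toy analogue: for first-order decay A → B, the concentration obeys dy/dt = −k y, and the sensitivity s = ∂y/∂k obeys the companion equation ds/dt = −y − k s obtained by differentiating through the rate law. LSENS treats general mechanisms and all three Arrhenius parameters; this sketch shows only the shape of such a calculation:

    # Integrate a first-order decay together with its sensitivity dy/dk.
    from scipy.integrate import solve_ivp

    k = 2.0
    def rhs(t, z):
        y, s = z
        return [-k * y, -y - k * s]      # state equation and sensitivity equation

    sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    y_end, s_end = sol.y[:, -1]
    print("y(2) =", y_end, " dy/dk at t=2 =", s_end)
    # analytical check: y = exp(-k t), so dy/dk = -t exp(-k t)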
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virtanen, E.; Haapalehto, T.; Kouhia, J.
1995-09-01
Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary-side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate the behaviour of the primary side of the steam generator well. On the secondary side both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than were measured in the experiments.
Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2000-01-01
This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement relates to including the probabilistic evaluation of the D-matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the period June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the user's manual are delivered with this report. The performed activities were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.
Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.
2009-01-01
The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.
A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.
1989-01-01
A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
BRYNTRN: A baryon transport computer code, computation procedures and data base
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank
1988-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through arbitrary shield material, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
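To make the straight-ahead approximation concrete, here is a minimal sketch in Python: because particles are assumed to travel only forward, transport reduces to marching a flux spectrum through the shield depth, attenuating each energy bin and feeding a fraction of the interacted flux back in as lower-energy secondaries. The cross sections, boundary spectrum, and secondary fraction below are illustrative placeholders, not BRYNTRN's interaction data base.

```python
import numpy as np

# Toy straight-ahead transport: a flux spectrum phi(E) marches forward in
# depth. Real BRYNTRN couples energy loss and secondary production through
# an interaction data base; this sketch keeps only exponential attenuation
# plus a crude secondary source. All numbers are illustrative.

n_e = 30                                   # coarse energy grid (30 points)
energies = np.linspace(10.0, 1000.0, n_e)  # MeV, illustrative bins
sigma = 0.01 + 1.0 / energies              # toy macroscopic cross section, 1/cm
dx = 1.0                                   # 1 cm spatial increment
depth_cm = 30.0                            # shield thickness

phi = np.exp(-energies / 400.0)            # boundary spectrum at x = 0 (toy)

x = 0.0
while x < depth_cm:
    interacted = sigma * phi * dx          # flux removed from each bin
    phi = phi - interacted
    phi[:-1] += 0.3 * interacted[1:]       # secondaries feed lower-energy bins
    x += dx

print("transmitted spectrum:", phi.round(4))
```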
Joint Services Electronics Program Annual Progress Report.
1985-11-01
Experiments with (one symbol memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding. Research topics include real-time statistical data processing (T. Kailath) and data compression for computer data structures (J. Gill).
COM-GEOM Interactive Display Debugger (CIDD)
1984-08-01
A program was written to speed up the process of formulating the Com-Geom data used by the Geometric Information for Targets (GIFT) computer code. Keywords: target description; GIFT; interactive computer graphics; solid geometry; combinatorial geometry; COM-GEOM. Reference: Lawrence W. Bain, Mathew J. Reisinger, "The GIFT Code User Manual; Volume I, Introduction and Input Requirements (U)," BRL Report No. 1802.
Review of numerical models to predict cooling tower performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B.M.; Nomura, K.K.; Bartz, J.A.
1987-01-01
Four state-of-the-art computer models developed to predict the thermal performance of evaporative cooling towers are reviewed: STAR and TEFERI (developed in Europe) and FACTS and VERA2D (developed in the U.S.). A fifth code, based on the Merkel analysis, is also discussed. Principal features of the codes, computation time, and storage requirements are described. A discussion of model validation is also provided.
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
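As a rough illustration of the weighted-sum metric described above, the Python sketch below combines a normalized centroid distance with a road-connectivity term. The weight, normalization constants, and candidate data are invented for the example; they are not the paper's actual definition.

```python
def zip_proximity(centroid_km: float, shared_roads: int,
                  max_km: float = 50.0, max_roads: int = 20,
                  w: float = 0.6) -> float:
    """Combine centroid distance with road-network connectivity by a
    weighted sum. Lower scores mean the two ZIP codes are 'closer'.
    The normalization constants and weight are illustrative."""
    d_norm = min(centroid_km / max_km, 1.0)            # 0 (same spot) .. 1 (far)
    r_norm = 1.0 - min(shared_roads / max_roads, 1.0)  # more shared roads -> closer
    return w * d_norm + (1.0 - w) * r_norm

# Ad-Hoc proximity between two ZIP codes:
print(zip_proximity(centroid_km=12.0, shared_roads=8))

# Top-K proximity: rank a neighborhood of candidate ZIP codes.
candidates = {"98105": (3.0, 12), "98199": (9.5, 4), "98055": (28.0, 1)}
top_k = sorted(candidates, key=lambda z: zip_proximity(*candidates[z]))[:2]
print(top_k)
```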
Navier-Stokes analysis of cold scramjet-afterbody flows
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.
1989-01-01
Progress on two efforts to develop Navier-Stokes solution codes is summarized. The first effort concerns a 3-D space-marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, computing the flow inside the nozzle for both an N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high order numerical integration schemes such as the fourth-order MacCormack and Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.
Trellis coding with multidimensional QAM signal sets
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J.
1993-01-01
Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
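The 2D building block of these constructions can be explored directly. The following sketch, under the assumption of a conventional square 16-QAM set with per-axis levels {-3, -1, 1, 3}, computes the average energy, the minimum squared Euclidean distance, and the average number of nearest neighbors, i.e. the quantities named above as selection criteria.

```python
import itertools
import numpy as np

# Square 16-QAM with per-axis levels {-3, -1, 1, 3} (a standard choice;
# the paper's finite sets are optimized and need not be this square grid).
levels = [-3.0, -1.0, 1.0, 3.0]
points = np.array(list(itertools.product(levels, levels)))

avg_energy = np.mean(np.sum(points**2, axis=1))   # mean |point|^2

# Minimum squared Euclidean distance over all point pairs.
d2 = [float(np.sum((a - b)**2))
      for a, b in itertools.combinations(points, 2)]
d2_min = min(d2)

# Average number of nearest neighbors per point.
def n_nearest(a):
    return sum(1 for b in points
               if 0.0 < float(np.sum((a - b)**2)) <= d2_min + 1e-9)

avg_neighbors = np.mean([n_nearest(a) for a in points])
print(d2_min, avg_energy, avg_neighbors)   # 4.0, 10.0, 3.0
```

For the square grid the corner points have two nearest neighbors, edge points three, and interior points four, giving the average of 3.0; a multidimensional set built from this 2D set inherits these distance properties.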
Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System
NASA Astrophysics Data System (ADS)
Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier
2013-04-01
The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.
Rufa, Magdalena; Schubel, Jens; Ulrich, Christian; Schaarschmidt, Jan; Tiliscan, Catalin; Bauer, Adrian; Hausmann, Harald
2015-07-01
At the moment, the main application of minimally invasive extracorporeal circulation (MiECC) is reserved for elective cardiac operations such as coronary artery bypass grafting (CABG) and/or aortic valve replacement. The purpose of this study was to compare the outcomes of emergency CABG operations performed with either MiECC or conventional extracorporeal circulation (CECC), with regard to the perioperative course and the occurrence of major adverse cardiac and cerebral events (MACCE). We analysed the emergency CABG operations performed by a single surgeon, between January 2007 and July 2013, in order to exclude differences in surgical technique. During this period, 187 emergency CABG patients (113 MiECC vs 74 CECC) were investigated retrospectively with respect to the following parameters: in-hospital mortality, MACCE, postoperative hospital stay and perioperative transfusion rate. The mean logistic European System for Cardiac Operative Risk Evaluation score was higher in the CECC group (MiECC 12.1 ± 16 vs CECC 15.0 ± 20.8, P = 0.15) and the number of bypass grafts per patient was similar in both groups (MiECC 2.94 vs CECC 2.93). There was no significant difference in the postoperative hospital stay or in major postoperative complications. The in-hospital mortality was higher in the CECC group, 6.8% versus MiECC 4.4% (P = 0.48). The perioperative transfusion rate was lower with MiECC than with CECC (MiECC 2.6 ± 3.2 vs CECC 3.8 ± 4.2 units of blood per patient, P = 0.025). In our opinion, the use of MiECC in urgent CABG procedures is safe, feasible and shows no disadvantages compared with the use of CECC. Emergency operations using the MiECC system showed a significantly lower blood transfusion rate and better results concerning the unadjusted in-hospital mortality. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
A CFD/CSD Interaction Methodology for Aircraft Wings
NASA Technical Reports Server (NTRS)
Bhardwaj, Manoj K.
1997-01-01
With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).
HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization
NASA Astrophysics Data System (ADS)
LAVIE, Baptiste
2015-12-01
Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam's razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code, we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).
Evaluation of three coding schemes designed for improved data communication
NASA Technical Reports Server (NTRS)
Snelsire, R. W.
1974-01-01
Three coding schemes designed for improved data communication are evaluated. Four block codes are assessed relative to a quality function, which depends on both the amount of data rejected and the error rate. The Viterbi maximum-likelihood decoding algorithm is reviewed as a decoding procedure. The evaluation is performed by simulating the system on a digital computer. Short-constraint-length rate-1/2 quick-look codes are studied, and their performance is compared to that of general nonsystematic codes.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAUs) for the 5-kWe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAUs under conditions of possible modes of failure which still permit continued system operation.
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the User's Guide, and describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
"Hour of Code": Can It Change Students' Attitudes toward Programming?
ERIC Educational Resources Information Center
Du, Jie; Wimmer, Hayden; Rada, Roy
2016-01-01
The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology; in particular, the recent progress is due to the emergence of multi-core high-performance computers. Parallel computing has therefore become a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using protocols of the message passing interface (MPI) and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
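The distinction between the two functions can be illustrated outside PHITS. The Python sketch below mimics shared-memory parallelism (OpenMP-like: threads updating one shared results array) and distributed-memory parallelism (MPI-like: worker processes with separate memory returning partial sums by explicit message passing). The 'history' workload is a placeholder, not PHITS physics.

```python
import multiprocessing as mp
import threading

def tally_histories(n: int) -> float:
    """Stand-in for a block of Monte Carlo particle histories."""
    acc = 0.0
    for i in range(n):
        acc += (i % 7) * 1e-7   # placeholder 'dose' contribution
    return acc

if __name__ == "__main__":
    # Shared-memory style (OpenMP-like): all threads see the same array.
    results = [0.0] * 4
    def worker(i: int) -> None:
        results[i] = tally_histories(100_000)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared-memory total:", sum(results))

    # Distributed-memory style (MPI-like): worker processes own their memory;
    # partial results come back only through explicit message passing.
    with mp.Pool(processes=4) as pool:
        parts = pool.map(tally_histories, [100_000] * 4)
    print("distributed-memory total:", sum(parts))
```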
NASA Technical Reports Server (NTRS)
Marconi, F.; Yaeger, L.
1976-01-01
A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performances are limited by the inhomogeneous background in the sensor field of view, which strongly impacts the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. parallelizing tools and compiler evaluation; 2. code cleanup and serial optimization using automated scripts; 3. development of a code generator for performance prediction; 4. automated partitioning; 5. automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
Polyanskiy, Mikhail N.
2015-01-01
We describe a computer code for simulating the amplification of ultrashort mid-infrared laser pulses in CO2 amplifiers and their propagation through arbitrary optical systems. This code is based on a comprehensive model that includes an accurate consideration of the CO2 active medium and a physical optics propagation algorithm, and takes into account the interaction of the laser pulse with the material of the optical elements. Finally, the application of the code for optimizing an isotopic regenerative amplifier is described.
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Wey, Thomas; Buehrle, Robert
2009-01-01
A computational fluid dynamics (CFD) code is used to simulate the J-2X engine exhaust in the center-body diffuser and spray chamber at the Spacecraft Propulsion Facility (B-2). The code, the space-time conservation element and solution element (CESE) Euler solver, is very robust at shock capturing. The CESE results are compared with independent analysis results obtained by using the National Combustion Code (NCC) and show excellent agreement.
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
NASA Technical Reports Server (NTRS)
Kandula, Max; Pearce, Daniel
1989-01-01
A steady incompressible three-dimensional (3-D) viscous flow analysis was conducted for the Space Shuttle Main Propulsion External Tank (ET)/Orbiter (ORB) propellant feed line quick separable 17-inch disconnect flapper valves for liquid oxygen (LO2) and liquid hydrogen (LH2). The main objectives of the analysis were to predict and correlate the hydrodynamic stability of the flappers and pressure drop with available water test data. Computational Fluid Dynamics (CFD) computer codes were procured at no cost from the public domain, and were modified and extended to carry out the disconnect flow analysis. The grid generator codes SVTGD3D and INGRID were obtained. NASA Ames Research Center supplied the flow solution code INS3D, and the color graphics code PLOT3D. A driver routine was developed to automate the grid generation process. Components such as pipes, elbows, and flappers can be generated with simple commands, and flapper angles can be varied easily. The flow solver INS3D code was modified to treat interior flappers, and other interfacing routines were developed, which include a turbulence model, a force/moment routine, a time-step routine, and initial and boundary conditions. In particular, an under-relaxation scheme was implemented to enhance the solution stability. Major physical assumptions and simplifications made in the analysis include the neglect of linkages, slightly reduced flapper diameter, and smooth solid surfaces. A grid size of 54 x 21 x 25 was employed for both the LO2 and LH2 units. Mixing length theory applied to turbulent shear flow in pipes formed the basis for the simple turbulence model. Results of the analysis are presented for LO2 and LH2 disconnects.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
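The same principle carries over to modern array-oriented environments. As an illustration in Python/NumPy rather than the report's FORTRAN (an assumption made purely for demonstration), the sketch below contrasts an element-by-element loop with an equivalent single array expression over independent elements, which is the form a vectorizing system can exploit.

```python
import numpy as np
import time

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Scalar-style loop: elements are processed one at a time. On a vector
# machine, a recurrence such as c[i] = c[i-1] + ... would additionally
# block vectorization outright; here the elements are independent.
t0 = time.perf_counter()
c = np.empty_like(a)
for i in range(a.size):
    c[i] = a[i] * b[i] + 1.0
t_loop = time.perf_counter() - t0

# Vectorizable form: one array expression over independent elements.
t0 = time.perf_counter()
c_vec = a * b + 1.0
t_vec = time.perf_counter() - t0

assert np.allclose(c, c_vec)
print(f"loop {t_loop:.3f} s vs vectorized {t_vec:.4f} s")
```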
User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0
NASA Technical Reports Server (NTRS)
Wright, William B.
1999-01-01
A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases due to its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes which have been generated in the NASA Glenn Icing Research Tunnel (IRT). This report will only describe the features of the code related to the use of the program. The report will not describe the inner workings of the code or the physical models used. This information is available in the form of several unpublished documents which will be collectively referred to as a Programmer's Manual for LEWICE in this report. These reports are intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.
NASA Astrophysics Data System (ADS)
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean-François; De Smedt, Isabelle; Van Roozendael, Michel
2016-04-01
Isoprene is one of the most abundantly emitted hydrocarbons in the atmosphere, with global annual emissions estimated at about 500 Tg, but with large uncertainties (Arneth et al., 2011). Here we use the source inversion approach to derive top-down biogenic isoprene emission estimates for the period between 2005 and 2014, constrained by observations of formaldehyde, a high-yield intermediate in the oxidation of isoprene in the atmosphere. Formaldehyde columns retrieved from the Ozone Monitoring Instrument (OMI) are used to constrain the IMAGESv2 global chemistry-transport model and its adjoint code (Stavrakou et al., 2009). The MEGAN-MOHYCAN isoprene emissions (Stavrakou et al., 2014) are used as the bottom-up inventory in the model. The inversions are performed separately for each year of the study period, and monthly emissions are derived for every model grid cell. The inversion results are compared to independent isoprene emissions from GUESS-ES (Arneth et al., 2007) and MEGAN-MACC (Sindelarova et al., 2014) and to top-down fluxes based on GOME-2 formaldehyde columns (Bauwens et al., 2014; Stavrakou et al., 2015). The mean global annual OMI-based isoprene flux for the period 2005-2014 is estimated to be 270 Tg, with small interannual variation. This estimate is on average 20% lower than the a priori inventory, but strong emission updates are inferred on the regional scale. The OMI-based emissions are substantially lower than the MEGAN-MACC and GUESS-ES inventories, but agree well with the isoprene fluxes constrained by GOME-2 formaldehyde columns. Strong emission reductions are derived over tropical regions. The seasonal pattern of isoprene emissions is generally well preserved after inversion and relatively consistent with other inventories, lending confidence to the MEGAN parameterization of the a priori inventory. In boreal regions the isoprene emission trend is positive and reinforced after inversion, whereas the inversion suggests negative trends in the rainforests of Equatorial Africa and South America. The top-down isoprene fluxes are available at a resolution of 0.5° x 0.5° between 2005 and 2014 at the GlobEmission website (http://www.globemission.eu). References: Arneth, A., et al.: Process-based estimates of terrestrial ecosystem isoprene emissions: incorporating the effects of a direct CO2-isoprene interaction, Atmos. Chem. Phys., 7(1), 31-53, 2007. Arneth, A., et al.: Global terrestrial isoprene emission models: sensitivity to variability in climate and vegetation, Atmos. Chem. Phys., 11(15), 8037-8052, 2011. Bauwens, M., et al.: Satellite-based isoprene emission estimates (2007-2012) from the GlobEmission project, in ACCENT-Plus Symposium 2013 Proceedings, 2014. Stavrakou, T., et al.: Isoprene emissions over Asia 1979-2012: impact of climate and land-use changes, Atmos. Chem. Phys., 14(9), 4587-4605, doi:10.5194/acp-14-4587-2014, 2014. Stavrakou, T., et al.: How consistent are top-down hydrocarbon emissions based on formaldehyde observations from GOME-2 and OMI?, Atmos. Chem. Phys., 15(20), 11861-11884, doi:10.5194/acp-15-11861-2015, 2015. Stavrakou, T., et al.: Evaluating the performance of pyrogenic and biogenic emission inventories against one decade of space-based formaldehyde columns, Atmos. Chem. Phys., 9(3), 1037-1060, doi:10.5194/acp-9-1037-2009, 2009.
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.E.; Roussin, R.W.; Gilpin, H.
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.
Multi-dimensional computer simulation of MHD combustor hydrodynamics
NASA Astrophysics Data System (ADS)
Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.
1991-04-01
Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
Enhanced fault-tolerant quantum computing in d-level systems.
Campbell, Earl T
2014-12-05
Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
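For readers unfamiliar with the multigrid concept, here is a minimal two-grid correction cycle for a 1D Poisson model problem, sketched in Python: smooth on the fine grid, restrict the residual to a coarse grid, solve the coarse error equation, prolong the correction back, and smooth again. This is only an illustration of the idea; Proteus applies the concept to the compressible flow equations, which is considerably more involved.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with u = 0 on the boundary."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - 2*u[1:-1] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h)
    r2 = residual(u, f, h)[::2].copy()      # restriction by injection
    n2 = r2.size
    # exact coarse solve of the small tridiagonal system A2 e2 = r2
    A2 = (np.diag(np.full(n2-2, 2.0)) +
          np.diag(np.full(n2-3, -1.0), 1) +
          np.diag(np.full(n2-3, -1.0), -1)) / (2*h)**2
    e2 = np.zeros(n2)
    e2[1:-1] = np.linalg.solve(A2, r2[1:-1])
    # prolongation: linear interpolation back to the fine grid
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), e2)
    return jacobi(u + e, f, h)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

The coarse grid damps the smooth error components that make the smoother alone converge slowly; that division of labor is the source of the convergence acceleration studied in the report.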
Nonuniform code concatenation for universal fault-tolerant quantum computing
NASA Astrophysics Data System (ADS)
Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza
2017-09-01
Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKeown, J.; Labrie, J.P.
1983-08-01
A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.
Improvements to Busquet's Non LTE algorithm in NRL's Hydro code
NASA Astrophysics Data System (ADS)
Klapisch, M.; Colombant, D.
1996-11-01
Implementation of the Non LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D Hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and not always convergent. We describe here modifications that address these two shortcomings. The modified method is quicker and more stable than the original. It also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables - a basis for the computation of radiation effects on the ionization balance in RADIOM - have a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow checking that the obtained average charge state is the true average. In addition, code optimization greatly reduced the computing time: the ratio of Non LTE to LTE computing times is now between 1.5 and 2.
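Stabilizing tricks of the kind mentioned here can be shown generically. The Python sketch below (an illustration of damped iteration in general, not the RADIOM algorithm) shows a fixed-point iteration that oscillates indefinitely at full steps but converges once the update is under-relaxed.

```python
def solve_fixed_point(g, x0, omega=1.0, tol=1e-12, max_iter=200):
    """Damped fixed-point iteration: x <- x + omega*(g(x) - x).
    omega = 1 is the plain iteration; omega < 1 under-relaxes the
    update, trading convergence speed for stability."""
    x = x0
    for k in range(max_iter):
        x_new = x + omega * (g(x) - x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("did not converge")

g = lambda x: 3.0 / x          # fixed point at sqrt(3)
# The plain iteration (omega = 1) just oscillates between x0 and 3/x0
# forever; with omega = 0.5 the damped update becomes Heron's method
# for sqrt(3) and converges in a handful of iterations.
print(solve_fixed_point(g, 1.0, omega=0.5))
```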
Evolvix BEST Names for semantic reproducibility across code2brain interfaces.
Loewe, Laurence; Scheuer, Katherine S; Keel, Seth A; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G; Moog, Cecilia L; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist-Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda-Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L; Freiberg, Erika; Waters, Noah P; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha
2017-01-01
Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general-purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long-term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder-brains to reader-brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. © 2016 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1*l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
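A sketch of how such a probability of correct decoding can be computed: if interleaving makes the outer-code symbol events independent, and a bounded-distance errors-and-erasures decoder succeeds whenever 2E + S < d_min (E symbol errors, S erasures), then P(correct) is a multinomial sum. The code parameters and channel probabilities in the Python sketch below are illustrative assumptions, not the values analyzed in the report.

```python
from math import comb

def p_correct(n: int, d_min: int, p_err: float, p_era: float) -> float:
    """Probability that a bounded-distance errors-and-erasures decoder
    succeeds for a length-n outer code with minimum distance d_min,
    assuming independent symbol events (the role of interleaving):
    success iff 2*E + S < d_min over E errors and S erasures."""
    p_ok = 1.0 - p_err - p_era
    total = 0.0
    for e in range(n + 1):
        for s in range(n - e + 1):
            if 2 * e + s >= d_min:
                continue
            total += (comb(n, e) * comb(n - e, s)
                      * p_err**e * p_era**s * p_ok**(n - e - s))
    return total

# Illustrative numbers: a length-255 outer code with d_min = 33, given
# the symbol error/erasure probabilities left by the inner decoder.
print(p_correct(n=255, d_min=33, p_err=1e-3, p_era=5e-3))
```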
NASA Technical Reports Server (NTRS)
Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry
1998-01-01
Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.
Porting a Hall MHD Code to a Graphic Processing Unit
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2011-01-01
We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
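For reference, the HLL numerical flux at the heart of such a scheme is compact enough to sketch. The Python version below implements the standard HLL formula for the 1D Euler equations with simple Davis wave-speed estimates; the ported code's GPU kernels, MUSCL-Hancock reconstruction, Hall terms, and divergence cleaning are not shown, and the Sod-tube states are just a test input.

```python
import numpy as np

GAMMA = 1.4

def euler_flux(U):
    """Physical flux F(U) for the 1D Euler equations, U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux between left and right states."""
    def u_and_sound(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        return u, np.sqrt(GAMMA * p / rho)
    uL, aL = u_and_sound(UL)
    uR, aR = u_and_sound(UR)
    sL = min(uL - aL, uR - aR)      # Davis wave-speed estimates
    sR = max(uL + aL, uR + aR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

# Sod shock tube interface states: (rho, rho*u, E) with E = p/(gamma-1).
UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
print(hll_flux(UL, UR))
```

Because this flux is evaluated independently at every cell interface, it maps naturally onto one GPU thread per interface, which is what makes such schemes attractive candidates for porting.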
NASA Technical Reports Server (NTRS)
Steinke, R. J.
1982-01-01
A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.
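The stage-stacking idea itself is simple to sketch. In the Python fragment below, the compressor is marched stage by stage, each stage contributing a pressure ratio and an efficiency-divided temperature rise; a real stage-stacking code evaluates each stage characteristic at the local corrected speed and flow, so the fixed design-point values used here are placeholder assumptions.

```python
def stack_stages(stages, p_in, t_in):
    """March through the compressor stage by stage, multiplying stage
    pressure ratios and accumulating the temperature rise. 'stages' is a
    list of (pressure_ratio, efficiency) pairs standing in for the stage
    characteristics; all numbers are illustrative."""
    gamma = 1.4
    p, t = p_in, t_in
    for pr, eta in stages:
        tau_ideal = pr ** ((gamma - 1.0) / gamma)   # isentropic T ratio
        t *= 1.0 + (tau_ideal - 1.0) / eta          # actual rise via efficiency
        p *= pr
    return p, t

# Five identical stages: pressure ratio 1.3 at 88% efficiency each.
p_out, t_out = stack_stages([(1.3, 0.88)] * 5, p_in=101325.0, t_in=288.15)
print(f"overall PR = {p_out / 101325.0:.2f}, exit T = {t_out:.1f} K")
```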
Higher order turbulence closure models
NASA Technical Reports Server (NTRS)
Amano, Ryoichi S.; Chai, John C.; Chen, Jau-Der
1988-01-01
Theoretical models are developed and numerical studies conducted on various types of flows, including both elliptic and parabolic. The purpose of this study is to find better higher order closure models for the computation of complex flows. This report summarizes three new achievements: (1) completion of the Reynolds-stress closure by developing a new pressure-strain correlation; (2) development of a parabolic code to compute jets and wakes; and (3) application to a flow through a 180 deg turnaround duct by adopting a boundary-fitted coordinate system. In the above-mentioned models, near-wall models are developed for the pressure-strain correlation and third moment and are incorporated into the transport equations. This addition improved the results considerably and is recommended for future computations. A new parabolic code to solve shear flows without coordinate transformations is developed and incorporated in this study. This code uses the structure of the finite volume method to solve the governing equations implicitly. The code was validated against experimental results available in the literature.
NASA Technical Reports Server (NTRS)
Pao, J. L.; Mehrotra, S. C.; Lan, C. E.
1982-01-01
A computer code based on an improved vortex filament/vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of lifting pressure distribution and the computer time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) arbitrary camber shape may be defined, and an option for exactly defining leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution of the heavy ion transport equation in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
NASA Technical Reports Server (NTRS)
Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.
2006-01-01
The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions to experimental laser doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, W.N.
1998-03-01
LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh HyperCard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (HyperCard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.
Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project
ERIC Educational Resources Information Center
Bolstad, Rachel
2016-01-01
This report evaluates a game coding workshop offered to young people and adults in seven public libraries around New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…
Kereiakes, Dean J; Yeh, Robert W; Massaro, Joseph M; Driscoll-Shempp, Priscilla; Cutlip, Donald E; Steg, P Gabriel; Gershlick, Anthony H; Darius, Harald; Meredith, Ian T; Ormiston, John; Tanguay, Jean-François; Windecker, Stephan; Garratt, Kirk N; Kandzari, David E; Lee, David P; Simon, Daniel I; Iancu, Adrian Corneliu; Trebacz, Jaroslaw; Mauri, Laura
2015-10-01
This study sought to compare rates of stent thrombosis and major adverse cardiac and cerebrovascular events (MACCE) (composite of death, myocardial infarction, or stroke) after coronary stenting with drug-eluting stents (DES) versus bare-metal stents (BMS) in patients who participated in the DAPT (Dual Antiplatelet Therapy) study, an international multicenter randomized trial comparing 30 versus 12 months of dual antiplatelet therapy in subjects undergoing coronary stenting with either DES or BMS. Despite antirestenotic efficacy of coronary DES compared with BMS, the relative risk of stent thrombosis and adverse cardiovascular events is unclear. Many clinicians perceive BMS to be associated with fewer adverse ischemic events and to require shorter-duration dual antiplatelet therapy than DES. Prospective propensity-matched analysis of subjects enrolled into a randomized trial of dual antiplatelet therapy duration was performed. DES- and BMS-treated subjects were propensity-score matched in a many-to-one fashion. The study design was observational for all subjects 0 to 12 months following stenting. A subset of eligible subjects without major ischemic or bleeding events were randomized at 12 months to continued thienopyridine versus placebo; all subjects were followed through 33 months. Among 10,026 propensity-matched subjects, DES-treated subjects (n = 8,308) had a lower rate of stent thrombosis through 33 months compared with BMS-treated subjects (n = 1,718, 1.7% vs. 2.6%; weighted risk difference -1.1%, p = 0.01) and a noninferior rate of MACCE (11.4% vs. 13.2%, respectively, weighted risk difference -1.8%, p = 0.053, noninferiority p < 0.001). DES-treated subjects have long-term rates of stent thrombosis that are lower than BMS-treated subjects. (The Dual Antiplatelet Therapy Study [DAPT study]; NCT00977938). Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Kappetein, Arie Pieter; Feldman, Ted E; Mack, Michael J; Morice, Marie-Claude; Holmes, David R; Ståhle, Elisabeth; Dawkins, Keith D; Mohr, Friedrich W; Serruys, Patrick W; Colombo, Antonio
2011-09-01
Long-term randomized comparisons of percutaneous coronary intervention (PCI) to coronary artery bypass grafting (CABG) in left main coronary (LM) disease and/or three-vessel disease (3VD) patients have been limited. This analysis compares 3-year outcomes in LM and/or 3VD patients treated with CABG or PCI with TAXUS Express stents. SYNTAX is an 85-centre randomized clinical trial (n= 1800). Prospectively screened, consecutive LM and/or 3VD patients were randomized if amenable to equivalent revascularization using either technique; if not, they were entered into a registry. Patients in the randomized cohort will continue to be followed for 5 years. At 3 years, major adverse cardiac and cerebrovascular events [MACCE: death, stroke, myocardial infarction (MI), and repeat revascularization; CABG 20.2% vs. PCI 28.0%, P< 0.001], repeat revascularization (10.7 vs. 19.7%, P< 0.001), and MI (3.6 vs. 7.1%, P= 0.002) were elevated in the PCI arm. Rates of the composite safety endpoint (death/stroke/MI 12.0 vs. 14.1%, P= 0.21) and stroke alone (3.4 vs. 2.0%, P= 0.07) were not significantly different between treatment groups. Major adverse cardiac and cerebrovascular event rates were not significantly different between arms in the LM subgroup (22.3 vs. 26.8%, P= 0.20) but were higher with PCI in the 3VD subgroup (18.8 vs. 28.8%, P< 0.001). At 3 years, MACCE was significantly higher in PCI- compared with CABG-treated patients. In patients with less complex disease (low SYNTAX scores for 3VD or low/intermediate terciles for LM patients), PCI is an acceptable revascularization, although longer follow-up is needed to evaluate these two revascularization strategies.
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
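The constraint-checking loop described above is straightforward to sketch. The following toy mirrors that logic; the command names, the point list, and the constrained() predicate are all hypothetical, not taken from the patent.

```python
def generate_stitch_code(points, constrained):
    """Emit a stitch command for each next point; if a constraint lies
    between the present and next stitching points, first emit a command
    changing the stitching head's condition (e.g., direction)."""
    code = []
    for present, nxt in zip(points, points[1:]):
        if constrained(present, nxt):
            code.append(f"CHANGE_DIRECTION ; constraint between {present} and {nxt}")
        code.append(f"STITCH X{nxt[0]:.2f} Y{nxt[1]:.2f}")
    return code

# Hypothetical obstacle beyond x = 1.5
print("\n".join(generate_stitch_code([(0, 0), (1, 0), (2, 0)],
                                     lambda p, q: q[0] > 1.5)))
```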
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework for the formal representation of ICD-10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using formally described ICD-10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD-10. Then we expanded the structured ICD-10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model used for the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD-11 revision.
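To make the idea of coding against a structured representation concrete, here is a deliberately tiny sketch: a lookup keyed on structured attributes of a disease name rather than on free text. The dictionary shape and attribute names are invented for illustration and bear no relation to the actual 74-link S-ICD10 model; the three ICD-10 category codes shown are real.

```python
# Toy structured lookup keyed on (disease, qualifier) pairs.
STRUCTURED_ICD = {
    ("pneumonia", "viral"): "J12",
    ("pneumonia", "bacterial"): "J15",
    ("diabetes mellitus", "type 2"): "E11",
}

def assign_code(disease, qualifier):
    """Return an ICD-10 category for a structured disease description."""
    return STRUCTURED_ICD.get((disease, qualifier), "unclassified")

print(assign_code("pneumonia", "viral"))  # J12
```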
Operational source receptor calculations for large agglomerations
NASA Astrophysics Data System (ADS)
Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael
2016-04-01
For air quality policy, an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much of it is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, and the time of the year, but also - very strongly - on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real-time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g., one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service), the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data and comprehensive calculations of chemistry and microphysics, driven by high quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g., ozone and PM) to assess the effectiveness of hypothetical reductions in air pollutant emissions that are implemented immediately, either within the agglomeration or outside. The effects are visualized as bar charts, showing resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission reduction measures but also indicate the relative importance of indigenous versus imported air pollution. The calculations are currently performed weekly by MET Norway for the Paris, London, Berlin, Oslo, Po Valley, and Rhine-Ruhr regions, and the results are provided free of charge at the MACC website (http://www.gmes-atmosphere.eu/services/aqac/policy_interface/regional_sr/). A proposal to extend this service to all EU capitals on a daily basis within the Copernicus Atmosphere Monitoring Service is currently under review. The tool is an important example illustrating the increased application of scientific tools to operational services that support air quality policy. This paper will describe the tool in more detail, focusing on the experimental setup, underlying assumptions, uncertainties, computational demand, and the usefulness for air quality policy. Options to apply the tool for agglomerations outside the EU will also be discussed (making reference to, e.g., PANDA, a European-Chinese collaboration project).
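The headline quantity here - how much of an agglomeration's pollution is indigenous versus imported - reduces to comparing a baseline run against a run with local emissions switched off. A minimal sketch of that bookkeeping, with invented concentration values rather than model output:

```python
def indigenous_fraction(c_baseline, c_without_local):
    """Fraction of the simulated concentration attributable to local
    (indigenous) sources, from a baseline run and a scenario run in
    which the agglomeration's own emissions are removed."""
    return (c_baseline - c_without_local) / c_baseline

# Illustrative hourly PM concentrations (ug/m3), not EMEP MSC-W output
f = indigenous_fraction(24.0, 15.0)
print(f"indigenous: {f:.0%}, imported: {1 - f:.0%}")  # ~38% vs ~62%
```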
Cloud Fingerprinting: Using Clock Skews To Determine Co Location Of Virtual Machines
2016-09-01
Cloud computing has quickly revolutionized computing practices of organizations, to include the Department of Defense. However, security concerns … (The remainder of this record is front-matter residue from the source PDF: report-documentation fields and table-of-contents fragments on the proliferation of cloud computing and the problem statement.)
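Clock-skew fingerprinting of the kind this title suggests typically follows Kohno et al.: collect remote timestamp offsets over time and estimate the device's clock skew from the trend. A minimal sketch with synthetic data, using a least-squares fit (the original work used a linear-programming bound instead):

```python
import numpy as np

rng = np.random.default_rng(7)
recv_t = np.linspace(0, 3600, 200)                 # measurer's receive times (s)
# Synthetic remote-clock offsets: 42 s base offset, 35 ppm skew, plus jitter
offset = 42.0 + 35e-6 * recv_t + rng.normal(0, 5e-4, recv_t.size)

slope, _ = np.polyfit(recv_t, offset, 1)           # slope = skew (s per s)
print(f"estimated clock skew: {slope * 1e6:.1f} ppm")
```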
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
NASA Technical Reports Server (NTRS)
Rajagopal, K. R.
1992-01-01
The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.
NASA Astrophysics Data System (ADS)
Frankenberg, Christian; Kulawik, Susan S.; Wofsy, Steven C.; Chevallier, Frédéric; Daube, Bruce; Kort, Eric A.; O'Dell, Christopher; Olsen, Edward T.; Osterman, Gregory
2016-06-01
In recent years, space-borne observations of atmospheric carbon dioxide (CO2) have been increasingly used in global carbon-cycle studies. In order to obtain added value from space-borne measurements, they have to satisfy stringent accuracy and precision requirements, with the latter being less crucial, as random error can be reduced simply by increasing the sample size. Validation of CO2 column-averaged dry air mole fractions (XCO2) relies heavily on measurements of the Total Carbon Column Observing Network (TCCON). Owing to the sparseness of the network and the requirements imposed on space-based measurements, independent additional validation is highly valuable. Here, we use observations from the High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER) Pole-to-Pole Observations (HIPPO) flights from 01/2009 through 09/2011 to validate CO2 measurements from satellites (Greenhouse Gases Observing Satellite - GOSAT, Tropospheric Emission Spectrometer - TES, Atmospheric Infrared Sounder - AIRS) and atmospheric inversion models (CarbonTracker CT2013B, Monitoring Atmospheric Composition and Climate (MACC) v13r1). We find that the atmospheric models capture the XCO2 variability observed in HIPPO flights very well, with correlation coefficients (r2) of 0.93 and 0.95 for CT2013B and MACC, respectively. Some larger discrepancies can be observed in profile comparisons at higher latitudes, in particular at 300 hPa during the peaks of either carbon uptake or release. These deviations can be up to 4 ppm and hint at misrepresentation of vertical transport. Comparisons with the GOSAT satellite are of comparable quality, with an r2 of 0.85, a mean bias μ of -0.06 ppm, and a standard deviation σ of 0.45 ppm. TES exhibits an r2 of 0.75, μ of 0.34 ppm, and σ of 1.13 ppm. For AIRS, we find an r2 of 0.37, μ of 1.11 ppm, and σ of 1.46 ppm, with latitude-dependent biases. For these comparisons at least 6, 20, and 50 atmospheric soundings have been averaged for GOSAT, TES, and AIRS, respectively. Overall, we find that GOSAT soundings over the remote Pacific Ocean mostly meet the stringent accuracy requirements of about 0.5 ppm for space-based CO2 observations.
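The three summary statistics quoted throughout (mean bias μ, standard deviation σ, and r²) are straightforward to reproduce for any matched model-observation pair; a minimal sketch with made-up XCO2 values:

```python
import numpy as np

def validation_stats(model, obs):
    """Mean bias, standard deviation of the differences, and r^2
    for matched model (or satellite) and reference XCO2 values."""
    model, obs = np.asarray(model), np.asarray(obs)
    d = model - obs
    r2 = np.corrcoef(model, obs)[0, 1] ** 2
    return d.mean(), d.std(ddof=1), r2

mu, sigma, r2 = validation_stats([395.1, 396.4, 397.2, 398.0],
                                 [395.0, 396.2, 397.5, 397.8])
print(f"mu = {mu:+.2f} ppm, sigma = {sigma:.2f} ppm, r2 = {r2:.2f}")
```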
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability of 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instruction, multiple data (MIMD) through multitasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
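The one-grid-per-processor decomposition described here translates naturally into a modern task pool. A toy sketch, with Python's multiprocessing standing in for the Cray multitasking strategy; the zone data and the relaxation sweep are invented:

```python
from multiprocessing import Pool

def relax_zone(args):
    """Stand-in for one implicit sweep over a single grid zone."""
    zone_id, values, sweeps = args
    for _ in range(sweeps):
        values = [0.5 * (a + b) for a, b in zip(values, values[1:] + values[:1])]
    return zone_id, values

if __name__ == "__main__":
    zones = [(i, [float(i)] * 64, 100) for i in range(9)]  # nine zones, as in MPMG
    with Pool(processes=4) as pool:
        done = pool.map(relax_zone, zones)   # each zone relaxed independently
    print(f"{len(done)} zones relaxed")
```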
Numerical, analytical, experimental study of fluid dynamic forces in seals
NASA Technical Reports Server (NTRS)
Shapiro, William; Artiles, Antonio; Aggarwal, Bharat; Walowit, Jed; Athavale, Mahesh M.; Preskwas, Andrzej J.
1992-01-01
NASA/Lewis Research Center is sponsoring a program for providing computer codes for analyzing and designing turbomachinery seals for future aerospace and engine systems. The program is made up of three principal components: (1) the development of advanced three-dimensional (3-D) computational fluid dynamics codes, (2) the production of simpler two-dimensional (2-D) industrial codes, and (3) the development of a knowledge-based system (KBS) that contains an expert system to assist in seal selection and design. The first task has been to concentrate on cylindrical geometries with straight, tapered, and stepped bores. Improvements have been made by adoption of a colocated grid formulation, incorporation of higher order, time accurate schemes for transient analysis, and high order discretization schemes for spatial derivatives. This report describes the mathematical formulations and presents a variety of 2-D results, including labyrinth and brush seal flows. Extensions to 3-D are presently in progress.
VizieR Online Data Catalog: Habitable zone code (Valle+, 2014)
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2014-06-01
A C computation code that provides as output the distance dm (in AU) for which the duration of habitability is longest, the corresponding duration tm (in Gyr), the width W (in AU) of the zone for which the habitability lasts tm/2, and the inner (Ri) and outer (Ro) boundaries of the 4 Gyr continuously habitable zone. The code reads the input file HZ-input.dat, containing in each row the mass of the host star (range: 0.70-1.10 M⊙), its metallicity (either Z (range: 0.005-0.004) or [Fe/H]), the helium-to-metal enrichment ratio (range: 1-3, standard value = 2), the equilibrium temperature for the habitable zone outer boundary computation (range: 169-203 K), and the planet Bond albedo (range: 0.0-1.0, Earth = 0.3). The output is printed on-screen. Compilation: just use your favorite C compiler: gcc hz.c -lm -o HZ (2 data files).
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2011 CFR
2011-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2012 CFR
2012-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
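As a minimal illustration of the transform-coding concept (using a DCT for concreteness; the 1971 paper examined other unitary transforms), the sketch below transforms one 8x8 chrominance block, coarsely quantizes the coefficients, and reconstructs:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.random((8, 8))                  # stand-in 8x8 chrominance block

coeffs = dctn(block, norm="ortho")          # forward 2-D transform
step = 0.1
coded = np.round(coeffs / step) * step      # coarse uniform quantization
recon = idctn(coded, norm="ortho")          # receiver-side reconstruction

print(f"max reconstruction error: {np.abs(recon - block).max():.3f}")
```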
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru
2010-12-15
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
NASA Astrophysics Data System (ADS)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
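To make the contrast concrete, here is a tiny sketch of the OOP style the paper advocates: grid state and the update rule live together in one object rather than in globals threaded through subroutines. It is a 1-D FDTD-like toy invented for illustration, not the paper's actual design.

```python
import numpy as np

class FieldLine1D:
    """Bundles field state with its time-stepping rule (encapsulation),
    in contrast to a Fortran 77 code passing arrays through COMMON blocks."""

    def __init__(self, n, courant=0.5):
        self.ez = np.zeros(n)        # electric field at nodes
        self.hy = np.zeros(n - 1)    # magnetic field at half-nodes
        self.c = courant

    def step(self):
        self.hy += self.c * np.diff(self.ez)
        self.ez[1:-1] += self.c * np.diff(self.hy)

sim = FieldLine1D(200)
sim.ez[100] = 1.0                    # inject a pulse
for _ in range(50):
    sim.step()
print(f"peak field after 50 steps: {sim.ez.max():.3f}")
```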
BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package
NASA Astrophysics Data System (ADS)
Mitran, Sorin
2011-04-01
The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.
1983-05-01
…empirical erosion model, with use of the debris-layer model optional. 1.1 Interface with ISPP: ISPP is a collection of computer codes designed to calculate … expansion with the ODK code, a two-dimensional, two-phase nozzle expansion with the TD2P code, and a turbulent boundary layer solution along the … (The remainder of this record is input-form residue from the source PDF; the recoverable fragments list an SPP namelist covering ODE, BAL, ODK, TD2P, TEL, and nozzle geometry, and a prompt to input thermodynamic data for temperatures below 300 K if needed.)
Buried Underwater Munitions and Clutter Discrimination
2010-10-01
…closest point of approach of the cylinder. The k-space amplitude beam pattern, sin(Δ)/Δ in Stanton's treatment, is obtained from the Fourier … simple modifications to be useful here. First, the amplitude of the incident plane wave P0 should be replaced by P1r0/r, where P1 is the magnitude of … (The remainder of this record is residue from a site-selection table with columns Instrument, Source, Information, and Site Selection, listing MACC Phase I input, location, resolution, age, and bathymetry from a SEA Ltd. SWATHPlus system, McNinch.)
Particle-in-cell simulations with charge-conserving current deposition on graphic processing units
NASA Astrophysics Data System (ADS)
Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren
2011-10-01
Recently, using CUDA, we have developed an electromagnetic particle-in-cell (PIC) code with charge-conserving current deposition for Nvidia graphics processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms have made efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
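One ingredient mentioned above, particle sorting, is easy to sketch: ordering particles by cell index makes current deposition touch contiguous memory, which is what lets a GPU kernel exploit shared memory. A CPU-side NumPy illustration (the grid size and particle count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.random(10_000), rng.random(10_000)   # particle positions in [0, 1)
nx = ny = 64
cell = (y * ny).astype(int) * nx + (x * nx).astype(int)

order = np.argsort(cell)                        # a counting sort would be O(N)
x, y, cell = x[order], y[order], cell[order]
print("particles sorted by cell:", bool(np.all(np.diff(cell) >= 0)))
```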
Inclusion of pressure and flow in a new 3D MHD equilibrium code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Fukuyama, Atsushi
2012-10-01
Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [4pt] [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012).[0pt] [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys, 81(1):102-136 (1989).
SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, X; Folkerts, M; Shi, F
Purpose: Graphics Processing Units (GPUs) have become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, it is quite common to see repeated efforts spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released an SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. The developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functioning. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations, and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
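For reference, weak-scaling efficiency (the metric behind the "reasonable up to ~10^3 cores" statement) is just the single-core runtime over the N-core runtime when the problem size grows with N. A sketch with hypothetical timings, not the MFiX numbers:

```python
def weak_scaling_efficiency(t_1core, t_ncore):
    """With work per core held fixed, ideal runtime is flat, so
    efficiency is E = T(1 core) / T(N cores)."""
    return t_1core / t_ncore

t1 = 100.0                                   # hypothetical seconds on 1 core
for cores, t in [(64, 108.0), (256, 115.0), (1024, 131.0)]:
    print(f"{cores:5d} cores: E = {weak_scaling_efficiency(t1, t):.2f}")
```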
Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach
NASA Technical Reports Server (NTRS)
Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.
1986-01-01
A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
NASA Astrophysics Data System (ADS)
Ethier, Stephane; Lin, Zhihong
2001-10-01
Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops each, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in a single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...
ATHENA 3D: A finite element code for ultrasonic wave propagation
NASA Astrophysics Data System (ADS)
Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.
2014-04-01
The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming. However, advances in processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone expressed in terms of stress and particle velocities. The particularity of the code lies in the fact that the discretization of the calculation domain uses a Cartesian regular 3D mesh, while a defect of complex geometry can be described using a separate (2D) mesh through the fictitious domains method. This combines the rapidity of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme, so that only local linear systems of small size have to be solved. The final step to reduce the computation time relies on the fact that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. The performance in terms of calculation time is also presented for both local computer and computation cluster use.
Three-dimensional structural analysis using interactive graphics
NASA Technical Reports Server (NTRS)
Biffle, J.; Sumlin, H. A.
1975-01-01
The application of computer interactive graphics to three-dimensional structural analysis was described, with emphasis on the following aspects: (1) structural analysis, and (2) generation and checking of input data and examination of the large volume of output data (stresses, displacements, velocities, accelerations). Handling of three-dimensional input processing with a special MESH3D computer program was explained. Similarly, a special code PLTZ may be used to perform all the needed tasks for output processing from a finite element code. Examples were illustrated.
1975-05-01
SINGER: A Computer Code for General Analysis of Two-Dimensional Concrete Structures, Volume I (AFWL-TR-74-228, Vol. I), May 1975. Distributed by the National Technical Information Service, U.S. Department of Commerce. (The record's other recoverable fragment is a citation: Conference on Earthquake Engineering, Santiago de Chile, 13-18 January 1969, Vol. I, Session B2, Chilean Association of Seismology and Earthquake Engineering.)
Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors
2001-03-01
From the list of tables: Table 1, heats of vaporization parameter for two-liner phase transformation - complete liner sublimation and/or combined liner … From the abbreviations list: 1-D, one-dimensional; 2-D, two-dimensional; ALE3D, Arbitrary-Lagrange-Eulerian (3-D) computer code; ALEGRA, 3-D Arbitrary-Lagrange-Eulerian computer code for … Recoverable prose fragment: … case-liner bond areas and in the grain inner bore to explore the pre-ignition and ignition phases, as well as burning evolution in rocket motor fast cookoff.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259-273], who asked whether homological codes with such parameters could exist at all.
Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing
NASA Technical Reports Server (NTRS)
Navaz, Homayun K.
2002-01-01
Computational Fluid Dynamics (CFD) has considerably evolved in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missile, and ramjet nozzles. One of the problems of interest to NASA has always been the performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines, it is necessary to resolve the flowfield on a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid is associated with prohibitively increasing computational time that can diminish the value of CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to that of the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF-defined combustion chamber wall geometry. These options minimize the learning effort for TDK users and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single-zone formulation has limited the code's applicability to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, a two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include a multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
Optimal periodic binary codes of lengths 28 to 64
NASA Technical Reports Server (NTRS)
Tyler, S.; Keston, R.
1980-01-01
Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are (1) a small peak sidelobe in the autocorrelation function and (2) a small sum of squares of the sidelobes in the autocorrelation function.
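Both figures of merit are cheap to evaluate, and for short lengths an exhaustive search is feasible; a brute-force sketch (the paper's lengths of 28 to 64 require smarter search strategies than this):

```python
from itertools import product

def periodic_sidelobes(seq):
    """Off-peak values of the periodic autocorrelation of a +/-1 sequence."""
    n = len(seq)
    return [sum(seq[i] * seq[(i + k) % n] for i in range(n))
            for k in range(1, n)]

# Exhaustive search over length-13 binary codes for the smallest peak sidelobe
peak, best = min((max(abs(s) for s in periodic_sidelobes(c)), c)
                 for c in product((1, -1), repeat=13))
print("best peak sidelobe:", peak)
```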
CFL3D User's Manual (Version 5.0)
NASA Technical Reports Server (NTRS)
Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.
1998-01-01
This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
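The bottleneck in question is the column-by-column finite-difference evaluation of the Jacobian: one extra residual evaluation per unknown, each independent of the others (hence embarrassingly parallel). A serial sketch in Python for concreteness:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-7):
    """Forward-difference Jacobian of f at x; each column needs one extra
    evaluation of f, and the columns are independent of one another."""
    f0 = f(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):          # this loop is what gets parallelized
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - f0) / h
    return J

f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2])
print(jacobian_fd(f, np.array([1.0, 2.0])))   # ~[[2, 1], [1, -4]]
```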
NASA Technical Reports Server (NTRS)
Eklund, Dean R.; Northam, G. B.; Mcdaniel, J. C.; Smith, Cliff
1992-01-01
A CFD (Computational Fluid Dynamics) competition was held at the Third Scramjet Combustor Modeling Workshop to assess the current state-of-the-art in CFD codes for the analysis of scramjet combustors. Solutions from six three-dimensional Navier-Stokes codes were compared for the case of staged injection of air behind a step into a Mach 2 flow. This case was investigated experimentally at the University of Virginia and extensive in-stream data was obtained. Code-to-code comparisons have been made with regard to both accuracy and efficiency. The turbulence models employed in the solutions are believed to be a major source of discrepancy between the six solutions.
NASA Astrophysics Data System (ADS)
Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.
2010-04-01
An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work is contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system modeling was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding, with minimal computational complexity.
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; the suite performs many functions in support of the assessment.
Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.
2009-01-01
The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is a refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with that computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The heating rate and pressure computed using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. The three sets of heating rate and pressure solutions agree well. Further thermal analysis of the avionics ring of the Service Module was performed using MSC Patran/Pthermal. The temperature results showed that thermal protection is necessary because of significant heating from the plume.
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
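As a pointer to how a density constraint enters, the following is a minimal special case (assuming proportional renormalization to a fixed total density; the notation is ours, not the paper's): perturbing isotope k and rescaling all densities to preserve the total \(\rho = \sum_j \rho_j\) yields the constrained relative sensitivity

\[
\tilde{S}_k = S_k - \frac{\rho_k}{\rho}\sum_j S_j,
\qquad
S_j = \frac{\rho_j}{R}\,\frac{\partial R}{\partial \rho_j},
\]

where \(R\) is the response of interest (e.g., \(k_{\mathrm{eff}}\)) and the unconstrained \(S_j\) are the individual isotope density sensitivities computed by the adjoint capability.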
Computation of Sound Generated by Flow Over a Circular Cylinder: An Acoustic Analogy Approach
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Cox, Jared S.; Rumsey, Christopher L.; Younis, Bassam A.
1997-01-01
The sound generated by viscous flow past a circular cylinder is predicted via the Lighthill acoustic analogy approach. The two-dimensional flow field is predicted using two unsteady Reynolds-averaged Navier-Stokes solvers. Flow field computations are made for laminar flow at three Reynolds numbers (Re = 1000, Re = 10,000, and Re = 90,000) and with two different turbulence models at Re = 90,000. The unsteady surface pressures are utilized by an acoustics code that implements Farassat's formulation 1A to predict the acoustic field. The acoustics code is a 3-D code; 2-D results are found by using a long cylinder length. The 2-D predictions overpredict the acoustic amplitude; however, if correlation lengths in the range of 3 to 10 cylinder diameters are used, the predicted acoustic amplitude agrees well with experiment.
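The correlation-length correction at the end of the abstract can be rationalized with a standard incoherent-sections argument (a generic scaling, not necessarily the exact correction used in the paper): if surface pressures on a cylinder of span \(L\) are coherent only over a spanwise length \(L_c \ll L\), the sections radiate independently, so

\[
\overline{p^2}_{3D} \;\approx\; \frac{L_c}{L}\,\overline{p^2}_{2D},
\]

and the fully correlated 2-D prediction is reduced in amplitude by a factor of \(\sqrt{L_c/L}\).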
Majorana fermion surface code for universal quantum computation
Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang
2015-12-10
In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.
Lewis Structures Technology, 1988. Volume 2: Structural Mechanics
NASA Technical Reports Server (NTRS)
1988-01-01
The Lewis Structures Division performs and disseminates results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The engineering community was familiarized with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, and applications of structural dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of the Context-based Binary Arithmetic Coding (CBAC) [2] engine in AVS 2.0 [1] and the Context-Adaptive Binary Arithmetic Coder (CABAC) [3] engine in HEVC [4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, complementing our previous work in which the CBAC was embedded in the HEVC [5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplified the rate estimation table by reducing the bit depth of its fractional part from 8 bits to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
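A toy illustration of that fractional-bit-depth reduction follows (the probability states and table layout are hypothetical, not the actual CBAC/CABAC tables): rate-estimation tables store fixed-point approximations of -log2(p), and dropping the fractional part from 8 bits to 2 coarsens the per-symbol cost estimate used in RDO decisions.

```python
import math

def rate_table(frac_bits, n_states=64):
    """Fixed-point estimates of -log2(p) for a set of LPS probabilities."""
    scale = 1 << frac_bits
    probs = [0.5 * 0.95 ** s for s in range(n_states)]  # hypothetical states
    return [round(-math.log2(p) * scale) for p in probs]

coarse = rate_table(2)  # 2 fractional bits: cheaper lookups and adders in RDO
fine = rate_table(8)    # 8 fractional bits: closer to the true bit cost
err = max(abs(c / 4 - f / 256) for c, f in zip(coarse, fine))
print(f"max per-symbol rate error: {err:.3f} bits")
```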
Extensions and improvements on XTRAN3S
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime, has concentrated on the following areas: (1) maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.
POLYSHIFT Communications Software for the Connection Machine System CM-200
George, William; Brickner, Ralph G.; Johnsson, S. Lennart
1994-01-01
We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes that use regular grids, such as finite-difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3-4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
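For readers unfamiliar with the primitive, the sketch below mimics what a PSHIFT-style routine provides on a regular grid (a NumPy stand-in under our own naming, not the CMSSL interface): circular or end-off shifts of an array along one axis, from which nearest-neighbor stencils are assembled.

```python
import numpy as np

def pshift(a, shift, axis, circular=True, fill=0):
    """Circular or end-off shift along one axis of a regular grid (illustrative)."""
    if circular:
        return np.roll(a, shift, axis=axis)
    out = np.full_like(a, fill)
    src = [slice(None)] * a.ndim
    dst = [slice(None)] * a.ndim
    if shift >= 0:
        src[axis] = slice(0, a.shape[axis] - shift)
        dst[axis] = slice(shift, None)
    else:
        src[axis] = slice(-shift, None)
        dst[axis] = slice(0, a.shape[axis] + shift)
    out[tuple(dst)] = a[tuple(src)]
    return out

# e.g. a nearest-neighbor (Laplacian) stencil on a 2-D grid from shifted copies:
u = np.random.rand(8, 8)
lap = (pshift(u, 1, 0) + pshift(u, -1, 0) +
       pshift(u, 1, 1) + pshift(u, -1, 1) - 4 * u)
```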
NEAMS Update. Quarterly Report for October - December 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.
2012-02-16
The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed.
This important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.
NASA Technical Reports Server (NTRS)
Stricker, L. T.
1975-01-01
The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.
Annual Report of the ECSU Home-Institution Support Program (1993)
1993-09-30
[Abstract not recoverable from the scanned original. Surviving fragments mention student participants (ISSP program attendance, graduate school plans), the program components (computer equipment; a visiting lecturer series; student pay and faculty release time; student/sponsor travel), and administrative code numbers.]
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial profiles of gas-phase composition, temperature, velocity, and particle size distribution are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle-handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Myint, Kyaw Z.; Xie, Xiang-Qun
2015-01-01
This chapter focuses on the fingerprint-based artificial neural networks QSAR (FANN-QSAR) approach to predict biological activities of structurally diverse compounds. Three types of fingerprints, namely ECFP6, FP2, and MACCS, were used as inputs to train the FANN-QSAR models. The results were benchmarked against known 2D and 3D QSAR methods, and the derived models were used to predict cannabinoid (CB) ligand binding activities as a case study. In addition, the FANN-QSAR model was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds. We discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research. PMID:25502380
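A minimal sketch of the fingerprint-to-activity pipeline is given below, with synthetic data standing in for real fingerprints and affinities (the network size, data, and library choice are our assumptions; the chapter's FANN-QSAR models were trained on ECFP6, FP2, and MACCS fingerprints of real compounds):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are 166-bit MACCS-style fingerprints,
# y is a binding-affinity-like target; real work derives X from structures.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 166)).astype(float)
y = X[:, :20].sum(axis=1) / 4.0 + rng.normal(0, 0.2, 500)  # synthetic signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out set:", model.score(X_te, y_te))
```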
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru; Frolova, T. A.
2016-12-15
The cross sections of nuclear reactions involving emission of clusters of light nuclei in proton collisions with a heavy-metal target are computed for incident-proton energies between 30 MeV and 2.6 GeV. The calculation relies on the ALICE/ASH and CASCADE/INPE computer codes. The parameters determining the pre-equilibrium cluster emission are varied in the computation.
A prototype Knowledge-Based System to Aid Space System Restoration Management.
1986-12-01
[Abstract not recoverable from the scanned original. Surviving front-matter fragments list appendices on computation of weights with the Analytic Hierarchy Process (AHP), ART code, and test outputs, and figures on Earth coverage with geosynchronous satellites, space system configurations, AHP hierarchies, and the TALK schema structure with its ART code.]
Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO
NASA Technical Reports Server (NTRS)
Stallworth, R.; Meyers, C. A.; Stinson, H. C.
1989-01-01
Results are presented from a comparison study of two computer codes for crack growth analysis: NASCRAC and NASA/FLAGRO. The two computer codes gave compatibly conservative results when the part-through-crack solutions were compared with experimental test data. The codes correlated well for the through-crack-at-a-lug solution, for which NASA/FLAGRO gave the more conservative results.
Computational Predictions of the Performance of Wright 'Bent End' Propellers
NASA Technical Reports Server (NTRS)
Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)
2002-01-01
Computational analyses of two reproductions of the Wright brothers' 1911 'Bent End' wooden propeller have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a research project on the performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a computational fluid dynamics (CFD) code used mainly to compute the lift and drag coefficients at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), a propeller design code used to compute the propeller thrust coefficient, power coefficient, and propulsive efficiency.
[The QR code in society, economy and medicine--fields of application, options and chances].
Flaig, Benno; Parzeller, Markus
2011-01-01
2D codes like the QR Code ("Quick Response") are becoming more and more common in society and medicine. The application spectrum and benefits in medicine and other fields are described. 2D codes can be created free of charge on any computer with internet access without any previous knowledge. The codes can be easily used in publications, presentations, on business cards and posters. Editors choose between contact details, text or a hyperlink as information behind the code. At expert conferences, linkage by QR Code allows the audience to download presentations and posters quickly. The documents obtained can then be saved, printed, processed etc. Fast access to stored data in the internet makes it possible to integrate additional and explanatory multilingual videos into medical posters. In this context, a combination of different technologies (printed handout, QR Code and screen) may be reasonable.
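Generating such a code is indeed trivial on any computer; for example, with the open-source Python qrcode package (the URL below is illustrative):

```python
import qrcode  # pip install qrcode[pil]

# Encode the download link for a conference poster (hypothetical URL)
img = qrcode.make("https://example.org/conference/poster123.pdf")
img.save("poster_qr.png")  # print the PNG on the poster or business card
```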
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.
COMSAC: Computational Methods for Stability and Control. Part 2
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.
Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral
NASA Technical Reports Server (NTRS)
Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)
2002-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (computational fluid dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute the time-dependent acoustic pressure in the nonlinear source field. Based on the specification of the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and far-field sound pressure levels are computed via the Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predicted sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
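For reference, one common stationary-surface form of the Kirchhoff integral, combining the surface pressure with its temporal and normal derivatives at retarded time (our notation; the report's exact formulation may differ), is

\[
p'(\mathbf{x},t) = \frac{1}{4\pi}\oint_S \left[\frac{p'}{r^2}\frac{\partial r}{\partial n}
+ \frac{1}{c\,r}\frac{\partial r}{\partial n}\frac{\partial p'}{\partial \tau}
- \frac{1}{r}\frac{\partial p'}{\partial n}\right]_{\tau = t - r/c} \mathrm{d}S,
\]

where \(r = |\mathbf{x}-\mathbf{y}|\) is the distance from the surface element to the observer, \(n\) the outward normal, and \(c\) the speed of sound; everything inside the brackets comes from the CFD solution on the surface.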
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murata, K.K.; Williams, D.C.; Griffith, R.O.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large share of computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit-rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. A similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, while trading off a bit-rate overhead of up to 5.2%.
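The flavor of a pattern-based integer search with early termination can be seen in this simplified sketch (a plus-pattern search with step halving over SAD cost; the HM algorithm and the paper's framework are considerably more elaborate):

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference window."""
    h, w = block.shape
    return int(np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum())

def pattern_search(block, ref, y0, x0, max_step=8, threshold=None):
    """Integer-sample motion search: plus-pattern refinement with step halving
    and optional early termination (illustrative only)."""
    best = (sad(block, ref, y0, x0), y0, x0)
    step = max_step
    while step >= 1:
        improved = True
        while improved:
            improved = False
            for dy, dx in ((step, 0), (-step, 0), (0, step), (0, -step)):
                y, x = best[1] + dy, best[2] + dx
                if (0 <= y <= ref.shape[0] - block.shape[0] and
                        0 <= x <= ref.shape[1] - block.shape[1]):
                    c = sad(block, ref, y, x)
                    if c < best[0]:
                        best, improved = (c, y, x), True
            if threshold is not None and best[0] < threshold:
                return best  # early termination: cost already "good enough"
        step //= 2
    return best
```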
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and a computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8), and pressure-enthalpy diagrams are presented for CBrF3.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically; the smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
NASA Astrophysics Data System (ADS)
Messitt, Donald G.
1999-11-01
The WIND code was employed to compute the hypersonic flow in the shock wave boundary layer merged region near the leading edge of a sharp flat plate. Solutions were obtained at Mach numbers from 9.86 to 15.0 and free stream Reynolds numbers of 3,467 to 346,700 in⁻¹ (1.365 × 10⁵ to 1.365 × 10⁷ m⁻¹) for perfect gas conditions. The numerical results indicated a merged shock wave and viscous layer near the leading edge. The merged region grew in size with increasing free stream Mach number, proportional to M∞²/Re∞. Profiles of the static pressure in the merged region indicated a strong normal pressure gradient (∂p/∂y). The normal pressure gradient has been neglected in previous analyses which used the boundary layer equations. The shock wave near the leading edge was thick, as has been experimentally observed. Computed shock wave locations and surface pressures agreed well within experimental error for values of the rarefaction parameter χ̄/M∞² < 0.3. A preliminary analysis using kinetic theory indicated that rarefied flow effects became important above this value. In particular, the WIND solution agreed well in the transition region between the merged flow, which was predicted well by the theory of Li and Nagamatsu, and the downstream region where the strong interaction theory applied. Additional computations with the NPARC code, WIND's predecessor, demonstrated the ability of the code to compute hypersonic inlet flows at free stream Mach numbers up to 20. Good qualitative agreement with measured pressure data indicated that the code captured the important physical features of the shock wave - boundary layer interactions. The computed surface and pitot pressures fell within the combined experimental and numerical error bounds for most points. The calculations demonstrated the need for extremely fine grids when computing hypersonic interaction flows.
GIRAFE, a campaign forecast tool for anthropogenic and biomass burning plumes
NASA Astrophysics Data System (ADS)
Fontaine, Alain; Mari, Céline; Drouin, Marc-Antoine; Lussac, Laure
2015-04-01
GIRAFE (reGIonal ReAl time Fire plumEs, http://girafe.pole-ether.fr, alain.fontaine@obs-mip.fr) is a forecast tool supported by the French atmospheric chemistry data centre Ether (CNES and CNRS), built on the Lagrangian particle dispersion model FLEXPART coupled with ECMWF meteorological fields and emission inventories. GIRAFE was used during the CHARMEX campaign (Chemistry-Aerosol Mediterranean Experiment, http://charmex.lsce.ipsl.fr) to provide daily 5-day plume trajectory forecasts over the Mediterranean Sea. For this field experiment, the Lagrangian model was used to mimic carbon monoxide pollution plumes emitted by either anthropogenic or biomass burning sources. Sources from major industrial areas such as Fos-Berre and the Po valley were extracted from the MACC-TNO inventory. Biomass burning sources were estimated based on MODIS fire detection. Comparison with the MACC and CHIMERE APIFLAME models revealed that GIRAFE followed pollution plumes from small, short-duration fires that were not captured by low-resolution models. GIRAFE was used as a decision-making tool to schedule field campaign activities such as airborne operations and balloon launches. Thanks to recent features, GIRAFE is able to read the inventories of the ECCAD database (http://eccad.pole-ether.fr). Global inventories such as MACCITY and ECLIPSE will be used to predict CO plume trajectories from major urban and industrial sources over West Africa for the DACCIWA campaign (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa).
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. The summary includes both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J; Gehl, S M
1979-01-01
GRASS-SST and FASTGRASS are mechanistic computer codes for predicting fission-gas behavior in UO2-base fuels during steady-state and transient conditions. FASTGRASS was developed to satisfy the need for a fast-running alternative to GRASS-SST. Although based on GRASS-SST, FASTGRASS is approximately an order of magnitude quicker in execution. The GRASS-SST transient analysis has evolved through comparisons of code predictions with the fission-gas release and physical phenomena that occur during reactor operation and transient direct-electrical-heating (DEH) testing of irradiated light-water reactor fuel. The FASTGRASS calculational procedure is described in this paper, along with models of key physical processes included in both FASTGRASS and GRASS-SST. Predictions of fission-gas release obtained from GRASS-SST and FASTGRASS analyses are compared with experimental observations from a series of DEH tests. The major conclusion is that the computer codes should include an improved model for the evolution of grain-edge porosity.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
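For orientation, a generic first-order-in-time form of such an ADI scheme (schematic, under assumed notation, and not quoted from the manual) is

\[
\left(I + \Delta t\,\delta_\xi A^n\right)\left(I + \Delta t\,\delta_\eta B^n\right)\Delta Q^n
= -\,\Delta t\,R\!\left(Q^n\right),
\qquad
Q^{n+1} = Q^n + \Delta Q^n,
\]

where \(A = \partial E/\partial Q\) and \(B = \partial F/\partial Q\) are flux Jacobians from the second-order Taylor linearization, \(\delta_\xi, \delta_\eta\) are spatial difference operators in the body-fitted coordinates, and \(R\) is the residual; each factor requires only block-tridiagonal solves along one coordinate direction.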
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums, and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first must be converted (compiled) into an executable form on the user's computer. Although program testing was done in a UNIX (tradename of American Telephone and Telegraph Company) computer environment, it is anticipated that only a system-dependent date-and-time function may need to be changed for adaptation to other computer platforms that accept standard Fortran code.
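The heart of the meter-reading reduction is drift removal between base-station reoccupations; a minimal sketch assuming purely linear drift is shown below (the report's iterative procedure allows for non-smooth drift, and all station values here are invented):

```python
from datetime import datetime

# Hypothetical loop: readings (already converted to mGal) beginning and
# ending at a base station, whose apparent change defines the drift.
readings = [
    (datetime(2000, 6, 1,  8, 30), 978123.42, "BASE"),
    (datetime(2000, 6, 1, 10, 15), 978145.10, "STN1"),
    (datetime(2000, 6, 1, 12,  5), 978131.77, "STN2"),
    (datetime(2000, 6, 1, 14,  0), 978123.55, "BASE"),
]

t0, g0, _ = readings[0]
t1, g1, _ = readings[-1]
drift_rate = (g1 - g0) / (t1 - t0).total_seconds()  # mGal per second

for t, g, name in readings:
    corrected = g - drift_rate * (t - t0).total_seconds()
    print(f"{name}: {corrected:.2f} mGal")
```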
Nonoccurrence of Negotiation of Meaning in Task-Based Synchronous Computer-Mediated Communication
ERIC Educational Resources Information Center
Van Der Zwaard, Rose; Bannink, Anne
2016-01-01
This empirical study investigated the occurrence of meaning negotiation in an interactive synchronous computer-mediated second language (L2) environment. Sixteen dyads (N = 32) consisting of nonnative speakers (NNSs) and native speakers (NSs) of English performed 2 different tasks using videoconferencing and written chat. The data were coded and…
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
PASCO: Structural panel analysis and sizing code: Users manual - Revised
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Stroud, W. J.; Durling, B. J.; Hennessy, K. W.
1981-01-01
A computer code denoted PASCO is described for analyzing and sizing uniaxially stiffened composite panels. Buckling and vibration analyses are carried out with a linked plate analysis computer code denoted VIPASA, which is included in PASCO. Sizing is based on nonlinear mathematical programming techniques and employs a computer code denoted CONMIN, also included in PASCO. Design requirements considered are initial buckling, material strength, stiffness and vibration frequency. A user's manual for PASCO is presented.
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D, a program for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. The code is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code can be run on various computer platforms (supercomputers, workstations, and parallel processors), and the GUI (graphical user interface) provides a user-friendly tool for setting up and running the code.
Global forestry emission projections and abatement costs
NASA Astrophysics Data System (ADS)
Böttcher, H.; Gusti, M.; Mosnier, A.; Havlik, P.; Obersteiner, M.
2012-04-01
In this paper we present forestry emission projections and associated Marginal Abatement Cost Curves (MACCs) for individual countries, based on economic, social and policy drivers. The activities cover deforestation, afforestation, and forestry management. The global model tools G4M and GLOBIOM, developed at IIASA, are applied. GLOBIOM uses global scenarios of population, diet, GDP and energy demand to inform G4M about future land and commodity prices and demand for bioenergy and timber. G4M projects emissions from afforestation, deforestation and management of existing forests. Mitigation measures are simulated by introducing a carbon tax. Mitigation activities like reducing deforestation or enhancing afforestation are not independent of each other. In contrast to existing forestry mitigation cost curves, the presented MACCs are not developed for individual activities but for total forest land management, which makes the estimated potentials more realistic. In the assumed baseline, gross deforestation drops globally from about 12 Mha in 2005 to below 10 Mha after 2015 and reaches 0.5 Mha in 2050. Afforestation rates remain fairly constant at about 7 Mha annually. Although we observe a net increase of global forest area after 2015, net emissions from deforestation and afforestation remain positive until 2045, because newly afforested areas accumulate carbon rather slowly. About 200 Mt CO2 per year could be mitigated in Annex 1 countries in 2030 at a carbon price of 50 USD; the potential for forest management improvement is very similar. Above 200 USD the potential is clearly constrained for both options. In Non-Annex 1 countries, avoided deforestation can achieve about 1200 Mt CO2 per year at a price of 50 USD. This potential is less constrained than in Annex 1 countries, reaching 1800 Mt CO2 annually in 2030 at a price of 1000 USD. The potential from additional afforestation is rather limited, owing to the high baseline afforestation rates assumed. In addition, we present results of several sensitivity analyses that were run to better understand model uncertainties and the mechanisms of drivers such as agricultural productivity, GDP, wood demand and national corruption rates.
Computer program optimizes design of nuclear radiation shields
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs, and different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
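A minimal picture of such a multi-device decomposition is a strip split of the 2-D domain with ghost-row exchange between neighbors; the mpi4py sketch below shows only the communication pattern (SMAUG+'s actual GPU-side scheme and halo widths are not reproduced here):

```python
import numpy as np
from mpi4py import MPI  # run under MPI, e.g. mpirun -n 4 python halo.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx = 1000                                   # global rows; assume size divides nx
u = np.random.rand(nx // size + 2, 1000)    # local strip + 2 ghost rows

up, down = (rank - 1) % size, (rank + 1) % size  # periodic, as in Orszag-Tang
# send first interior row up, receive bottom ghost row from below:
comm.Sendrecv(u[1], dest=up, recvbuf=u[-1], source=down)
# send last interior row down, receive top ghost row from above:
comm.Sendrecv(u[-2], dest=down, recvbuf=u[0], source=up)
```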
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Final report for the Tera Computer TTI CRADA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, G.S.; Pavlakos, C.; Silva, C.
1997-01-01
Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path-planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has only just completed the MTA hardware. All of the CRADA work had to be done on simulators.
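The latency-hiding principle can be caricatured in software: if each task spends most of its time stalled, oversubscribing with many concurrent tasks keeps throughput high because some task is always ready to run. The sketch below is only a software analogy to the MTA's hardware thread switching, with a sleep standing in for a memory stall.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def task(i: int) -> int:
        time.sleep(0.05)   # stand-in for a long memory stall
        return i * i       # brief burst of actual computation

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(task, range(256)))
    print(f"256 mostly-stalled tasks in {time.perf_counter() - t0:.2f} s "
          f"(~{256 * 0.05:.1f} s if run one after another)")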
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2001-01-01
The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (of a radiator or photovoltaic array) in kelvin and the surface absorptivity-to-emissivity ratios, alpha/epsilon with respect to the solar radiation and (alpha/epsilon)_2 with respect to planetary radiation; and (4) the surface view factor to space, F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
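The underlying balance equates absorbed solar flux with re-radiated thermal flux, giving T = [(alpha/epsilon) S cos(theta) / (sigma F)]^(1/4) for a simple sun-lit flat plate. Below is a minimal sketch of that balance, using an assumed solar-constant value and neglecting the planetary albedo and IR inputs that TSCALC also models.

    import math

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
    S0 = 1361.0             # solar constant at 1 AU, W/m^2 (assumed value)

    def equilibrium_temp(au, illumang_deg, alpha_over_eps, view_factor):
        """Flat-plate radiative-equilibrium temperature in kelvin."""
        flux = S0 / au**2 * math.cos(math.radians(illumang_deg))
        return (alpha_over_eps * flux / (SIGMA * view_factor)) ** 0.25

    # A sun-facing surface with alpha/epsilon = 0.3 and a full view of space:
    print(f"{equilibrium_temp(1.0, 0.0, 0.3, 1.0):.0f} K at 1 AU")
    print(f"{equilibrium_temp(5.2, 0.0, 0.3, 1.0):.0f} K at 5.2 AU (Jupiter)")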
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for simulation of the incompressible Navier-Stokes equations in channel and duct flows with strong grid stretching near walls. The code leverages the fractional-step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which prove superior to the traditional and widely used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes are L-stable in their implicit component and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, the new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
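The IMEX idea is to treat the stiff (e.g., viscous) term implicitly and the nonstiff term explicitly within the same step. A first-order toy version on du/dt = -k*u + sin(t) follows, as a didactic stand-in rather than Diablo's actual low-storage Runge-Kutta schemes.

    import math

    # IMEX Euler: explicit piece at the old state, implicit piece solved for
    # the new state. Stable here even though dt*k = 10 would blow up a fully
    # explicit Euler step (which requires dt < 2/k).
    k, dt, u, t = 1.0e3, 0.01, 1.0, 0.0
    for _ in range(100):
        # u_new = u + dt*(-k*u_new + sin(t))  =>  solve for u_new:
        u = (u + dt * math.sin(t)) / (1.0 + dt * k)
        t += dt
    print(f"u(t={t:.2f}) = {u:.6f}")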
Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity
NASA Astrophysics Data System (ADS)
Miah, Md Mamun
This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment for fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous, saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem, and the coupled poroelasticity capability is validated by benchmarking against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault under the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced-earthquake simulations show a delayed response in earthquake nucleation, attributed to the increased total stress in the domain and to not accounting for pressure on the fault; this issue is resolved in the final chapter, which simulates a single-event earthquake dynamic rupture. Those simulation results show that fluid pressure promotes slip nucleation and subsequent crack propagation, which is confirmed by a sensitivity analysis showing that increasing the injection-well distance delays slip nucleation and rupture propagation on the fault.
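A toy 1D version of this sequential (operator-splitting) coupling is sketched below: each step, a flow solve hands its pressure field to a mechanics update via the effective-stress relation sigma_eff = sigma_total - alpha*p. The structure and constitutive choices here are illustrative placeholders, not TOUGH2 or PyLith internals.

    import numpy as np

    n, dx, dt = 50, 10.0, 1.0
    p = np.zeros(n)
    p[0] = 5.0e6                 # fixed injection pressure at the well, Pa
    sigma_total, biot = 30.0e6, 0.8
    diffusivity = 1.0            # hydraulic diffusivity, m^2/s

    for step in range(500):
        # flow sub-problem: explicit pressure diffusion (boundaries held fixed)
        lap = (np.roll(p, -1) - 2.0 * p + np.roll(p, 1)) / dx**2
        p[1:-1] += dt * diffusivity * lap[1:-1]
        # mechanics sub-problem: effective stress from the updated pressure;
        # a permeability model would feed back into the next flow step here.
        sigma_eff = sigma_total - biot * p

    print(f"effective stress at the well: {sigma_eff[0] / 1e6:.1f} MPa")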
ERIC Educational Resources Information Center
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementing new applications and inversion algorithms. The use of MATLAB and its libraries makes the code more compact and user-friendly. The code has been validated on several published models, and to demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
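The modular loop the abstract describes (forward model, data residual, sensitivities, regularization) has the familiar regularized Gauss-Newton shape. A skeleton on a toy linear problem follows, with the real MT physics replaced by a random matrix; it shows the module boundaries, not AP3DMT's algorithms.

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.normal(size=(20, 10))       # toy forward operator ("modeling" module)
    m_true = rng.normal(size=10)
    d_obs = G @ m_true + 0.01 * rng.normal(size=20)

    m, lam = np.zeros(10), 0.1          # starting model, regularization weight
    for it in range(10):
        r = d_obs - G @ m               # data functional (residual) module
        J = G                           # sensitivity (Jacobian) module
        # regularization module: solve (J^T J + lam*I) dm = J^T r
        dm = np.linalg.solve(J.T @ J + lam * np.eye(10), J.T @ r)
        m += dm
        if np.linalg.norm(dm) < 1e-8:
            break
    print(f"stopped after {it + 1} iterations, "
          f"misfit = {np.linalg.norm(d_obs - G @ m):.4f}")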
Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large ones. Message exchange speed is further improved by using non-blocking communication for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixes, the new version is more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.
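The non-blocking pattern referred to here posts sends and receives up front, overlaps them with interior work, and waits only when the remote values are needed. Below is a sketch of a 1D halo exchange with mpi4py, an illustration of the pattern rather than TOUGH2_MP source; run with, e.g., mpiexec -n 4 python halo.py.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    local = np.full(100, float(rank))
    recv_l, recv_r = np.empty(1), np.empty(1)

    # Post all communications before doing any interior work.
    reqs = [comm.Irecv(recv_l, source=left, tag=0),
            comm.Irecv(recv_r, source=right, tag=1),
            comm.Isend(local[:1], dest=left, tag=1),
            comm.Isend(local[-1:], dest=right, tag=0)]

    interior = local[1:-1].sum()   # useful work overlapped with communication
    MPI.Request.Waitall(reqs)      # block only when halo values are required
    print(f"rank {rank}: got {recv_l[0]:.0f} from left, {recv_r[0]:.0f} from right")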
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes, XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD for the study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study pedestal-ELM cycles.
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling-beam illumination. The wobbling beam introduces a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in the future.
New version program summary
Program title: OK3
Catalogue identifier: ADST_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 221 517
No. of bytes in distributed program, including test data, etc.: 2 471 015
Distribution format: tar.gz
Programming language: C++
Computer: PC (Pentium 4, 1 GHz or more recommended)
Operating system: Windows or UNIX
RAM: 2048 MBytes
Classification: 19.7
Catalogue identifier of previous version: ADST_v2_0
Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
Does the new version supersede the previous version?: Yes
Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 were developed to simulate heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2,3]. Wobbling-beam illumination is important for smoothing the beam energy deposition non-uniformity in HIF, so that a uniform target implosion is realized and sufficient fusion output energy is released.
Solution method: OK3 works on the basis of OK1 and OK2 [2,3]. The code simulates multi-beam illumination of a target with arbitrary shape and structure, including the beam wobbling function.
Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important being the beam wobbling function.
Summary of revisions: In OK3, beams are subdivided into many bunches, and the displacement of each bunch center from the initial beam direction is calculated. OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially for very complicated mesh structures with large internal holes. The target temperature rise during the time of energy deposition is accounted for. Some procedures are improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary.
New procedures in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and the coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]; it is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow the beamlet number to vary from bunch to bunch during the deposition. Procedures ijkSp( ) and Hole( ) are modified to perform faster. Procedures Espl( ) and ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation of the energy deposition non-uniformity; it is not included in code OK2 because of its limited applicability (spherical targets only) and was taken from code OK1 and modified to work with OK3 (a short sketch of these two measures follows this entry).
Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU.
References:
[1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377.
[2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172.
[3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
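As referenced above, the two non-uniformity measures reported by procedure SD( ) have standard definitions: the relative RMS deviation and the relative peak-to-valley deviation of the deposited energy. The sketch below uses those conventional formulas, assumed rather than copied from the OK3 source, on a hypothetical deposition map.

    import numpy as np

    def rms_deviation(e):
        """Relative root-mean-square deviation of a deposition field."""
        return float(np.sqrt(np.mean((e - e.mean()) ** 2)) / e.mean())

    def ptv_deviation(e):
        """Relative peak-to-valley deviation of a deposition field."""
        return float((e.max() - e.min()) / e.mean())

    # Hypothetical deposition map over a spherical mesh (polar x azimuthal bins):
    rng = np.random.default_rng(1)
    deposition = 1.0 + 0.02 * rng.standard_normal((60, 120))
    print(f"RMS = {100 * rms_deviation(deposition):.2f} %, "
          f"PTV = {100 * ptv_deviation(deposition):.2f} %")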
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs, including a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
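The chain-rule machinery that ADIFOR automates by source transformation can be illustrated with forward-mode AD over dual numbers. This operator-overloading toy is a different mechanism from ADIFOR's generated FORTRAN, but it propagates exactly the same derivative information.

    import math

    class Dual:
        """Number carrying a value and its derivative (a 'dual number')."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

    # Exact derivative of f(x) = x*sin(x) + 3x at x = 2, seeded with dx/dx = 1:
    x = Dual(2.0, 1.0)
    y = x * sin(x) + 3 * x
    print(y.val, y.dot)  # y.dot equals sin(2) + 2*cos(2) + 3 exactly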
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational code HYDRA-IBRAE/LM are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation, during anticipated operational occurrences, and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and explains the need for a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of nuclear power plants (NPPs) equipped with the BN-1200 and BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and model development in order to be correctly described by the system thermal-hydraulic code. Information on the functionality of the code is provided: the thermal-hydraulic two-phase model, the properties of the sodium and lead coolants, the closure equations for simulation of heat and mass exchange processes, the models describing the processes that take place during steam-generator tube rupture, etc. The article gives a brief overview of the usability of the code, including a description of the support documentation and the supply package, as well as the possibility of exploiting modern computing technologies such as parallel computations. The paper shows the current state of verification and validation of the code and presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, the introduction of new models, and enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible in the near future to analyze the safety of prospective NPP projects at a qualitatively higher level.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER, RBMK, and other nuclear systems. The neutronic module in this code system is KORAT-3D, which is also one of its most computationally intensive components. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve more than 100,000 nodes. An evaluation of KORAT-3D performance was recently undertaken on the Argonne National Laboratory (ANL) IBM SP parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad
1995-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and the IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code, and the feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can be parallelized effectively on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs relative to changes in the number of grid points. As the number of processors increases, speedups remain below linear even with optimized (machine-dependent library) routines, because the computational cost is dominated by the FFT routine, which yields less-than-ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is a production code with a substantial user base, and it is export controlled, which makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated, open-source code base, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production-code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis: its computational kernels are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
Large-Scale Computation of Nuclear Magnetic Resonance Shifts for Paramagnetic Solids Using CP2K.
Mondal, Arobendo; Gaultois, Michael W; Pell, Andrew J; Iannuzzi, Marcella; Grey, Clare P; Hutter, Jürg; Kaupp, Martin
2018-01-09
Large-scale computations of nuclear magnetic resonance (NMR) shifts for extended paramagnetic solids (pNMR) are reported using the highly efficient Gaussian-augmented plane-wave implementation of the CP2K code. Combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals, contact, pseudocontact, and orbital-shift contributions to pNMR shifts are accessible. Due to the efficient and highly parallel performance of CP2K, a wide variety of materials with large unit cells can be studied with extended Gaussian basis sets. Validation of the various approaches for the different contributions to pNMR shifts is done first for molecules in a large supercell, in comparison with typical quantum-chemical codes. This is then extended to a detailed study of g-tensors for extended solid transition-metal fluorides and for a series of complex lithium vanadium phosphates. Finally, lithium pNMR shifts are computed for Li3V2(PO4)3, for which detailed experimental data are available. This has allowed an in-depth study of different approaches (e.g., full periodic versus incremental cluster computations of g-tensors, and different functionals and basis sets for hyperfine computations) as well as a thorough analysis of the different contributions to the pNMR shifts. This study paves the way for a more widespread computational treatment of NMR shifts for paramagnetic materials.
Aerodynamic Interference Due to MSL Reaction Control System
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Schoenenberger, Mark; Scallion, William I.; VanNorman, John W.; Novak, Luke A.; Tang, Chun Y.
2009-01-01
An investigation of the effectiveness of the reaction control system (RCS) of the Mars Science Laboratory (MSL) entry capsule during atmospheric flight has been conducted. The investigation was motivated by the fact that MSL is designed to fly a lifting, actively guided entry with hypersonic bank maneuvers; therefore an understanding of RCS effectiveness is required. In the course of the study, several jet configurations were evaluated using the Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code, the Data Parallel Line Relaxation (DPLR) code, the Fully Unstructured 3D (FUN3D) code and the Overset Grid Flowsolver (OVERFLOW) code. Computations indicated that some of the proposed configurations might induce aero-RCS interactions sufficient to impede and even overwhelm the intended control torques, and that the maximum potential for aero-RCS interference exists around peak dynamic pressure along the trajectory. The present analysis relies largely on computational methods; ground testing, flight data and computational analyses are all required to fully understand the problem. At the time of this writing, experimental work spanning the Mach number range 2.5 through 4.5 has been completed and used to establish preliminary levels of confidence for the computations. As a result of the present work, a final RCS configuration has been designed to minimize aero-interference effects; it is the design baseline for the MSL entry capsule.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical-geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code were to compare experimental measurements with computer simulations, upgrade the model as appropriate, and investigate various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consists of a sodium pressure-loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells, which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady-state performance of systems of cells.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to extensive use of spatial prediction. Unfortunately, the adoption of such a large set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. This paper presents a complexity-reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method, with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.
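The pruning idea can be sketched independently of the codec: estimate a probability for each candidate prediction mode (the paper obtains these with belief propagation over neighbouring blocks; a fixed toy distribution stands in below) and run the expensive rate-distortion test only on the most probable few.

    import random
    random.seed(0)

    def rd_cost(mode):
        """Stand-in for the expensive RD evaluation of one prediction mode."""
        return abs(mode - 4) + random.random()

    # Hypothetical mode probabilities (in the paper, belief-propagation output):
    mode_prob = {0: .22, 1: .20, 2: .15, 3: .12, 4: .10,
                 5: .08, 6: .06, 7: .04, 8: .03}

    K = 4  # RD-test only the top-K modes instead of all 9
    candidates = sorted(mode_prob, key=mode_prob.get, reverse=True)[:K]
    best = min(candidates, key=rd_cost)
    print(f"RD-tested {K}/9 modes, selected mode {best}")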
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Spiegler, E.
1977-01-01
The research activity focused on two main tasks: (1) further development of the SCRAM program and, in particular, the addition of a procedure for modeling the mechanism by which the flow internally adjusts in response to the imposed thermal load across the combustor; and (2) development of a numerical code for computing the variation of concentrations throughout a turbulent field in which finite-rate reactions occur. The code also includes an estimate of the effect of the phenomenon called 'unmixedness'.
Numerical computation of viscous flow around bodies and wings moving at supersonic speeds
NASA Technical Reports Server (NTRS)
Tannehill, J. C.
1984-01-01
Research in aerodynamics is discussed: the development of equilibrium air curve fits; computation of hypersonic rarefied leading-edge flows; computation of 2-D and 3-D blunt-body laminar flows with an impinging shock; development of a two-dimensional or axisymmetric real-gas blunt-body code; a study of an over-relaxation procedure for the MacCormack finite-difference scheme; computation of 2-D blunt-body turbulent flows with an impinging shock; computation of supersonic viscous flow over delta wings at high angles of attack; and computation of the Space Shuttle Orbiter flowfield.
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data, and the effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of the codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.