DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.
1998-04-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
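The core of a region-level GDP-loss estimate of this kind can be sketched in a few lines. The sketch below is purely illustrative: the per-county daily GDP figures, disruption durations, and the function name are invented for demonstration and are not REAcct or MACCS data or interfaces.

```python
# Hypothetical sketch of a region-level GDP-loss calculation in the spirit
# of an Input-Output-informed estimate: each affected county contributes its
# daily GDP multiplied by its days of business disruption. All figures are
# illustrative, not REAcct or MACCS values.

def gdp_loss(county_gdp_per_day, disruption_days):
    """Total GDP loss: sum over counties of daily GDP x days disrupted."""
    return sum(county_gdp_per_day[c] * disruption_days[c]
               for c in county_gdp_per_day)

counties = {"A": 2.5e6, "B": 1.0e6}   # daily GDP in dollars (illustrative)
days = {"A": 30, "B": 10}             # days of business disruption (illustrative)
loss = gdp_loss(counties, days)       # 75e6 + 10e6 = 85e6 dollars
```

In a fuller Input-Output treatment, indirect losses in supplier and customer sectors would be added on top of these direct terms via inter-industry multipliers.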
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-02-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in-situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols and greenhouse gases, and is based on the Integrated Forecast System of the ECMWF. The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past three years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
Validation of reactive gases and aerosols in the MACC global analysis and forecast system
NASA Astrophysics Data System (ADS)
Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.
2015-11-01
The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols, and greenhouse gases, and is based on the Integrated Forecasting System of the European Centre for Medium-Range Weather Forecasts (ECMWF). The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past 3 years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high-pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.
Prognostic Value of MACC1 in Digestive System Neoplasms: A Systematic Review and Meta-Analysis
Wu, Zhenzhen; Zhou, Rui; Su, Yuqi; Sun, Li; Liao, Yulin; Liao, Wangjun
2015-01-01
Metastasis associated in colon cancer 1 (MACC1), a newly identified oncogene, has been associated with poor survival of cancer patients in multiple studies. However, the prognostic value of MACC1 in digestive system neoplasms has yet to be verified systematically. Therefore, we aimed to provide further evidence on this topic by systematic review and meta-analysis. A literature search was conducted in multiple databases, and eligible studies analyzing survival data and MACC1 expression were included for meta-analysis. The hazard ratio (HR) for clinical outcome was chosen as the effect measure of interest. According to our inclusion criteria, 18 studies with a total of 2,948 patients were identified. Pooled HRs indicated that high MACC1 expression significantly correlates with poorer overall survival (OS) in patients with digestive system neoplasms (HR = 1.94; 95% CI: 1.49–2.53) as well as with poorer relapse-free survival (HR = 1.94, 95% CI: 1.33–2.82). The results of subgroup analyses categorized by methodology, anatomic structure, and cancer subtype for pooled OS were all consistent with the overall pooled HR for OS. No publication bias was detected according to the test of funnel plot asymmetry and Egger's test. In conclusion, high MACC1 expression may serve as a prognostic biomarker to guide individualized management in clinical practice for digestive system neoplasms. PMID:26090393
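The pooled HR reported above comes from inverse-variance weighting of study-level hazard ratios on the log scale. A minimal sketch of that core step follows; the three (HR, CI) tuples are invented examples, not the 18 included studies, and this is the simple fixed-effect variant rather than any specific model the authors used.

```python
import math

# Illustrative fixed-effect inverse-variance pooling of hazard ratios.
# Each study's SE(log HR) is recovered from its 95% CI width, then log HRs
# are averaged with weights 1/SE^2. Study values below are invented.

def pool_hr(hrs_with_ci):
    """Pooled HR from (hr, ci_low, ci_high) tuples with 95% CIs."""
    num = den = 0.0
    for hr, lo, hi in hrs_with_ci:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_hr
        den += w
    return math.exp(num / den)

pooled = pool_hr([(1.8, 1.2, 2.7), (2.2, 1.4, 3.5), (1.6, 1.0, 2.6)])
# pooled sits between the study HRs, pulled toward the more precise studies
```

A random-effects model (e.g. DerSimonian-Laird) would additionally inflate each study's variance by a between-study heterogeneity term before weighting.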
Rohr, U-P; Herrmann, P; Ilm, K; Zhang, H; Lohmann, S; Reiser, A; Muranyi, A; Smith, J; Burock, S; Osterland, M; Leith, K; Singh, S; Brunhoeber, P; Bowermaster, R; Tie, J; Christie, M; Wong, H-L; Waring, P; Shanmugam, K; Gibbs, P; Stein, U
2017-08-01
We assessed the novel MACC1 gene to further stratify stage II colon cancer patients with proficient mismatch repair (pMMR). Four cohorts with 596 patients were analyzed: Charité 1 discovery cohort was assayed for MACC1 mRNA expression and MMR in cryo-preserved tumors. Charité 2 comparison cohort was used to translate MACC1 qRT-PCR analyses to FFPE samples. In the BIOGRID 1 training cohort MACC1 mRNA levels were related to MACC1 protein levels from immunohistochemistry in FFPE sections; also analyzed for MMR. Chemotherapy-naïve pMMR patients were stratified by MACC1 mRNA and protein expression to establish risk groups based on recurrence-free survival (RFS). Risk stratification from BIOGRID 1 was confirmed in the BIOGRID 2 validation cohort. Pooled BIOGRID datasets produced a best effect-size estimate. In BIOGRID 1, using qRT-PCR and immunohistochemistry for MACC1 detection, pMMR/MACC1-low patients had a lower recurrence probability versus pMMR/MACC1-high patients (5-year RFS of 92% and 67% versus 100% and 68%, respectively). In BIOGRID 2, longer RFS was confirmed for pMMR/MACC1-low versus pMMR/MACC1-high patients (5-year RFS of 100% versus 90%, respectively). In the pooled dataset, 6.5% of patients were pMMR/MACC1-low with no disease recurrence, resulting in a 17% higher 5-year RFS [95% confidence interval (CI) (12.6%-21.3%)] versus pMMR/MACC1-high patients (P = 0.037). Outcomes were similar for pMMR/MACC1-low and deficient MMR (dMMR) patients (5-year RFS of 100% and 96%, respectively). MACC1 expression stratifies colon cancer patients with unfavorable pMMR status. Stage II colon cancer patients with pMMR/MACC1-low tumors have a similar favorable prognosis to those with dMMR, with potential implications for the role of adjuvant therapy.
NASA Astrophysics Data System (ADS)
Engebretson, M. J.; Valentic, T. A.; Stehle, R. H.; Hughes, W. J.
2004-05-01
The Magnetometer Array for Cusp and Cleft Studies (MACCS) is a two-dimensional array of eight fluxgate magnetometers that was established in 1992-1993 in the Eastern Canadian Arctic from 75° to over 80° MLAT to study electrodynamic interactions between the solar wind and Earth's magnetosphere and high-latitude ionosphere. A ninth site in Nain, Labrador, extends coverage down to 66° between existing Canadian and Greenland stations. Originally designed as part of NSF's GEM (Geospace Environment Modeling) Program, MACCS has contributed to the study of transients and waves at the magnetospheric boundary and in the near-cusp region, as well as to large cooperative studies of ionospheric convection and substorm processes. Because of the limitations of the existing telephone lines to each site, it has not been possible to access MACCS data promptly and economically; instead, each month's collected data are recorded and mailed to the U.S. for processing and eventual posting on a publicly accessible web site, http://space.augsburg.edu/space. As part of its recently renewed funding, NSF has supported the development of a near-real-time data transport system using the Iridium satellite network, which will be implemented at two MACCS sites in summer 2004. At the core of the new MACCS communications system is the Data Transport Network, software developed with NSF-ITR funding to automate the transfer of scientific data from remote field stations over unreliable, bandwidth-constrained network connections. The system utilizes a store-and-forward architecture based on sending data files as attachments to Usenet messages. This scheme not only isolates the instruments from network outages, but also provides a consistent framework for organizing and accessing multiple data feeds. Client programs are able to subscribe to data feeds to perform tasks such as system health monitoring, data processing, web page updates, and e-mail alerts.
The MACCS sites will employ the Data Transport Network on a small local Linux-based computer connected to an Iridium transceiver. Between 3 and 5 MB of data per day will be collected from the magnetometers and delivered in near real time for automatic distribution to modelers and index developers. More information about the Data Transport Network can be found at http://transport.sri.com/TransportDevel.
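The store-and-forward scheme described above can be sketched as a local queue that only discards a file once a send succeeds, so an outage never loses data. The sketch is a generic illustration under that assumption; the file names, the `send_file` callable, and the retry policy are invented stand-ins for the real Usenet-attachment transport.

```python
from collections import deque

# Minimal store-and-forward sketch: queued files survive network outages
# because a file is removed from the queue only after a successful send.
# send_file is an assumed stand-in for the real transport layer.

def drain_queue(queue, send_file, max_attempts=3):
    """Try to send each queued file; keep failures queued for a later pass."""
    delivered = []
    for _ in range(len(queue)):
        name = queue.popleft()
        for _attempt in range(max_attempts):
            if send_file(name):          # True on a successful transfer
                delivered.append(name)
                break
        else:
            queue.append(name)           # store until the next link window

    return delivered

q = deque(["mag_20040501.dat", "mag_20040502.dat"])
flaky = iter([False, True, True])        # first attempt drops, then link is up
sent = drain_queue(q, lambda _name: next(flaky))
# both files delivered despite the initial failure; queue drains to empty
```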
Juneja, Manisha; Kobelt, Dennis; Walther, Wolfgang; Voss, Cynthia; Smith, Janice; Specker, Edgar; Neuenschwander, Martin; Gohlke, Björn-Oliver; Dahlmann, Mathias; Radetzki, Silke; Preissner, Robert; von Kries, Jens Peter; Schlag, Peter Michael; Stein, Ulrike
2017-06-01
MACC1 (Metastasis Associated in Colon Cancer 1) is a key driver and prognostic biomarker for cancer progression and metastasis in a large variety of solid tumor types, particularly colorectal cancer (CRC). However, no MACC1 inhibitors have been identified yet. Therefore, we aimed to target MACC1 expression using a luciferase reporter-based high-throughput screen of the ChemBioNet library of more than 30,000 compounds. The small molecules lovastatin and rottlerin emerged as the most potent MACC1 transcriptional inhibitors. They remarkably inhibited MACC1 promoter activity and expression, resulting in reduced cell motility. Lovastatin impaired the binding of the transcription factors c-Jun and Sp1 to the MACC1 promoter, thereby inhibiting MACC1 transcription. Most importantly, in CRC-xenografted mice, lovastatin and rottlerin restricted MACC1 expression and liver metastasis. This is, to the best of our knowledge, the first identification of inhibitors restricting cancer progression and metastasis via the novel target MACC1. This drug repositioning might be of therapeutic value for CRC patients.
Air Support Control Officer Individual Position Training Simulation
2017-06-01
Analysis, design, development, implementation, evaluation; ASCO, Air support control officer; ASLT, Air support liaison team; ASNO, Air support net operator; ... Instructional system design; LSTM, Long short-term memory; MACCS, Marine Air Command and Control System; MAGTF, Marine Air Ground Task Force; MASS, Marine Air ... information to designated MACCS agencies. ASCOs play an important part in facilitating the safe and successful conduct of air operations in DASC-controlled
MACC1 - a novel target for solid cancers.
Stein, Ulrike
2013-09-01
The metastatic dissemination of primary tumors is directly linked to patient survival in many tumor entities. The previously undescribed gene metastasis-associated in colon cancer 1 (MACC1) was discovered by genome-wide analyses in colorectal cancer (CRC) tissues. MACC1 is a tumor stage-independent predictor of CRC metastasis linked to metastasis-free survival. In this review, the discovery of MACC1 is briefly presented. The overwhelming confirmation of these data is then reviewed, supporting MACC1 as a remarkable new biomarker for disease prognosis and prediction of therapy response in CRC and in a variety of additional solid cancers. Lastly, the potential clinical utility of MACC1 as a target for prevention or restriction of tumor progression and metastasis is envisioned. MACC1 has been identified as a prognostic biomarker in a variety of solid cancers. MACC1 correlates with tumor formation and progression, development of metastases, and patient survival, representing a decisive driver of tumorigenesis and metastasis. MACC1 has also been demonstrated to be of predictive value for therapy response. MACC1 is a promising therapeutic target for anti-tumor and anti-metastatic intervention strategies in solid cancers. Its clinical utility, however, must be demonstrated in clinical trials.
Wang, Lin; Lin, Li; Chen, Xi; Sun, Li; Liao, Yulin; Huang, Na; Liao, Wangjun
2015-01-01
Vasculogenic mimicry (VM) is a blood supply modality that is strongly associated with the epithelial-mesenchymal transition (EMT), TWIST1 activation and tumor progression. We previously reported that metastasis-associated in colon cancer-1 (MACC1) induced the EMT and was associated with a poor prognosis of patients with gastric cancer (GC), but it remains unknown whether MACC1 promotes VM and regulates the TWIST signaling pathway in GC. In this study, we investigated MACC1 expression and VM by immunohistochemistry in 88 patients with stage IV GC, and also investigated the role of TWIST1 and TWIST2 in MACC1-induced VM by using nude mice with GC xenografts and GC cell lines. We found that the VM density was significantly increased in the tumors of patients who died of GC and was positively correlated with MACC1 immunoreactivity (p < 0.05). The 3-year survival rate was only 8.6% in patients whose tumors showed double positive staining for MACC1 and VM, whereas it was 41.7% in patients whose tumors were negative for both MACC1 and VM. Moreover, nuclear expression of MACC1, TWIST1, and TWIST2 was upregulated in GC tissues compared with matched adjacent non-tumorous tissues (p < 0.05). Overexpression of MACC1 increased TWIST1/2 expression and induced typical VM in the GC xenografts of nude mice and in GC cell lines. MACC1 enhanced TWIST1/2 promoter activity and facilitated VM, while silencing of TWIST1 or TWIST2 inhibited VM. Hepatocyte growth factor (HGF) increased the nuclear translocation of MACC1, TWIST1, and TWIST2, while a c-Met inhibitor reduced these effects. These findings indicate that MACC1 promotes VM in GC by regulating the HGF/c-Met-TWIST1/2 signaling pathway, which means that MACC1 and this pathway are potential new therapeutic targets for GC. PMID:25895023
MACC1 regulates Fas mediated apoptosis through STAT1/3 - Mcl-1 signaling in solid cancers.
Radhakrishnan, Harikrishnan; Ilm, Katharina; Walther, Wolfgang; Shirasawa, Senji; Sasazuki, Takehiko; Daniel, Peter T; Gillissen, Bernhard; Stein, Ulrike
2017-09-10
MACC1 was identified as a novel player in cancer progression and metastasis, but its role in death receptor-mediated apoptosis is still unexplored. We show that MACC1 knockdown sensitizes cancer cells to death receptor-mediated apoptosis. For the first time, we provide evidence for STAT signaling as a MACC1 target. MACC1 knockdown drastically reduced STAT1/3 activating phosphorylation, thereby regulating the expression of its apoptosis targets Mcl-1 and Fas. STAT signaling inhibition by the JAK1/2 inhibitor ruxolitinib mimicked MACC1 knockdown-mediated molecular signatures and apoptosis sensitization to Fas activation. Despite the increased Fas expression, the reduced Mcl-1 expression was instrumental in apoptosis sensitization. This reduced Mcl-1-mediated apoptosis sensitization was Bax and Bak dependent. MACC1 knockdown also increased TRAIL-induced apoptosis. MACC1 overexpression enhanced STAT1/3 phosphorylation and increased Mcl-1 expression, which was abrogated by ruxolitinib. The central role of Mcl-1 was strengthened by the resistance of Mcl-1 overexpressing cells to apoptosis induction. The clinical relevance of Mcl-1 regulation by MACC1 was supported by their positive expression correlation in patient-derived tumors. Altogether, we reveal a novel death receptor-mediated apoptosis regulatory mechanism by MACC1 in solid cancers through modulation of the STAT1/3-Mcl-1 axis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, F.T.; Young, M.L.; Miller, L.A.
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.
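The validation step described above, sampling elicited input distributions and propagating them through the code model, can be illustrated with a toy Monte Carlo run through a ground-level centerline Gaussian plume formula. Everything in the sketch is an assumption for illustration: the chosen distributions, parameter values, and dispersion coefficients are invented and are not the elicited MACCS/COSYMA distributions.

```python
import math
import random

# Toy Monte Carlo propagation of uncertain dispersion inputs through a
# ground-level centerline Gaussian plume concentration formula. The input
# distributions and sigma values are invented for illustration only.

def plume_centerline(Q, u, sigma_y, sigma_z):
    """Ground-level centerline concentration for a ground release (g/m^3)."""
    return Q / (math.pi * u * sigma_y * sigma_z)

def propagate(n, seed=0):
    """Sample inputs, push each sample through the model, return the median."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        Q = rng.lognormvariate(0.0, 0.3)   # release rate, g/s (assumed lognormal)
        u = rng.uniform(2.0, 8.0)          # wind speed, m/s (assumed uniform)
        samples.append(plume_centerline(Q, u, sigma_y=50.0, sigma_z=20.0))
    samples.sort()
    return samples[len(samples) // 2]      # median output concentration

median_c = propagate(10_000)
```

Comparing such propagated output distributions against the experts' aggregated distributions is exactly the consistency check the abstract reports for the wet deposition variables.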
Zhao, Yang; Dai, Cong; Wang, Meng; Kang, Huafeng; Lin, Shuai; Yang, Pengtao; Liu, Xinghan; Liu, Kang; Xu, Peng; Zheng, Yi; Li, Shanli; Dai, Zhijun
2016-01-01
Metastasis-associated in colon cancer-1 (MACC1) has been reported to be overexpressed in diverse human malignancies, and increasing evidence suggests that its overexpression is associated with the development and progression of many human tumors. However, the prognostic and clinicopathological value of MACC1 in colorectal cancer remains inconclusive. Therefore, we conducted this meta-analysis to investigate the effect of MACC1 overexpression on clinicopathological features and survival outcomes in colorectal cancer. PubMed, CNKI, and Wanfang databases were searched for relevant articles published up to December 2015. Correlations of MACC1 expression level with overall survival (OS), disease-free survival (DFS), and clinicopathological features were analyzed. In this meta-analysis, fifteen studies with a total of 2,161 colorectal cancer patients were included. Our results showed that MACC1 overexpression was significantly associated with poorer OS and DFS. Moreover, MACC1 overexpression was significantly associated with gender, localization, TNM stage, T stage, and N stage. Together, our meta-analysis showed that MACC1 overexpression was significantly associated with poor survival rates, regional invasion, and lymph-node metastasis. MACC1 expression level can serve as a novel prognostic factor in colorectal cancer patients. PMID:27542234
Chen, Shuo; Zong, Zhi-Hong; Wu, Dan-Dan; Sun, Kai-Xuan; Liu, Bo-Liang; Zhao, Yang
2017-04-01
Metastasis-associated in colon cancer-1 (MACC1) has recently been identified as a key regulator of progression in many cancers. However, its role in endometrial carcinoma (EC) remains unknown. MACC1 expression was determined in EC and normal endometrial tissues by immunohistochemistry. EC cell phenotypes and related molecules were examined after MACC1 downregulation by small interfering RNA (siRNA) or microRNA (miRNA) transfection. We found that MACC1 was more highly expressed in EC tissues than in normal samples, and its expression differed significantly by FIGO stage (I and II vs. III and IV), depth of myometrial infiltration (<1/2 vs. ≥1/2), and lymph node metastasis (negative vs. positive); moreover, MACC1 overexpression was correlated with lower cumulative and relapse-free survival rates. MACC1 downregulation by siRNA transfection significantly induced G1 phase arrest and suppressed EC cell proliferation, migration, and invasion. In addition, MACC1 downregulation also reduced expression of Cyclin D1, Cyclin-dependent Kinase 2 (CDK2), N-cadherin (N-Ca), α-SMA, matrix metalloproteinase 2 (MMP2), and MMP9, but increased expression of E-cadherin (E-Ca). Bioinformatic predictions and dual-luciferase reporter assays indicate that MACC1 is a possible target of miR-23b. MiR-23b overexpression reduced MACC1 expression in vitro, induced G1 phase arrest, and suppressed cell proliferation, migration, and invasion. MiR-23b transfection also reduced Cyclin D1, CDK2, N-Ca, α-SMA, MMP2, and MMP9 expression, but increased E-Ca expression. Furthermore, the nude mouse xenograft assay showed that miR-23b overexpression suppressed tumour growth by downregulating MACC1 expression. Taken together, our results demonstrate for the first time that MACC1 may be an important new diagnostic and therapeutic target in endometrial carcinoma.
Investigation of MACC1 Gene Expression in Head and Neck Cancer and Cancer Stem Cells.
Evran, Ebru; Şahin, Hilal; Akbaş, Kübra; Çiğdem, Sadik; Gündüz, Esra
2016-12-01
By investigating the MACC1 gene (metastasis-associated in colon cancer 1) in cancer stem cells (CSC) resistant to chemotherapy and in cancer cells (CS) sensitive to chemotherapy, we determined a steady expression in both cell types in head and neck cancer. In light of this result, we examined whether this gene could be a candidate gene for chemotherapy. According to the literature, the MACC1 gene shows clear expression in head and neck cancer cells [1]. Here we examined MACC1 expression in CSC and investigated it as a possible biomarker. Our experiments were performed in the UT-SCC-74 primary head and neck cancer cell line. We examined MACC1 gene expression by real-time PCR in both isolated CSC and CS. MACC1 expression in cancer stem cells showed a two-fold increase compared with cancer cells. Based on the positive expression of MACC1 in both CS and CSC, this gene may serve as a potential biomarker in head and neck cancer. By comparing the results of this study with the novel features of MACC1, two important hypotheses could be examined. The first hypothesis is that MACC1 is a possible transcription factor in colon cancer, which influences high expression in head and neck CSC and affects the expression of the three biomarkers of the CSC control group. The second hypothesis is that the positive expression of MACC1 in patients with a poor prognosis of tongue cancer, which belongs to the head and neck cancer types, drives a faster development of CSC into cancer cells.
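A fold change like the two-fold increase reported above is conventionally computed from real-time PCR Ct values with the 2^-ddCt method. The sketch below is a generic illustration of that method; the Ct values and the reference gene (GAPDH) are invented examples, not data from this study.

```python
# Hedged sketch of the standard 2^-ddCt fold-change calculation for
# comparing qRT-PCR expression between two cell populations. All Ct values
# below are invented; GAPDH as reference gene is an assumption.

def fold_change(ct_target_a, ct_ref_a, ct_target_b, ct_ref_b):
    """Expression of sample A relative to sample B via the 2^-ddCt method."""
    d_ct_a = ct_target_a - ct_ref_a      # normalize target to reference gene
    d_ct_b = ct_target_b - ct_ref_b
    dd_ct = d_ct_a - d_ct_b              # difference of differences
    return 2.0 ** (-dd_ct)               # each Ct unit is one doubling

# CSC: MACC1 Ct 24, GAPDH Ct 18; bulk cells: MACC1 Ct 25, GAPDH Ct 18
fc = fold_change(24.0, 18.0, 25.0, 18.0)   # -> 2.0, i.e. a two-fold increase
```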
Gao, Yue-chun; Yu, Xian-peng; He, Ji-qiang; Chen, Fang
2012-01-01
To assess the value of the SYNTAX score to predict major adverse cardiac and cerebrovascular events (MACCE) among patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention. 190 patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention (PCI) with the Cypher Select drug-eluting stent were enrolled. The SYNTAX score and clinical SYNTAX score were retrospectively calculated. Our clinical endpoint was MACCE, a composite of death, nonfatal myocardial infarction (MI), stroke, and repeat revascularization. The value of the SYNTAX score and of the clinical SYNTAX score to predict MACCE was studied for each. 29 patients were observed to suffer from MACCE, accounting for 18.5% of the overall 190 patients. MACCE rates in the low (≤ 20.5), intermediate (21.0 - 31.0), and high (≥ 31.5) tertiles of the SYNTAX score were 9.1%, 16.2% and 30.9%, respectively. Both univariate and multivariate analyses showed that the SYNTAX score was an independent predictor of MACCE. MACCE rates in the low (≤ 19.5), intermediate (19.6 - 29.1), and high (≥ 29.2) tertiles of the clinical SYNTAX score were 14.9%, 9.8% and 30.6%, respectively. Both univariate and multivariate analyses showed that the clinical SYNTAX score was an independent predictor of MACCE. ROC analysis showed that both the SYNTAX score (AUC = 0.667, P = 0.004) and the clinical SYNTAX score (AUC = 0.636, P = 0.020) had predictive value for MACCE. Both the SYNTAX score and the clinical SYNTAX score could be independent risk predictors of MACCE among patients with three-vessel or left-main coronary artery disease undergoing percutaneous coronary intervention; however, the clinical SYNTAX score failed to show better predictive ability than the SYNTAX score in this group of patients.
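The tertile-based risk stratification this abstract describes can be sketched in a few lines. The SYNTAX-score cutoffs below are taken from the abstract; the example patients, function names, and resulting rates are hypothetical.

```python
# Sketch of tertile-based MACCE stratification (cutoffs from the abstract;
# patient data and function names are hypothetical illustrations).

def syntax_tertile(score):
    """Assign a SYNTAX score to the low/intermediate/high tertile."""
    if score <= 20.5:
        return "low"
    elif score <= 31.0:
        return "intermediate"
    return "high"

def macce_rate_by_tertile(patients):
    """patients: list of (syntax_score, had_macce) tuples -> rate per tertile."""
    counts = {}
    for score, had_macce in patients:
        group = syntax_tertile(score)
        n, events = counts.get(group, (0, 0))
        counts[group] = (n + 1, events + (1 if had_macce else 0))
    return {group: events / n for group, (n, events) in counts.items()}

patients = [(12.0, False), (18.5, True), (25.0, False),
            (28.0, True), (33.0, True), (40.0, True)]
print(macce_rate_by_tertile(patients))
```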
Loughlin, Daniel H; Macpherson, Alexander J; Kaufman, Katherine R; Keaveny, Brian N
2017-10-01
A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs are typically developed by sorting control technologies by their relative cost-effectiveness. Other potentially important abatement measures such as renewable electricity, energy efficiency, and fuel switching (RE/EE/FS) are often not incorporated into MACCs, as it is difficult to quantify their costs and abatement potential. In this paper, a U.S. energy system model is used to develop a MACC for nitrogen oxides (NOx) that incorporates both traditional controls and these additional measures. The MACC is decomposed by sector, and the relative cost-effectiveness of RE/EE/FS and traditional controls is compared. RE/EE/FS are shown to have the potential to increase emission reductions beyond what is possible when applying traditional controls alone. Furthermore, a portion of RE/EE/FS appear to be cost-competitive with traditional controls, making them a viable complement to traditional air pollutant controls for abating emissions.
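The MACC construction described here, sorting abatement measures by cost-effectiveness and accumulating the abated quantity, can be sketched as follows; the measure names and cost figures are illustrative assumptions, not values from the paper.

```python
# Illustrative MACC construction: sort measures by $/ton abated, then
# accumulate abatement. All names and numbers are hypothetical.

measures = [
    # (name, abatement in tons NOx, cost in $ per ton abated)
    ("SCR retrofit", 120.0, 2500.0),
    ("Low-NOx burners", 80.0, 1200.0),
    ("Energy efficiency", 60.0, 1800.0),
    ("Fuel switching", 40.0, 3900.0),
]

def build_macc(measures):
    """Return (name, cumulative_abatement, marginal_cost) steps, cheapest first."""
    curve = []
    cumulative = 0.0
    for name, abated, cost_per_ton in sorted(measures, key=lambda m: m[2]):
        cumulative += abated
        curve.append((name, cumulative, cost_per_ton))
    return curve

for name, cum, cost in build_macc(measures):
    print(f"{name:18s} cumulative {cum:6.1f} t at ${cost:,.0f}/t")
```

Plotting marginal cost against cumulative abatement from these steps yields the familiar staircase-shaped MACC.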
Atmospheric Science Data Center
2016-11-25
Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) is an intensive ... study area encompasses Texas and the northwestern Gulf of Mexico during July, August, September, and October, 2006. The Multi-angle ...
Burock, Susen; Herrmann, Pia; Wendler, Ina; Niederstrasser, Markus; Wernecke, Klaus-Dieter; Stein, Ulrike
2015-01-01
AIM: To evaluate the diagnostic and prognostic value of circulating Metastasis Associated in Colon Cancer 1 (MACC1) transcripts in plasma of gastric cancer patients. METHODS: We provide for the first time a blood-based assay for transcript quantification of the metastasis inducer MACC1 in a prospective study of gastric cancer patient plasma. MACC1 is a strong prognostic biomarker for tumor progression and metastasis in a variety of solid cancers. We conducted a study to define the diagnostic and prognostic power of MACC1 transcripts using 76 plasma samples from gastric cancer patients, either newly diagnosed with gastric cancer or newly diagnosed with metachronous metastasis of gastric cancer, as well as follow-up patients. Findings were controlled using plasma samples from 54 tumor-free volunteers. Plasma was separated, RNA was isolated, and levels of MACC1 as well as S100A4 transcripts were determined by quantitative RT-PCR. RESULTS: Based on the levels of circulating MACC1 transcripts in plasma, we significantly discriminated between tumor-free volunteers and gastric cancer patients (P < 0.001). Levels of circulating MACC1 transcripts were increased in gastric cancer patients of each disease stage, compared to tumor-free volunteers: patients with tumors without metastasis (P = 0.005), with synchronous metastasis (P = 0.002), with metachronous metastasis (P = 0.005), and patients during follow-up (P = 0.021). Sensitivity was 0.68 (95%CI: 0.45-0.85) and specificity was 0.89 (95%CI: 0.77-0.95), respectively. Importantly, gastric cancer patients with high circulating MACC1 transcript levels in plasma demonstrated significantly shorter survival when compared with patients demonstrating low MACC1 levels (P = 0.0015).
Furthermore, gastric cancer patients with high circulating transcript levels of MACC1 as well as of S100A4 in plasma demonstrated significantly shorter survival when compared with patients demonstrating low levels of both biomarkers or with only one biomarker elevated (P = 0.001). CONCLUSION: Levels of circulating MACC1 transcripts in plasma of gastric cancer patients are of diagnostic value and are prognostic for patient survival in a prospective study. PMID:25574109
MISR Regional GoMACCS Imagery Overview
Atmospheric Science Data Center
2016-08-24
... Visualizations of select MISR Level 3 data for special regional ... version used in support of the GoMACCS Campaign. More information about the Level 1 and Level 2 products subsetted for the GoMACCS ...
A Web Server for MACCS Magnetometer Data
NASA Technical Reports Server (NTRS)
Engebretson, Mark J.
1998-01-01
NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.
Marginal abatement cost curves for NOx that account for ...
A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their respective cost-effectiveness. Alternative measures, such as renewable electricity, energy efficiency, and fuel switching (RE/EE/FS), are often not considered, as it is difficult to quantify their abatement potential. In this paper, we demonstrate the use of an energy system model to develop a MACC for nitrogen oxides (NOx) that incorporates both end-of-pipe controls and these alternative measures. We decompose the MACC by sector and evaluate the cost-effectiveness of RE/EE/FS relative to end-of-pipe controls. RE/EE/FS are shown to produce considerable emission reductions after end-of-pipe controls have been exhausted. Furthermore, some RE/EE/FS are shown to be cost-competitive with end-of-pipe controls. We demonstrate how the MARKAL energy system model can be used to evaluate the potential role of RE/EE/FS in achieving NOx reductions. For this particular analysis, we show that RE/EE/FS are able to increase the quantity of NOx reductions available at a particular marginal cost (ranging from $5k per ton to $40k per ton) by approximately 50%.
A regional air quality forecasting system over Europe: the MACC-II daily ensemble production
NASA Astrophysics Data System (ADS)
Marécal, V.; Peuch, V.-H.; Andersson, C.; Andersson, S.; Arteta, J.; Beekmann, M.; Benedictow, A.; Bergström, R.; Bessagnet, B.; Cansado, A.; Chéroux, F.; Colette, A.; Coman, A.; Curier, R. L.; Denier van der Gon, H. A. C.; Drouin, A.; Elbern, H.; Emili, E.; Engelen, R. J.; Eskes, H. J.; Foret, G.; Friese, E.; Gauss, M.; Giannaros, C.; Guth, J.; Joly, M.; Jaumouillé, E.; Josse, B.; Kadygrov, N.; Kaiser, J. W.; Krajsek, K.; Kuenen, J.; Kumar, U.; Liora, N.; Lopez, E.; Malherbe, L.; Martinez, I.; Melas, D.; Meleux, F.; Menut, L.; Moinat, P.; Morales, T.; Parmentier, J.; Piacentini, A.; Plu, M.; Poupkou, A.; Queguiner, S.; Robertson, L.; Rouïl, L.; Schaap, M.; Segers, A.; Sofiev, M.; Thomas, M.; Timmermans, R.; Valdebenito, Á.; van Velthoven, P.; van Versendaal, R.; Vira, J.; Ung, A.
2015-03-01
This paper describes the pre-operational analysis and forecasting system developed during the MACC (Monitoring Atmospheric Composition and Climate) European project and continued in MACC-II (Monitoring Atmospheric Composition and Climate: Interim Implementation) to provide air quality services for the European continent. The paper gives an overall picture of its status at the end of MACC-II (summer 2014). This system is based on seven state-of-the-art models developed and run in Europe (CHIMERE, EMEP, EURAD-IM, LOTOS-EUROS, MATCH, MOCAGE and SILAM). These models are used to calculate multi-model ensemble products. The MACC-II system provides daily 96 h forecasts with hourly outputs of 10 chemical species/aerosols (O3, NO2, SO2, CO, PM10, PM2.5, NO, NH3, total NMVOCs and PAN + PAN precursors) over 8 vertical levels from the surface to 5 km height. The hourly analysis at the surface is done a posteriori for the past day using a selection of representative air quality data from European monitoring stations. The performance of the system is assessed daily, weekly, and every 3 months (seasonally) through statistical indicators calculated using the available representative air quality data from European monitoring stations. Results for a case study show the ability of the median ensemble to forecast regional ozone pollution events. The time period of this case study is also used to illustrate that the median ensemble generally outperforms each of the individual models and that it remains robust even if two of the seven models are missing. The seasonal performance of the individual models and of the multi-model ensemble has been monitored since September 2009 for ozone, NO2 and PM10 and shows an overall improvement over time. Changes in the skill of the ensemble over the past two summers for ozone and the past two winters for PM10 are discussed in the paper.
While the evolution of the ozone scores is not significant, there are improvements in PM10 over the past two winters that can be at least partly attributed to new developments on aerosols in the seven individual models. Nevertheless, the year-to-year changes in model and ensemble skill are also linked to the variability of the meteorological conditions and of the set of observations used to calculate the statistical indicators. In parallel, a scientific analysis of the results of the seven models and of the ensemble is also carried out over the Mediterranean area because of the specificity of its meteorology and emissions. The system is robust in terms of production availability. Major efforts have been made in MACC-II towards the operationalisation of all its components. Foreseen developments and research for improving its performance are discussed in the conclusion.
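The median-ensemble idea described above, including its robustness to missing member forecasts, can be sketched in a few lines; the model names follow the abstract, but the forecast values are invented for illustration.

```python
# Minimal sketch of a multi-model median ensemble: at each grid point/time,
# the ensemble value is the median across the available member forecasts.
# Model names are from the abstract; the ozone values are made up.

def median(values):
    """Median of a non-empty list (average of middle pair for even counts)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def ensemble_median(forecasts):
    """forecasts: dict model_name -> value, with None marking a missing member."""
    available = [v for v in forecasts.values() if v is not None]
    return median(available)

forecasts = {"CHIMERE": 62.0, "EMEP": 58.0, "EURAD-IM": 71.0,
             "LOTOS-EUROS": 65.0, "MATCH": 60.0, "MOCAGE": None, "SILAM": 64.0}
print(ensemble_median(forecasts))
```

Because the median ignores the magnitude of outlying members, one badly biased model (or a missing one, as for MOCAGE above) shifts the ensemble far less than it would shift a mean.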
NASA Astrophysics Data System (ADS)
Wagner, A.; Blechschmidt, A.-M.; Bouarar, I.; Brunke, E.-G.; Clerbaux, C.; Cupeiro, M.; Cristofanelli, P.; Eskes, H.; Flemming, J.; Flentje, H.; George, M.; Gilge, S.; Hilboll, A.; Inness, A.; Kapsomenakis, J.; Richter, A.; Ries, L.; Spangl, W.; Stein, O.; Weller, R.; Zerefos, C.
2015-12-01
The Monitoring Atmospheric Composition and Climate (MACC) project represents the European Union's Copernicus Atmosphere Monitoring Service (CAMS).
Wang, Chunlin; Wen, Zhaowei; Xie, Jianming; Zhao, Yang; Zhao, Liang; Zhang, Shuyi; Liu, Yajing; Xue, Yan; Shi, Min
2017-04-08
Chemotherapeutic insensitivity is a main obstacle to effective treatment of gastric cancer (GC), and the underlying mechanism remains to be investigated. Metastasis-associated in colon cancer-1 (MACC1), a transcription factor highly expressed in GC, has been found to be related to chemotherapy sensitivity. Monocarboxylate transporter 1 (MCT1), a plasma membrane protein that co-transports lactate and H+, mediates drug sensitivity by regulating lactate metabolism. Targeting MCT1 has recently been regarded as a promising way to treat cancers, and an MCT1 inhibitor has entered clinical trials for GC treatment. However, the correlation between these two genes and their combined effects on chemotherapy sensitivity had not been clarified. In this study, we found that MACC1 and MCT1 were both highly expressed in GC and exhibited a positive correlation in clinical samples. Further, we demonstrated that MACC1 could mediate sensitivity to 5-FU and cisplatin in GC cells, and that MACC1-mediated MCT1 regulation was closely related to this sensitivity. The MCT1 inhibitor AZD3965 restored the sensitivity to 5-FU and cisplatin in GC cells overexpressing MACC1. These results suggest that MACC1 influences chemotherapy sensitivity by regulating MCT1 expression, providing new ideas and strategies for GC treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
Ilm, Katharina; Kemmner, Wolfgang; Osterland, Marc; Burock, Susen; Koch, Gudrun; Herrmann, Pia; Schlag, Peter M; Stein, Ulrike
2015-02-14
The metastasis-associated in colon cancer 1 (MACC1) gene has been identified as prognostic biomarker for colorectal cancer (CRC). Here, we aimed at the refinement of risk assessment by separate and combined survival analyses of MACC1 expression with any of the markers KRAS mutated in codon 12 (KRAS G12) or codon 13 (KRAS G13), BRAF V600 mutation and MSI status in a retrospective study of 99 CRC patients with tumors UICC staged I, II and III. We showed that only high MACC1 expression (HR: 6.09, 95% CI: 2.50-14.85, P < 0.001) and KRAS G13 mutation (HR: 5.19, 95% CI: 1.06-25.45, P = 0.042) were independent prognostic markers for shorter metastasis-free survival (MFS). Accordingly, Cox regression analysis revealed that patients with high MACC1 expression and KRAS G13 mutation exhibited the worst prognosis (HR: 14.48, 95% CI: 3.37-62.18, P < 0.001). Patients were classified based on their molecular characteristics into four clusters with significant differences in MFS (P = 0.003) by using the SPSS 2-step cluster function and Kaplan-Meier survival analysis. According to our results, patients with high MACC1 expression and mutated KRAS G13 exhibited the highest risk for metachronous metastases formation. Moreover, we demonstrated that the "Traditional pathway" with an intermediate risk for metastasis formation can be further subdivided by assessing MACC1 expression into a low and high risk group with regard to MFS prognosis. This is the first report showing that identification of CRC patients at high risk for metastasis is possible by assessing MACC1 expression in combination with KRAS G13 mutation.
NASA Astrophysics Data System (ADS)
Wagner, A.; Blechschmidt, A.-M.; Bouarar, I.; Brunke, E.-G.; Clerbaux, C.; Cupeiro, M.; Cristofanelli, P.; Eskes, H.; Flemming, J.; Flentje, H.; George, M.; Gilge, S.; Hilboll, A.; Inness, A.; Kapsomenakis, J.; Richter, A.; Ries, L.; Spangl, W.; Stein, O.; Weller, R.; Zerefos, C.
2015-03-01
Monitoring Atmospheric Composition and Climate (MACC/MACC-II) currently represents the European Union's Copernicus Atmosphere Monitoring Service (CAMS) (http://www.copernicus.eu), which will become fully operational in the course of 2015. The global near-real-time MACC model production run for aerosol and reactive gases provides daily analyses and 5 day forecasts of atmospheric composition fields. It is the only assimilation system worldwide that is operational in producing global analyses and forecasts of reactive gases and aerosol fields. We have investigated the ability of the MACC analysis system to simulate tropospheric concentrations of reactive gases (CO, O3, and NO2) covering the period between 2009 and 2012. A validation was performed based on CO and O3 surface observations from the Global Atmosphere Watch (GAW) network, O3 surface observations from the European Monitoring and Evaluation Programme (EMEP), and furthermore, NO2 tropospheric columns derived from the satellite sensors SCIAMACHY and GOME-2, and CO total columns derived from the satellite sensor MOPITT. The MACC system proved capable of reproducing reactive gas concentrations with consistent quality, however, with a seasonally dependent bias compared to surface and satellite observations: for northern hemispheric surface O3 mixing ratios, positive biases appear during the warm seasons and negative biases during the cold parts of the year, with monthly Modified Normalised Mean Biases (MNMBs) ranging between -30 and 30% at the surface. Model biases are likely to result from difficulties in the simulation of vertical mixing at night and deficiencies in the model's dry deposition parameterization. Observed tropospheric columns of NO2 and CO could be reproduced correctly during the warm seasons, but are mostly underestimated by the model during the cold seasons, when anthropogenic emissions are at their highest, especially over the US, Europe and Asia.
Monthly MNMBs of the satellite data evaluation range between -110 and 40% for NO2 and at most -20% for CO, over the investigated regions. The underestimation is likely to result from a combination of errors concerning the dry deposition parameterization and certain limitations in the current emission inventories, together with an insufficiently established seasonality in the emissions.
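The Modified Normalised Mean Bias quoted in these evaluations is commonly defined as MNMB = (2/N) Σ (f_i − o_i)/(f_i + o_i), bounded in [−2, 2] (×100 for percent); this standard definition is an assumption here, as the abstract does not spell it out. A minimal sketch with made-up forecast/observation pairs:

```python
# Sketch of the Modified Normalised Mean Bias (MNMB) as commonly defined in
# air quality model evaluation: MNMB = (2/N) * sum((f - o) / (f + o)).
# The forecast/observation values below are invented for illustration.

def mnmb_percent(forecasts, observations):
    """MNMB in percent for paired forecast/observation values (all positive)."""
    n = len(forecasts)
    total = sum((f - o) / (f + o) for f, o in zip(forecasts, observations))
    return 100.0 * 2.0 * total / n

f = [40.0, 55.0, 30.0]  # hypothetical model values
o = [50.0, 50.0, 30.0]  # hypothetical observations
print(round(mnmb_percent(f, o), 2))
```

Unlike a simple mean bias, the symmetric (f + o) denominator keeps over- and underestimation on an equal footing, which is why it is favoured for concentrations spanning orders of magnitude.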
Oh, Wen-Da; Lua, Shun-Kuang; Dong, Zhili; Lim, Teik-Thye
2015-03-02
Magnetic activated carbon composite (CuFe2O4/AC, MACC) was prepared by a co-precipitation-calcination method. The MACC consisted of porous micro-particle morphology with homogeneously distributed CuFe2O4 and possessed a high magnetic saturation moment (8.1 emu g(-1)). The performance of MACC was evaluated as a catalyst and regenerable adsorbent via peroxymonosulfate (PMS, Oxone(®)) activation for methylene blue (MB) removal. The optimum CuFe2O4/AC w/w ratio was 1:1.5, which gave excellent performance and could be reused for at least 3 cycles. The presence of common inorganic ions, namely Cl(-) and NO3(-), did not exert significant influence on MB degradation, but humic acid decreased the MB degradation rate. As a regenerable adsorbent, negligible difference in regeneration efficiency was observed when a higher Oxone(®) dosage was employed, but a better efficiency was obtained at a lower MACC loading. The factors hindering complete MACC regeneration are MB adsorption irreversibility and AC surface modification by PMS, making it less favorable for subsequent MB adsorption. With an additional mild heat treatment (150 °C) after regeneration, 82% of the active sites were successfully regenerated. A kinetic model incorporating simultaneous first-order desorption, second-order adsorption, and pseudo-first-order degradation processes was numerically solved to describe the rate of regeneration. The regeneration rate increased linearly with increasing Oxone(®):MACC ratio. The MACC could potentially serve as a catalyst for PMS activation and a regenerable adsorbent. Copyright © 2014 Elsevier B.V. All rights reserved.
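A rough numerical reading of the kinetic model described above (simultaneous first-order desorption, second-order adsorption, and pseudo-first-order degradation) can be sketched with explicit Euler integration; the specific rate laws, constants, and units below are assumptions for illustration, not values from the paper.

```python
# Illustrative integration of a desorption/re-adsorption/degradation model:
#   dq/dt = ka*c*(q_max - q) - kd*q        (adsorbed dye q)
#   dc/dt = kd*q - ka*c*(q_max - q) - kdeg*c   (dissolved dye c)
# Rate constants and rate-law forms are hypothetical, not from the paper.

def simulate(q0=1.0, c0=0.0, q_max=1.0, kd=0.1, ka=0.05, kdeg=0.5,
             dt=0.01, steps=5000):
    """Explicit-Euler integration; returns final (q, c)."""
    q, c = q0, c0
    for _ in range(steps):
        desorb = kd * q                # first-order desorption
        adsorb = ka * c * (q_max - q)  # second-order re-adsorption
        degrade = kdeg * c             # pseudo-first-order degradation
        q += dt * (adsorb - desorb)
        c += dt * (desorb - adsorb - degrade)
    return q, c

q, c = simulate()
print(f"remaining adsorbed fraction: {q:.3f}, dissolved: {c:.3f}")
```

Raising the desorption constant (standing in for a higher oxidant dosage) empties the adsorbent faster, which mirrors the qualitative trend the abstract reports for the Oxone(®):MACC ratio.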
2013-01-01
Background Activity of disease in patients with multiple sclerosis (MS) is monitored by detecting and delineating hyper-intense lesions on MRI scans. The Minimum Area Contour Change (MACC) algorithm has been created with two main goals: a) to improve inter-operator agreement on outlining regions of interest (ROIs) and b) to automatically propagate longitudinal ROIs from the baseline scan to a follow-up scan. Methods The MACC algorithm first identifies an outer bound for the solution path, forms a high number of iso-contour curves based on equally spaced contour values, and then selects the best contour value to outline the lesion. The MACC software was tested on a set of 17 FLAIR MRI images evaluated by a pair of human experts and a longitudinal dataset of 12 pairs of T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) images that had lesion analysis ROIs drawn by a single expert operator. Results In the tests where two human experts evaluated the same MRI images, the MACC program demonstrated that it could markedly reduce inter-operator outline error. In the longitudinal part of the study, the MACC program created ROIs on follow-up scans that were in close agreement with the original expert's ROIs. Finally, in a post-hoc analysis of 424 follow-up scans, 91% of propagated MACC ROIs were accepted by an expert, and only 9% of the final accepted ROIs had to be created or edited by the expert. Conclusion When used with an expert operator's verification of automatically created ROIs, MACC can be used to improve inter-operator agreement and decrease analysis time, which should improve data collected and analyzed in multicenter clinical trials. PMID:24004511
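The contour-selection idea behind MACC (forming equally spaced iso-contour levels and choosing the one that best outlines the lesion) can be illustrated with a toy version; the real algorithm also bounds the solution path, and everything below, including the tiny synthetic grids and the minimum-area-change criterion, is a simplified assumption, not the authors' code.

```python
# Toy sketch of iso-contour selection: among equally spaced levels, pick the
# one whose enclosed area changes least between baseline and follow-up.
# Grids, levels, and the criterion are simplified illustrations.

def area_above(image, level):
    """Number of pixels with intensity >= level (a proxy for contour area)."""
    return sum(1 for row in image for v in row if v >= level)

def best_contour_level(baseline, followup, levels):
    """Level minimizing the absolute area change between the two scans."""
    return min(levels,
               key=lambda lv: abs(area_above(baseline, lv) - area_above(followup, lv)))

baseline = [[0, 1, 2, 1],
            [1, 5, 6, 1],
            [1, 6, 5, 1],
            [0, 1, 1, 0]]
followup = [[0, 0, 1, 1],
            [1, 5, 7, 2],
            [1, 6, 6, 1],
            [0, 1, 1, 0]]
levels = [1, 2, 3, 4, 5]
print(best_contour_level(baseline, followup, levels))
```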
Suh, Young Joo; Han, Kyunghwa; Chang, Suyon; Kim, Jin Young; Im, Dong Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Kim, Young Jin; Choi, Byoung Wook
2017-09-01
The SYNergy between percutaneous coronary intervention with TAXus and cardiac surgery (SYNTAX) score is an invasive coronary angiography (ICA)-based score for quantifying the complexity of coronary artery disease (CAD). Although the SYNTAX score was originally developed based on ICA, recent publications have reported that coronary computed tomography angiography (CCTA) is a feasible modality for the estimation of the SYNTAX score. The aim of our study was to investigate the prognostic value of the SYNTAX score, based on CCTA, for the prediction of major adverse cardiac and cerebrovascular events (MACCEs) in patients with complex CAD. The current study was approved by the institutional review board of our institution, and informed consent was waived for this retrospective cohort study. We included 251 patients (173 men, mean age 66.0 ± 9.29 years) who had complex CAD [3-vessel disease or left main (LM) disease] on CCTA. The SYNTAX score was obtained on the basis of CCTA. Follow-up clinical outcome data regarding composite MACCEs were also obtained. Cox proportional hazards models were developed to predict the risk of MACCEs based on clinical variables, treatment, and computed tomography (CT)-SYNTAX scores. During the median follow-up period of 1517 days, there were 48 MACCEs. Univariate Cox hazards models demonstrated that MACCEs were associated with advanced age, low body mass index (BMI), and dyslipidemia (P < .2). In patients with LM disease, MACCEs were associated with a higher SYNTAX score. In patients with CT-SYNTAX score ≥23, patients who underwent coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention had significantly lower hazard ratios than patients who were treated with medication alone.
In the multivariate Cox hazards model, advanced age, low BMI, and a higher SYNTAX score showed increased hazard ratios for MACCE, while treatment with CABG showed a lower hazard ratio (P < .2). On the basis of our results, the CT-SYNTAX score can be a useful method for noninvasively predicting MACCEs in patients with complex CAD, especially in patients with LM disease.
Bell, Diana; Bell, Achim H; Bondaruk, Jolanta; Hanna, Ehab Y; Weber, Randall S
2016-05-15
Adenoid cystic carcinoma (ACC), one of the most common salivary gland malignancies, arises from the intercalated ducts, which are composed of inner ductal epithelial cells and outer myoepithelial cells. The objective of this study was to determine the genomic subtypes of ACC with emphasis on dominant cell type to identify potential specific biomarkers for each subtype and to improve the understanding of this disease. A whole-genome expression study was performed based on 42 primary salivary ACCs and 5 normal salivary glands. RNA from these specimens was subjected to expression profiling with RNA sequencing, and results were analyzed to identify transcripts in epithelial-dominant ACC (E-ACC), myoepithelial-dominant ACC (M-ACC), and all ACC that were expressed differentially compared with the transcripts in normal salivary tissue. In total, the authors identified 430 differentially expressed transcripts that were unique to E-ACC, 392 that were unique to M-ACC, and 424 that were common to both M-ACC and E-ACC. The sets of E-ACC-specific and M-ACC-specific transcripts were sufficiently large to define and differentiate E-ACC from M-ACC. Ingenuity pathway analysis identified known cancer-related genes for 60% of the E-ACC transcripts, 69% of the M-ACC transcripts, and 68% of the transcripts that were common to both E-ACC and M-ACC. Three sets of highly expressed candidate genes (distal-less homeobox 6 [DLX6] for E-ACC; keratin 16 [KRT16], SRY box 11 [SOX11], and v-myb avian myeloblastosis viral oncogene homolog [MYB] for M-ACC; and engrailed 1 [EN1] and statherin [STATH], common to both E-ACC and M-ACC) were further validated at the protein level. The current results enabled the authors to identify novel potential therapeutic targets and biomarkers in E-ACC and M-ACC individually, with the implication that EN1, DLX6, and OTX1 (orthodenticle homeobox 1) are potential drivers of these cancers. Cancer 2016;122:1513-22. © 2016 American Cancer Society.
NASA Astrophysics Data System (ADS)
Yu, Dongshan; Liang, Xuejie; Wang, Jingwei; Li, Xiaoning; Nie, Zhiqiang; Liu, Xingsheng
2017-02-01
A novel macro channel cooler (MaCC) has been developed for packaging high-power diode laser (HPDL) vertical stacks, which eliminates many of the issues in commercially available copper micro-channel coolers (MCC). The MaCC coolers, which do not require deionized water as coolant, were carefully designed for compact size and superior thermal dissipation capability. Indium-free packaging technology was adopted throughout the product design and fabrication process to minimize the risk of solder electromigration and thermal fatigue at high current density and long pulse width under QCW operation. A single MaCC unit with peak output power of up to 700 W/bar at pulse widths in the microsecond range and 200 W/bar at pulse widths in the millisecond range has been recorded. Characteristic comparisons of thermal resistivity, spectrum, near field, and lifetime have been conducted between a MaCC product and its counterpart MCC product. A QCW lifetime test (30 ms, 10 Hz, 30% duty cycle) has also been conducted with distilled water as coolant. A vertical 40-MaCC stack product has been fabricated; total output power of 9 kilowatts has been recorded under QCW mode (3 ms, 30 Hz, 9% duty cycle).
Hussey, Daniel K; McGrory, Brian J
2017-08-01
Mechanically assisted crevice corrosion (MACC) in metal-on-polyethylene total hip arthroplasty (THA) is of concern, but its prevalence, etiology, and natural history are incompletely understood. From January 2003 to December 2012, 1352 consecutive THA surgeries using a titanium stem, cobalt-chromium alloy femoral head, and highly cross-linked polyethylene liner from a single manufacturer were performed. Patients were followed at 1-year and 5-year intervals for surveillance, but were also seen earlier if they had symptoms. Any patient with osteolysis >1 cm (n = 3) or unexplained pain (n = 85) underwent examination, radiographs, complete blood count, erythrocyte sedimentation rate, and C-reactive protein, as well as tests for serum cobalt and chromium levels. Symptomatic MACC was present in 43 of 1352 patients (3.2%). Prevalence of MACC by year of implant ranged from 0% (0 of 61, 2003; 0 of 138, 2005) to 10.5% (17 of 162; 2009). The M/L Taper stem had a greater prevalence (4.9%) of MACC than all other Zimmer (Zimmer, Inc, Warsaw, IN) 12/14 trunnion stem types combined (1.2%; P < .001). Twenty-seven of 43 (62.8%) patients have undergone revision surgery, and 16 of 43 (37.2%) patients have opted for ongoing surveillance. Comparing symptomatic THA patients with and without MACC, no demographic, clinical, or radiographic differences were found. MACC was significantly more common with 0-length femoral heads (compared with both -3.5 mm and +3.5 mm heads). The prevalence of MACC in metal-on-polyethylene hips is higher in this cross-sectional study than previously reported. A significantly higher prevalence was found in patients with the M/L Taper style stem and in THAs performed in 2009 and, more broadly, between 2009 and 2012 with this manufacturer. Copyright © 2017 Elsevier Inc. All rights reserved.
Moon, Jeonggeun; Suh, Jon; Oh, Pyung Chun; Lee, Kyounghoon; Park, Hyun Woo; Jang, Ho-Jun; Kim, Tae-Hoon; Park, Sang-Don; Kwon, Sung Woo; Kang, Woong Chol
2016-07-15
Although epidemiologic studies have shown the impact of height on the occurrence and/or prognosis of cardiovascular diseases, the underlying mechanism is unclear. In addition, the relation in patients with ST-segment elevation myocardial infarction (STEMI) who underwent primary percutaneous coronary intervention (PCI) remains unknown. We sought to assess the influence of height on outcomes of patients with acute STEMI undergoing primary PCI and to provide a pathophysiological explanation. All 1,490 patients with STEMI undergoing primary PCI were analyzed. Major adverse cardiac and cerebrovascular events (MACCE) were defined as all-cause mortality, nonfatal myocardial infarction, nonfatal stroke, and unplanned hospitalization for heart failure (HF). Patients were divided into (1) MACCE (+) versus MACCE (-) groups and (2) first- to third-tertile groups according to height. The MACCE (+) group was shorter than the MACCE (-) group (164 ± 8 vs 166 ± 8 cm, p = 0.012). The prognostic impact of short stature was significant in older (≥70 years) male patients even after adjusting for co-morbidities (hazard ratio 0.951, 95% confidence interval 0.912 to 0.991, p = 0.017). The first-tertile group showed the worst MACCE-free survival (p = 0.035), and most cases of MACCE were HF (n = 17 [3%] vs 6 [1%] vs 2 [0%], p = 0.004). On post-PCI echocardiography, left atrial volume and the ratio of early diastolic mitral velocity to early diastolic mitral annulus velocity showed an inverse relation with height (p <0.001 for all) despite similar left ventricular ejection fraction. In conclusion, short stature is associated with occurrence of HF after primary PCI for STEMI, and its influence is prominent in aged male patients, presumably because of its correlation with diastolic dysfunction. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Petrucci, B.; Huc, M.; Feuvrier, T.; Ruffel, C.; Hagolle, O.; Lonjou, V.; Desjardins, C.
2015-10-01
For the production of Level-2A products during Sentinel-2 commissioning in the Technical Expertise Center Sentinel-2 in CNES, CESBIO proposed to adapt the Venus Level-2 processor, taking advantage of the similarities between the two missions: image acquisition at a high frequency (2 days for Venus, 5 days with the two Sentinel-2), high resolution (5 m for Venus; 10, 20 and 60 m for Sentinel-2), and image acquisition under constant viewing conditions. The Multi-Mission Atmospheric Correction and Cloud Screening (MACCS) tool was born: based on the CNES Orfeo Toolbox library, the Venμs processor, which was already able to process Formosat-2 and VENμS data, was adapted to process Sentinel-2 and Landsat 5-7 data. Since then, a great effort has been made in reviewing the MACCS software architecture in order to ease the addition of new missions that also have the peculiarity of acquiring images at high resolution, high revisit, and under constant viewing angles, such as Spot4/Take5 and Landsat 8. The recursive and multi-temporal algorithm is implemented in a core that is the same for all the sensors and that combines several processing steps: estimation of cloud cover; cloud shadow, water, snow and shadow masks; water vapor content; aerosol optical thickness; and atmospheric correction. This core is accessed via a number of plug-ins where the specifics of the sensor and of the user project are taken into account: product formats, algorithmic processing chaining, and parameters. After a presentation of the MACCS architecture and functionalities, the paper gives an overview of the production facilities integrating MACCS and the associated specifics: interest in this tool has grown worldwide, and MACCS will be used for extensive production within the THEIA land data center and the Agri-S2 project. Finally, the paper focuses on the use of MACCS during the Sentinel-2 In-Orbit Test phase, showing the first Level-2A products.
The development of a classification system for maternity models of care.
Donnolley, Natasha; Butler-Henderson, Kerryn; Chapman, Michael; Sullivan, Elizabeth
2016-08-01
A lack of standard terminology or means to identify and define models of maternity care in Australia has prevented accurate evaluations of outcomes for mothers and babies in different models of maternity care. As part of the Commonwealth-funded National Maternity Data Development Project, a classification system was developed utilising a data set specification that defines characteristics of models of maternity care. The Maternity Care Classification System, or MaCCS, was developed using a participatory action research design that built upon the published and grey literature. The study identified the characteristics that differentiate models of care and classified models into eleven Major Model Categories. The MaCCS will enable individual health services, local health districts (networks), and jurisdictional and national health authorities to make better-informed decisions for planning, policy development and delivery of maternity services in Australia. © The Author(s) 2016.
MISR Regional GoMACCS Products
Atmospheric Science Data Center
2016-08-24
... parameters from one Level 1 or Level 2 product. Further information about the Level 1 and Level 2 data products can be found on the ... MISR GoMACCS data table. Images available on this web site include the following parameters: ...
The Mobile Advanced Command and Control Station (MACCS) Experimental Testbed
2007-10-01
were selected: Vehicle: Dodge (Sprinter 2500 high-roof, a Mercedes-Benz vehicle). Electrical and habitability equipment: Crossroads Coaches ... this innovative, mobile, experimental testbed. IMPACT/APPLICATIONS: While MACCS clearly supports the research agenda for both HAL and ONR (as well as
Suh, Soon Yong; Kang, Woong Chol; Oh, Pyung Chun; Choi, Hanul; Moon, Chan Il; Lee, Kyounghoon; Han, Seung Hwan; Ahn, Taehoon; Choi, In Suck; Shin, Eak Kyun
2014-09-01
There are limited data on the optimal antithrombotic therapy for patients with atrial fibrillation (AF) undergoing coronary stenting. We reviewed 203 patients (62.6% men, mean age 68.3 ± 10.1 years) treated between 2003 and 2012, and recorded their clinical and demographic characteristics. Clinical follow-up included major adverse cardiac and cerebrovascular events (MACCE: cardiac death, myocardial infarction, target lesion revascularization, and stroke), stent thrombosis, and bleeding. The most common comorbidities were hypertension (70.4%), diabetes mellitus (35.5%), and congestive heart failure (26.6%). Sixty-three percent of patients had a CHADS2 score of 2 or higher. At discharge, dual-antiplatelet therapy (aspirin, clopidogrel) was used in 166 patients (81.8%; group I), whereas 37 patients (18.2%) were discharged on triple therapy (aspirin, clopidogrel, warfarin; group II). The mean follow-up period was 42.0 ± 29.0 months. The mean international normalized ratio (INR) in group II was 1.83 ± 0.41. The overall MACCE rate was 16.3%, with stroke in 3.4%. Compared with group II, group I had higher incidences of MACCE (19.3% vs 2.7%, P = 0.012) and cardiac death (11.4% vs 0%, P = 0.028). Major and any bleeding, however, did not differ between the two groups. In multivariate analysis, absence of warfarin therapy (odds ratio 7.8, 95% confidence interval 1.02-59.35; P = 0.048) was an independent predictor of MACCE. By Kaplan-Meier survival analysis, warfarin therapy was associated with a lower risk of MACCE (P = 0.024). In patients with AF undergoing coronary artery stenting, warfarin therapy reduced MACCE without increasing bleeding, which might be related to tight control at a lower INR value.
Roy, Andrew K; Chevalier, Bernard; Lefèvre, Thierry; Louvard, Yves; Segurado, Ricardo; Sawaya, Fadi; Spaziano, Marco; Neylon, Antoinette; Serruys, Patrick A; Dawkins, Keith D; Kappetein, Arie Pieter; Mohr, Friedrich-Wilhelm; Colombo, Antonio; Feldman, Ted; Morice, Marie-Claude
2017-09-20
The use of multiple geographical sites for randomised cardiovascular trials may lead to important heterogeneity in treatment effects. This study aimed to determine whether treatment effects from different geographical recruitment regions impacted significantly on five-year MACCE rates in the SYNTAX trial. Five-year SYNTAX results (n=1,800) were analysed for geographical variability by site and country for the effect of treatment (CABG vs. PCI) on MACCE rates. Fixed, random, and linear mixed models were used to test clinical covariate effects, such as diabetes, lesion characteristics, and procedural factors. Comparing five-year MACCE rates, the pooled odds ratio (OR) between study sites was 0.58 (95% CI: 0.47-0.71), and between countries 0.59 (95% CI: 0.45-0.73). By homogeneity testing, no individual site (χ²=93.8, p=0.051) or country (χ²=25.7, p=0.080) differences were observed. For random effects models, the intraclass correlation was minimal (ICC site=5.1%, ICC country=1.5%, p<0.001), indicating minimal geographical heterogeneity, with a hazard ratio of 0.70 (95% CI: 0.59-0.83). Baseline risk factors (smoking, diabetes, PAD) did not influence regional five-year MACCE outcomes (ICC 1.3%-5.2%), nor did revascularisation of left main vs. three-vessel disease (p=0.241), across site or country subgroups. For CABG patients, the number of arterial (p=0.49) or venous (p=0.38) conduits used also made no difference. Geographical variability had no significant effect on treatment differences in MACCE rates at five years. These findings highlight the generalisability of the five-year outcomes of the SYNTAX study.
Wiemers, Paul D; Marney, Lucy; White, Nicole; Bough, Georgina; Hustig, Alistair; Tan, Wei; Cheng, Ching-Siang; Kang, Dong; Yadav, Sumit; Tam, Robert; Fraser, John F
2018-04-24
There is a paucity of data regarding longer-term morbidity outcomes in Indigenous Australian patients undergoing coronary artery bypass grafting (CABG). No comparative data on re-infarction, stroke or reintervention rates exist, and outcome data following percutaneous coronary intervention (PCI) are also extremely limited. Addressing this gap in knowledge forms the major aim of our study. This was a single-centre cohort study conducted at the Townsville Hospital, Australia, which provides tertiary adult cardiac surgical services to the northern parts of the state of Queensland. It incorporated consecutive patients (n=350) undergoing isolated CABG procedures between 2008 and 2010, 20.9% (73/350) of whom were Indigenous Australians. The main outcome measure was major adverse cardiac or cerebrovascular events (MACCE) at mid-term follow-up (mean 38.9 months). The incidence of MACCE among Indigenous Australian patients was approximately twice that of non-Indigenous patients at mid-term follow-up (36.7% vs. 18.6%; p=0.005; OR 2.525 (1.291-4.880)). Following adjustment for preoperative and operative variables, Indigenous Australian status itself was not significantly associated with MACCE (AOR 1.578 (0.637-3.910)). Significant associations with MACCE included renal impairment (AOR 2.198 (1.010-4.783)) and moderate-severe left ventricular impairment (AOR 3.697 (1.820-7.508)). An association between diabetes and MACCE failed to reach statistical significance (AOR 1.812 (0.941-3.490)). Indigenous Australians undergoing CABG suffer an excess of MACCE at longer-term follow-up. High rates of comorbidities in the Indigenous Australian population likely play an aetiological role. Copyright © 2018. Published by Elsevier B.V.
Integrative marker analysis allows risk assessment for metastasis in stage II colon cancer.
Nitsche, Ulrich; Rosenberg, Robert; Balmert, Alexander; Schuster, Tibor; Slotta-Huspenina, Julia; Herrmann, Pia; Bader, Franz G; Friess, Helmut; Schlag, Peter M; Stein, Ulrike; Janssen, Klaus-Peter
2012-11-01
Individualized risk assessment in patients with UICC stage II colon cancer based on a panel of molecular genetic alterations. Risk assessment in patients with colon cancer and localized disease (UICC stage II) is not sufficiently reliable. Development of metachronous metastasis is assumed to be governed largely by individual tumor genetics. Fresh-frozen tissue from 232 patients (T3-4, N0, M0) with complete tumor resection and a median follow-up of 97 months was analyzed for microsatellite stability and for KRAS exon 2 and BRAF exon 15 mutations. Gene expression of the WNT-pathway surrogate marker osteopontin and of the metastasis-associated genes SASH1 and MACC1 was determined for 179 patients. The results were correlated with the risk of metachronous distant metastasis (n = 22 patients). Mutations of KRAS were detected in 30% of patients, mutations of BRAF in 15%, and microsatellite instability in 26%. Risk of recurrence was associated with KRAS mutation (P = 0.033), microsatellite-stable tumors (P = 0.015), decreased expression of SASH1 (P = 0.049), and increased expression of MACC1 (P < 0.001). MACC1 was the only independent parameter for recurrence prediction (hazard ratio: 6.2; 95% confidence interval: 2.4-16; P < 0.001). Integrative two-step cluster analysis allocated patients into four groups according to their tumor genetics. KRAS mutation, BRAF wild type, microsatellite stability, and high MACC1 expression defined the group with the highest risk of recurrence (16%, 7 of 43), whereas BRAF wild type, microsatellite instability, and low MACC1 expression defined the group with the lowest risk (4%, 1 of 26). MACC1 expression predicts development of metastases, outperforming microsatellite stability status as well as KRAS/BRAF mutation status.
Weighting Composite Endpoints in Clinical Trials: Essential Evidence for the Heart Team
Tong, Betty C.; Huber, Joel C.; Ascheim, Deborah D.; Puskas, John D.; Ferguson, T. Bruce; Blackstone, Eugene H.; Smith, Peter K.
2013-01-01
Background: Coronary revascularization trials often use a composite endpoint of major adverse cardiac and cerebrovascular events (MACCE). The usual practice in analyzing data with a composite endpoint is to assign equal weights to each of the individual MACCE elements. Non-inferiority margins are used to offset the effects of presumably less important components, but their magnitudes are subject to bias. This study describes the relative importance of MACCE elements from a patient perspective. Methods: A discrete choice experiment was conducted. Survey respondents were presented with a scenario that would make them eligible for the SYNTAX 3-vessel disease cohort. Respondents chose among pairs of procedures that differed in the 3-year probability of MACCE, potential for increased longevity, and procedure/recovery time. Conjoint analysis derived relative weights for these attributes. Results: In all, 224 respondents completed the survey. The attributes did not have equal weight. Risk of death was most important (relative weight 0.23), followed by stroke (0.18), potential increased longevity and recovery time (0.17 each), MI (0.14), and risk of repeat revascularization (0.11). Applying these weights to the SYNTAX 3-year endpoints resulted in a persistent but decreased margin of difference in MACCE favoring CABG compared to PCI. When labeled only as "Procedure A" and "Procedure B," 87% of respondents chose CABG over PCI. When the procedures were labeled as "Coronary Stent" and "Coronary Bypass Surgery," only 73% chose CABG. Procedural preference varied with demographics, gender, and familiarity with the procedures. Conclusions: MACCE elements do not carry equal weight in a composite endpoint from a patient perspective. Using a weighted composite endpoint increases the validity of statistical analyses and trial conclusions. Patients are subject to bias by labels when considering coronary revascularization. PMID:22795064
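The contrast between an equal-weight composite and a patient-weighted one can be made concrete with the relative weights reported above; a minimal sketch (the two arms' component event rates below are hypothetical placeholders, not SYNTAX data):

```python
# Sketch: unweighted vs patient-weighted composite endpoint, using the
# relative-importance weights reported in the abstract. The component
# event rates for the two arms are hypothetical.

WEIGHTS = {              # relative importance from the conjoint analysis
    "death": 0.23,
    "stroke": 0.18,
    "mi": 0.14,
    "repeat_revasc": 0.11,
}

def composite(rates, weights=None):
    """Sum component event rates, optionally scaled by patient weights."""
    if weights is None:                  # conventional MACCE: equal weights
        return sum(rates[k] for k in rates)
    return sum(weights[k] * rates[k] for k in rates)

# Hypothetical 3-year component rates for two treatment arms
arm_a = {"death": 0.04, "stroke": 0.01, "mi": 0.03, "repeat_revasc": 0.11}
arm_b = {"death": 0.05, "stroke": 0.03, "mi": 0.05, "repeat_revasc": 0.06}

print(composite(arm_a), composite(arm_b))                    # unweighted
print(composite(arm_a, WEIGHTS), composite(arm_b, WEIGHTS))  # weighted
```

In this toy example the two arms tie on the unweighted composite but separate once death and stroke are weighted more heavily than repeat revascularization, which is the qualitative effect the study reports.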
Barile, Christopher J.; Barile, Elizabeth C.; Zavadil, Kevin R.; ...
2014-12-04
We describe in this report the electrochemistry of Mg deposition and dissolution from the magnesium aluminum chloride complex (MACC). The results define the requirements for reversible Mg deposition and definitively establish that voltammetric cycling of the electrolyte significantly alters its composition and performance. Elemental analysis, scanning electron microscopy, and energy-dispersive X-ray spectroscopy (SEM-EDS) results demonstrate that irreversible Mg and Al deposits form during early cycles. Electrospray ionization-mass spectrometry (ESI-MS) data show that inhibitory oligomers develop in THF-based solutions. These oligomers form via the well-established mechanism of cationic ring-opening polymerization of THF during the initial synthesis of the MACC and under resting conditions. In contrast, MACC solutions in 1,2-dimethoxyethane (DME), an acyclic solvent, do not evolve as dramatically at open circuit potential. Furthermore, we propose a mechanism describing how the conditioning process of the MACC in THF improves its performance by both tuning the Mg:Al stoichiometry and eliminating oligomers.
Mid-latitude storm track variability and its influence on atmospheric composition
NASA Astrophysics Data System (ADS)
Knowland, K. E.; Doherty, R. M.; Hodges, K.
2013-12-01
Using the storm-tracking algorithm TRACK (Hodges, 1994, 1995, 1999), we have studied the behaviour of storm tracks in the North Atlantic basin using 850-hPa relative vorticity from the ERA-Interim Re-analysis (Dee et al., 2011). We have correlated surface ozone measurements at rural coastal sites in Europe with the storm track data to explore the role that mid-latitude cyclones, and their transport of pollutants, play in determining surface air quality in Western Europe. To further investigate this relationship, we have applied TRACK to the Monitoring Atmospheric Composition and Climate (MACC) Re-analysis dataset (Inness et al., 2013). The MACC Re-analysis is a 10-year dataset that couples a chemistry transport model (MOZART-3; Stein 2009, 2012) to an extended version of the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecast System (IFS). Storm tracks in the MACC Re-analysis compare well with storm tracks from the ERA-Interim Re-analysis for the same 10-year period, as both are based on the ECMWF IFS. We also compare surface ozone values from MACC with the surface ozone measurements previously studied. Using TRACK, we follow ozone (O3) and carbon monoxide (CO) through the life cycle of storms from North America to Western Europe. Along the storm tracks, we examine the distribution of CO and O3 within 6 degrees of the center of each storm and vertically at different pressure levels in the troposphere. We aim to better understand the mechanisms by which pollution is vented from the boundary layer to the free troposphere, as well as the transport of pollutants to rural areas, and to give policy makers more detailed information on how climate variability associated with storm tracks between 1979 and 2013 may affect air quality in the Northeast USA and Western Europe.
A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their resp...
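The sorting construction described above can be sketched in a few lines; all control names and cost figures here are hypothetical, for illustration only:

```python
# Sketch: building a marginal abatement cost curve (MACC) by sorting
# end-of-pipe controls by cost per ton abated and accumulating abatement.
# Control names and figures are hypothetical.

controls = [
    # (name, tons abated per year, annualized cost in $)
    ("scrubber_retrofit", 500, 40_000),
    ("low_NOx_burner",    300, 9_000),
    ("fuel_switch",       800, 56_000),
]

def macc(controls):
    """Return (name, cumulative abatement, marginal $/ton) steps,
    ordered from the cheapest to the most expensive control."""
    steps, cumulative = [], 0.0
    for name, tons, cost in sorted(controls, key=lambda c: c[2] / c[1]):
        cumulative += tons
        steps.append((name, cumulative, cost / tons))
    return steps

for name, cum, mc in macc(controls):
    print(f"{name}: {cum:.0f} tons cumulative at ${mc:.0f}/ton")
```

Each step of the resulting curve gives the marginal cost of the next increment of abatement, which is the relationship the MACC traces out.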
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs was selected for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating this effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 using LHS and another three replicates of size 1,000 using SRS were analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, which is expected given the sparse data (a predominance of "zero" results).
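The LHS-vs-SRS comparison rests on how the two schemes draw samples; a minimal, stdlib-only sketch of both designs follows (an illustration of the sampling schemes themselves, not the MACCS2 or SOARCA implementation):

```python
# Sketch: simple random sampling (SRS) vs a minimal Latin hypercube
# sample (LHS). LHS stratifies each variable's range into n
# equal-probability bins and draws exactly once per bin, which is why
# replicate statistics typically converge faster than with SRS.
import random

def srs(n, dims, rng):
    """n independent uniform(0,1) points in `dims` dimensions."""
    return [[rng.random() for _ in range(dims)] for _ in range(n)]

def lhs(n, dims, rng):
    """Latin hypercube: one draw per stratum, stratum order shuffled per dim."""
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return [list(point) for point in zip(*cols)]

rng = random.Random(42)
sample = lhs(1000, 2, rng)
# Each variable's marginal is perfectly stratified: one point per bin.
bins = sorted(int(x * 1000) for x, _ in sample)
print(bins == list(range(1000)))
```

With SRS, some of the 1,000 bins would be empty and others hit several times; the perfect stratification is what LHS buys, at the cost of correlated draws within a replicate.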
Clinical outcomes of patients with hypothyroidism undergoing percutaneous coronary intervention
Zhang, Ming; Sara, Jaskanwal D.S.; Matsuzawa, Yasushi; Gharib, Hossein; Bell, Malcolm R.; Gulati, Rajiv; Lerman, Lilach O.
2016-01-01
Aims: The aim of this study was to investigate the association between hypothyroidism and major adverse cardiovascular and cerebral events (MACCE) in patients undergoing percutaneous coronary intervention (PCI). Methods and results: Two thousand four hundred and thirty patients who underwent PCI were included. Subjects were divided into two groups: hypothyroidism (n = 686), defined either as a history of hypothyroidism or thyroid-stimulating hormone (TSH) ≥5.0 mU/mL, and euthyroidism (n = 1744), defined as no history of hypothyroidism and/or 0.3 mU/mL ≤ TSH < 5.0 mU/mL. Patients with hypothyroidism were further categorized as untreated (n = 193), or as taking thyroid replacement therapy (TRT) with adequate replacement (0.3 mU/mL ≤ TSH < 5.0 mU/mL, n = 175) or inadequate replacement (TSH ≥ 5.0 mU/mL, n = 318). Adjusted hazard ratios (HRs) were calculated using Cox proportional hazards models. Median follow-up was 3.0 years (interquartile range, 0.5-7.0). After adjustment for covariates, the risk of MACCE and its constituent parts was higher in patients with hypothyroidism compared with those with euthyroidism (MACCE: HR: 1.28, P = 0.0001; myocardial infarction (MI): HR: 1.25, P = 0.037; heart failure: HR: 1.46, P = 0.004; revascularization: HR: 1.26, P = 0.0008; stroke: HR: 1.62, P = 0.04). Compared with untreated patients or those with inadequate replacement, adequately treated hypothyroid patients had a lower risk of MACCE (HR: 0.69, P = 0.005; HR: 0.78, P = 0.045), cardiac death (HR: 0.43, P = 0.008), MI (HR: 0.50, P = 0.0004; HR: 0.60, P = 0.02), and heart failure (HR: 0.50, P = 0.02; HR: 0.52, P = 0.017). Conclusion: Hypothyroidism is associated with a higher incidence of MACCE compared with euthyroidism in patients undergoing PCI. Maintaining adequate control on TRT is beneficial in preventing MACCE. PMID:26757789
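The study's grouping rules can be encoded literally; a minimal sketch with thresholds taken from the text as written (the function name is invented here; note that a patient with suppressed TSH and no history falls outside both definitions under a literal reading):

```python
# Sketch: the abstract's exposure-group definitions, encoded literally.
# Thresholds (mU/mL) are from the text; the function name is made up.

def classify(history_of_hypothyroidism: bool, tsh: float) -> str:
    """Assign a patient to the study's exposure groups by history and TSH."""
    if history_of_hypothyroidism or tsh >= 5.0:
        return "hypothyroidism"
    if 0.3 <= tsh < 5.0:
        return "euthyroidism"
    # TSH < 0.3 with no history: outside both definitions as written
    return "unclassified"

print(classify(False, 2.1))   # euthyroidism
print(classify(True, 1.0))    # hypothyroidism (by history)
print(classify(False, 6.2))   # hypothyroidism (by TSH)
```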
Kang, Se Hun; Ahn, Jung-Min; Lee, Cheol Hyun; Lee, Pil Hyung; Kang, Soo-Jin; Lee, Seung-Whan; Kim, Young-Hak; Lee, Cheol Whan; Park, Seong-Wook; Park, Duk-Woo; Park, Seung-Jung
2017-07-01
Identifying predictive factors for major cardiovascular events and death in patients with unprotected left main coronary artery disease is of great clinical value for risk stratification and possible guidance for tailored preventive strategies. The Interventional Research Incorporation Society-Left MAIN Revascularization registry included 5795 patients with unprotected left main coronary artery disease (percutaneous coronary intervention, n=2850; coronary artery bypass grafting, n=2337; medication alone, n=608). We analyzed the incidence and independent predictors of major adverse cardiac and cerebrovascular events (MACCE; a composite of death, MI, stroke, or repeat revascularization) and all-cause mortality in each treatment stratum. During follow-up (median, 4.3 years), the rates of MACCE and death were substantially higher in the medical group than in the percutaneous coronary intervention and coronary artery bypass grafting groups (P<0.001). In the percutaneous coronary intervention group, the 3 strongest predictors of MACCE were chronic renal failure, old age (≥65 years), and previous heart failure; those for all-cause mortality were chronic renal failure, old age, and low ejection fraction. In the coronary artery bypass grafting group, old age, chronic renal failure, and low ejection fraction were the 3 strongest predictors of MACCE and death. In the medication group, old age, low ejection fraction, and diabetes mellitus were the 3 strongest predictors of MACCE and death. Among patients with unprotected left main coronary artery disease, the key clinical predictors of MACCE and death were generally similar regardless of index treatment. This study provides effect estimates for clinically relevant predictors of long-term clinical outcomes in real-world left main coronary artery patients, providing possible guidance for tailored preventive strategies. URL: https://clinicaltrials.gov. Unique identifier: NCT01341327. © 2017 American Heart Association, Inc.
Severity of OSAS, CPAP and cardiovascular events: A follow-up study.
Baratta, Francesco; Pastori, Daniele; Fabiani, Mario; Fabiani, Valerio; Ceci, Fabrizio; Lillo, Rossella; Lolli, Valeria; Brunori, Marco; Pannitteri, Gaetano; Cravotto, Elena; De Vito, Corrado; Angelico, Francesco; Del Ben, Maria
2018-05-01
Previous studies have suggested obstructive sleep apnoea syndrome (OSAS) is a major risk factor for incident cardiovascular events. However, the relationship between OSAS severity, the use of continuous positive airway pressure (CPAP) treatment and the development of cardiovascular disease is still a matter of debate. The aim was to test the association between OSAS and cardiovascular events in patients with concomitant cardio-metabolic diseases, and the potential impact of CPAP therapy on cardiovascular outcomes. This was a prospective observational cohort study of consecutive outpatients with suspected metabolic disorders who had a complete clinical and biochemical workup, including polysomnography, because of heavy snoring and possible OSAS. The primary endpoint was a composite of major adverse cardiovascular and cerebrovascular events (MACCE). Median follow-up was 81.3 months; 434 patients (2701.2 person-years) were included, of whom 83 had primary snoring, 84 mild, 93 moderate and 174 severe OSAS. The incidence of MACCE was 0.8% per year (95% confidence interval [CI] 0.2-2.1) in primary snorers and 2.1% per year (95% CI 1.5-2.8) in those with OSAS. A positive association was observed between event-free survival and OSAS severity (log-rank test; P = .041). A multivariable Cox regression analysis identified obesity (HR = 8.011, 95% CI 1.071-59.922, P = .043), moderate OSAS (vs non-OSAS, HR = 3.853, 95% CI 1.069-13.879, P = .039) and severe OSAS (vs non-OSAS, HR = 3.540, 95% CI 1.026-12.217, P = .045) as predictors of MACCE. No significant association was observed between CPAP treatment and MACCE (log-rank test; P = .227). Our findings support the role of moderate/severe OSAS as a risk factor for incident MACCE. CPAP treatment was not associated with a lower rate of MACCE. © 2018 Stichting European Society for Clinical Investigation Journal Foundation.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Schultz, Martin; Brötz, Björn; Rauthe-Schöch, Armin; Thouret, Valérie
2015-04-01
IAGOS (In-service Aircraft for a Global Observing System) aims to provide long-term, frequent, regular, accurate, and spatially resolved in situ observations of atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft, and the IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled under an open-access policy based on the submission of research requests, which are reviewed by the PIs. The IAGOS database (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data centre Ether (CNES and CNRS). In the framework of the IGAS project (IAGOS for Copernicus Atmospheric Service), interoperability with international portals and other databases is being implemented in order to improve IAGOS data discovery. The IGAS data network is composed of three data centres: the IAGOS database in Toulouse, including IAGOS-core data and, since January 2015, IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the MACC data centre in Jülich (http://join.iek.fz-juelich.de). The MACC (Monitoring Atmospheric Composition and Climate) project is a prominent user of the IGAS data network. In June 2015 a new version of the IAGOS database will be released, providing improved services such as downloads in NetCDF or NASA Ames formats, graphical tools (maps, scatter plots, etc.), standardized metadata (ISO 19115), and better user management. The link with the MACC data centre, through JOIN (Jülich OWS Interface), will allow model outputs to be combined with IAGOS data for intercomparison. The interoperability within the IGAS data network, implemented through a set of web services, will improve the functionality of each data centre's web interface.
Cavallari, Ilaria; Ruff, Christian T; Nordio, Francesco; Deenadayalu, Naveen; Shi, Minggao; Lanz, Hans; Rutman, Howard; Mercuri, Michele F; Antman, Elliott M; Braunwald, Eugene; Giugliano, Robert P
2018-04-15
Patients with atrial fibrillation (AF) who interrupt anticoagulation are at high risk of thromboembolism and death. Patients enrolled in the ENGAGE AF-TIMI 48 trial (a randomized comparison of edoxaban vs. warfarin) who interrupted study anticoagulant for >3 days were identified. Clinical events (ischemic stroke/systemic embolism, major cardiac and cerebrovascular events [MACCE]) were analyzed from day 4 after interruption until day 34 or study-drug resumption. During 2.8 years of median follow-up, 13,311 (63%) patients interrupted study drug for >3 days. After excluding those who received open-label anticoagulation during the at-risk window, the population for analysis included 9148 patients. The rates of ischemic stroke/systemic embolism and MACCE after interruption were substantially greater than in patients who never interrupted (15.42 vs. 0.26 and 60.82 vs. 0.36 per 100 patient-years, respectively; adjusted p < .001). Patients who interrupted study drug for an adverse event (44.1% of the cohort), compared to those who interrupted for other reasons, had an increased risk of MACCE (adjusted HR 2.75; 95% CI 2.02-3.74, p < .0001), but similar rates of ischemic stroke/systemic embolism. Rates of clinical events after interruption of warfarin and edoxaban were similar. Interruption of study drug was frequent in patients with AF and was associated with a substantial risk of major cardiac and cerebrovascular events over the ensuing 30 days. This risk was particularly high in patients who interrupted as a result of an adverse event; these patients deserve close monitoring and resumption of anticoagulation as soon as it is safe to do so. Copyright © 2018 Elsevier B.V. All rights reserved.
The Interplay of Al and Mg Speciation in Advanced Mg Battery Electrolyte Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
See, Kimberly A.; Chapman, Karena W.; Zhu, Lingyang
2016-01-13
Mg batteries are an attractive alternative to Li-based energy storage due to the possibility of higher volumetric capacities with the added advantage of using sustainable materials. A promising emerging electrolyte for Mg batteries is the magnesium aluminum chloride complex (MACC) which shows high Mg electrodeposition and stripping efficiencies and relatively high anodic stabilities. As prepared, MACC is inactive with respect to Mg deposition; however, efficient Mg electrodeposition can be achieved following an electrolytic conditioning process. Through the use of Raman spectroscopy, surface enhanced Raman spectroscopy, 27Al and 35Cl nuclear magnetic resonance spectroscopy, and pair distribution function analysis, we explore the active vs inactive complexes in the MACC electrolyte and demonstrate the codependence of Al and Mg speciation. These techniques report on significant changes occurring in the bulk speciation of the conditioned electrolyte relative to the as-prepared solution. Analysis shows that the active Mg complex in conditioned MACC is very likely the [Mg2(μ–Cl)3·6THF]+ complex that is observed in the solid state structure. Additionally, conditioning creates free Cl– in the electrolyte solution, and we suggest the free Cl– adsorbs at the electrode surface to enhance Mg electrodeposition.
Effect of growth phase on the fatty acid compositions of four species of marine diatoms
NASA Astrophysics Data System (ADS)
Liang, Ying; Mai, Kangsen
2005-04-01
The fatty acid compositions of four species of marine diatoms (Chaetoceros gracilis MACC/B13, Cylindrotheca fusiformis MACC/B211, Phaeodactylum tricornutum MACC/B221 and Nitzschia closterium MACC/B222), cultivated at 22°C±1°C and a salinity of 28 in f/2 medium and harvested in the exponential growth phase, the early stationary phase and the late stationary phase, were determined. The results showed that growth phase has a significant effect on most fatty acid contents in the four species. The proportions of 16:0 and 16:1n-7 fatty acids increased, while those of 16:3n-4 and eicosapentaenoic acid (EPA) decreased, with increasing culture age in all species studied. The subtotal of saturated fatty acids (SFA) increased with increasing culture age in all species except B13. The subtotal of monounsaturated fatty acids (MUFA) increased while that of polyunsaturated fatty acids (PUFA) decreased with culture age in all four species: MUFA reached their lowest value, and PUFA their highest, in the exponential growth phase.
Use of the RenalGuard system to prevent contrast-induced AKI: A meta-analysis.
Mattathil, Stephanie; Ghumman, Saad; Weinerman, Jonathan; Prasad, Anand
2017-10-01
Contrast-induced acute kidney injury (CI-AKI) following cardiovascular interventions results in increased morbidity and mortality. RenalGuard (RG) is a novel closed-loop system that balances volume administration with forced diuresis to maintain a high urine output. We performed a meta-analysis of the existing data comparing use of RG to conventional volume expansion. Ten studies were found eligible, of which four were randomized controlled trials (RCTs). Of an aggregate sample size of 1585 patients, 698 were enrolled in the four RCTs and 887 belonged to the remaining registries included in this meta-analysis. Primary outcomes were CI-AKI incidence and relative risk; mortality, dialysis, and major adverse cardiovascular events (MACCE) were secondary outcomes. A random effects model was used and the data were evaluated for publication bias. RG was associated with a significant risk reduction in CI-AKI compared to control (RR: 0.30, 95%CI: 0.18-0.50, P < 0.01); the incidence of CI-AKI was 7.7% with RG versus 23.6% in the control group (P < 0.01). Use of RG was also associated with decreased mortality (RR: 0.43, 95%CI: 0.18-0.99, P = 0.05), dialysis (RR: 0.20, 95%CI: 0.06-0.61, P = 0.01), and MACCE (RR: 0.42, 95%CI: 0.27-0.65, P < 0.01) compared to control. RG significantly reduces rates of CI-AKI compared to standard volume expansion and is also associated with decreased rates of death, dialysis, and MACCE. © 2017, Wiley Periodicals, Inc.
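A random-effects pooling of relative risks like the one described is commonly done with the DerSimonian-Laird estimator; a minimal sketch follows (the per-study inputs below are hypothetical, not the data from this meta-analysis):

```python
# Sketch: DerSimonian-Laird random-effects pooling of log relative risks,
# the standard estimator behind random-effects meta-analyses like this one.
# The per-study log RRs and variances are hypothetical.
import math

def pool_random_effects(log_rr, var):
    """Return (pooled log RR, tau^2) via DerSimonian-Laird."""
    w = [1 / v for v in var]                        # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = [1 / (v + tau2) for v in var]            # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    return pooled, tau2

# Hypothetical per-study log relative risks and their variances
log_rr = [math.log(0.25), math.log(0.40), math.log(0.30), math.log(0.35)]
var = [0.10, 0.08, 0.12, 0.09]
pooled, tau2 = pool_random_effects(log_rr, var)
print(f"pooled RR = {math.exp(pooled):.2f}, tau^2 = {tau2:.3f}")
```

When the between-study heterogeneity estimate tau² is zero, the random-effects pooled estimate collapses to the fixed-effect inverse-variance mean; otherwise the weights are flattened toward equality across studies.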
Clinical outcomes of patients with hypothyroidism undergoing percutaneous coronary intervention.
Zhang, Ming; Sara, Jaskanwal D S; Matsuzawa, Yasushi; Gharib, Hossein; Bell, Malcolm R; Gulati, Rajiv; Lerman, Lilach O; Lerman, Amir
2016-07-07
The aim of this study was to investigate the association between hypothyroidism and major adverse cardiovascular and cerebral events (MACCE) in patients undergoing percutaneous coronary intervention (PCI). Two thousand four hundred and thirty patients who underwent PCI were included. Subjects were divided into two groups: hypothyroidism (n = 686) defined either as a history of hypothyroidism or thyroid-stimulating hormone (TSH) ≥5.0 mU/mL, and euthyroidism (n = 1744) defined as no history of hypothyroidism and/or 0.3 mU/mL ≤ TSH < 5.0 mU/mL. Patients with hypothyroidism were further categorized as untreated (n = 193), or those taking thyroid replacement therapy (TRT) with adequate replacement (0.3 mU/mL ≤ TSH < 5.0 mU/mL, n = 175) or inadequate replacement (TSH ≥ 5.0 mU/mL, n = 318). Adjusted hazard ratios (HRs) were calculated using Cox proportional hazards models. Median follow-up was 3.0 years (interquartile range, 0.5-7.0). After adjustment for covariates, the risk of MACCE and its constituent parts was higher in patients with hypothyroidism compared with those with euthyroidism (MACCE: HR: 1.28, P = 0.0001; myocardial infarction (MI): HR: 1.25, P = 0.037; heart failure: HR: 1.46, P = 0.004; revascularization: HR: 1.26, P = 0.0008; stroke: HR: 1.62, P = 0.04). Compared with untreated patients or those with inadequate replacement, adequately treated hypothyroid patients had a lower risk of MACCE (HR: 0.69, P = 0.005; HR: 0.78, P = 0.045), cardiac death (HR: 0.43, P = 0.008), MI (HR: 0.50, P = 0.0004; HR: 0.60, P = 0.02), and heart failure (HR: 0.50, P = 0.02; HR: 0.52, P = 0.017). Hypothyroidism is associated with a higher incidence of MACCE compared with euthyroidism in patients undergoing PCI. Maintaining adequate control on TRT is beneficial in preventing MACCE. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Bamberg, Fabian; Parhofer, Klaus G; Lochner, Elena; Marcus, Roy P; Theisen, Daniel; Findeisen, Hannes M; Hoffmann, Udo; Schönberg, Stefan O; Schlett, Christopher L; Reiser, Maximilian F; Weckbach, Sabine
2013-12-01
To study the predictive value of whole-body magnetic resonance (MR) imaging for the occurrence of cardiac and cerebrovascular events in a cohort of patients with diabetes mellitus (DM). This HIPAA-compliant study was approved by the institutional review board. Informed consent was obtained from all patients before enrollment into the study. The authors followed up 65 patients with DM (types 1 and 2) who underwent a comprehensive, contrast material-enhanced whole-body MR imaging protocol, including brain, cardiac, and vascular sequences at baseline. Follow-up was performed by phone interview. The primary endpoint was a major adverse cardiac and cerebrovascular event (MACCE), defined as a composite of cardiac or cerebrovascular death, myocardial infarction, cerebrovascular event, or revascularization. MR images were assessed for the presence of systemic atherosclerotic vessel changes, white matter lesions, and myocardial changes. Kaplan-Meier survival and Cox regression analyses were performed to determine associations. Follow-up was completed in 61 patients (94%; median age, 67.5 years; 30 women [49%]; median follow-up, 70 months); 14 of the 61 patients (23%) experienced MACCE. Although normal whole-body MR imaging excluded MACCE during the follow-up period (0%; 95% confidence interval [CI]: 0%, 17%), any detectable ischemic and/or atherosclerotic changes at whole-body MR imaging (prevalence, 66%) conferred a cumulative event rate of 20% at 3 years and 35% at 6 years. The whole-body MR imaging summary estimate of disease was strongly predictive of MACCE (one increment of vessel score and each territory with atherosclerotic changes: hazard ratio, 13.2 [95% CI: 4.5, 40.1] and 3.9 [95% CI: 2.2, 7.5], respectively), beyond clinical characteristics as well as individual cardiac or cerebrovascular MR findings. These initial data indicate that disease burden as assessed with whole-body MR imaging confers strong prognostic information in patients with DM.
Online supplemental material is available for this article. © RSNA, 2013.
ChemoPy: freely available python package for computational biology and chemoinformatics.
Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng
2013-04-15
Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
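Binary fingerprints such as the MACCS keys mentioned above are typically compared with the Tanimoto (Jaccard) coefficient: shared "on" bits over the union of "on" bits. A small Python sketch, independent of ChemoPy, with hypothetical bit sets standing in for two molecules' fingerprints:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprints,
    each represented as the set of 'on' bit positions."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical bit sets standing in for MACCS-keys fingerprints of two molecules
mol1 = {3, 11, 42, 65, 80, 121}
mol2 = {3, 11, 42, 77, 121}
print(f"Tanimoto similarity: {tanimoto(mol1, mol2):.2f}")
```

The same ratio applies to any of the fingerprint types listed in the abstract, since each reduces to a set of set bits.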
Tropospheric chemistry in the integrated forecasting system of ECMWF
NASA Astrophysics Data System (ADS)
Flemming, J.; Huijnen, V.; Arteta, J.; Bechtold, P.; Beljaars, A.; Blechschmidt, A.-M.; Josse, B.; Diamantakis, M.; Engelen, R. J.; Gaudel, A.; Inness, A.; Jones, L.; Katragkou, E.; Marecal, V.; Peuch, V.-H.; Richter, A.; Schultz, M. G.; Stein, O.; Tsikerdekis, A.
2014-11-01
A representation of atmospheric chemistry has been included in the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The new chemistry modules complement the aerosol modules of the IFS for atmospheric composition, which is named C-IFS. C-IFS for chemistry supersedes a coupled system, in which the Chemical Transport Model (CTM) Model for OZone and Related chemical Tracers 3 was two-way coupled to the IFS (IFS-MOZART). This paper contains a description of the new on-line implementation, an evaluation with observations and a comparison of the performance of C-IFS with MOZART and with a re-analysis of atmospheric composition produced by IFS-MOZART within the Monitoring Atmospheric Composition and Climate (MACC) project. The chemical mechanism of C-IFS is an extended version of the Carbon Bond 2005 (CB05) chemical mechanism as implemented in the CTM Transport Model 5 (TM5). CB05 describes tropospheric chemistry with 54 species and 126 reactions. Wet deposition and lightning nitrogen monoxide (NO) emissions are modelled in C-IFS using the detailed input of the IFS physics package. A one-year simulation by C-IFS, MOZART and the MACC re-analysis is evaluated against ozonesondes, carbon monoxide (CO) aircraft profiles, European surface observations of ozone (O3), CO, sulphur dioxide (SO2) and nitrogen dioxide (NO2) as well as satellite retrievals of CO, tropospheric NO2 and formaldehyde. Anthropogenic emissions from the MACC/CityZen (MACCity) inventory and biomass burning emissions from the Global Fire Assimilation System (GFAS) data set were used in the simulations by both C-IFS and MOZART. C-IFS (CB05) showed an improved performance with respect to MOZART for CO, upper tropospheric O3 and wintertime SO2, and was of a similar accuracy for other evaluated species. C-IFS (CB05) is about ten times more computationally efficient than IFS-MOZART.
Tropospheric chemistry in the Integrated Forecasting System of ECMWF
NASA Astrophysics Data System (ADS)
Flemming, J.; Huijnen, V.; Arteta, J.; Bechtold, P.; Beljaars, A.; Blechschmidt, A.-M.; Diamantakis, M.; Engelen, R. J.; Gaudel, A.; Inness, A.; Jones, L.; Josse, B.; Katragkou, E.; Marecal, V.; Peuch, V.-H.; Richter, A.; Schultz, M. G.; Stein, O.; Tsikerdekis, A.
2015-04-01
A representation of atmospheric chemistry has been included in the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The new chemistry modules complement the aerosol modules of the IFS for atmospheric composition, which is named C-IFS. C-IFS for chemistry supersedes a coupled system in which chemical transport model (CTM) Model for OZone and Related chemical Tracers 3 was two-way coupled to the IFS (IFS-MOZART). This paper contains a description of the new on-line implementation, an evaluation with observations and a comparison of the performance of C-IFS with MOZART and with a re-analysis of atmospheric composition produced by IFS-MOZART within the Monitoring Atmospheric Composition and Climate (MACC) project. The chemical mechanism of C-IFS is an extended version of the Carbon Bond 2005 (CB05) chemical mechanism as implemented in CTM Transport Model 5 (TM5). CB05 describes tropospheric chemistry with 54 species and 126 reactions. Wet deposition and lightning nitrogen monoxide (NO) emissions are modelled in C-IFS using the detailed input of the IFS physics package. A 1 year simulation by C-IFS, MOZART and the MACC re-analysis is evaluated against ozonesondes, carbon monoxide (CO) aircraft profiles, European surface observations of ozone (O3), CO, sulfur dioxide (SO2) and nitrogen dioxide (NO2) as well as satellite retrievals of CO, tropospheric NO2 and formaldehyde. Anthropogenic emissions from the MACC/CityZen (MACCity) inventory and biomass burning emissions from the Global Fire Assimilation System (GFAS) data set were used in the simulations by both C-IFS and MOZART. C-IFS (CB05) showed an improved performance with respect to MOZART for CO, upper tropospheric O3, and wintertime SO2, and was of a similar accuracy for other evaluated species. C-IFS (CB05) is about 10 times more computationally efficient than IFS-MOZART.
The Minnesota Adolescent Community Cohort Study: Design and Baseline Results
Forster, Jean; Chen, Vincent; Perry, Cheryl; Oswald, John; Willmorth, Michael
2014-01-01
The Minnesota Adolescent Community Cohort (MACC) Study is a population-based, longitudinal study that enrolled 3636 youth from Minnesota and 605 youth from comparison states, aged 12 to 16 years, in 2000–2001. Participants have been surveyed by telephone semi-annually about their tobacco-related attitudes and behaviors. The goals of the study are to evaluate the effects of the Minnesota Youth Tobacco Prevention Initiative and its shutdown on youth smoking patterns, and to better define the patterns of development of tobacco use in adolescents. A multilevel sample was constructed representing individuals, local jurisdictions and the entire state, and data are collected to characterize each of these levels. This paper presents the details of the multilevel study design. We also provide baseline information about MACC participants, including demographics and tobacco-related attitudes and behaviors. This paper describes smoking prevalence at the local level and compares MACC participants to the state as a whole. PMID:21360063
Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System
NASA Astrophysics Data System (ADS)
Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier
2013-04-01
The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.
Monitoring Air Quality over China: Evaluation of the modeling system of the PANDA project
NASA Astrophysics Data System (ADS)
Bouarar, Idir; Katinka Petersen, Anna; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Xuemei; Fan, Qi; Wang, Lili
2015-04-01
Air pollution has become a pressing problem in Asia, and specifically in China, due to the rapid increase in anthropogenic emissions driven by the growth of China's economy and rising demand for energy over the past decade. Observed levels of particulate matter and ozone regularly exceed World Health Organization (WHO) air quality guidelines in many parts of the country, leading to increased risk of respiratory illnesses and other health problems. The EU-funded project PANDA aims to establish a team of European and Chinese scientists to monitor air pollution over China and develop air quality indicators in support of European and Chinese policies. PANDA combines state-of-the-art air pollution modeling with space-based and surface observations of chemical species to improve methods for monitoring air quality. The modeling system of the PANDA project follows a downscaling approach: global models such as MOZART and the MACC system provide initial and boundary conditions to regional WRF-Chem and EMEP simulations over East Asia. WRF-Chem simulations at higher resolution (e.g. 20 km) are then performed over a smaller domain covering East China, and initial and boundary conditions from this run are used to perform simulations at a finer resolution (e.g. 5 km) over specific megacities such as Shanghai. Here we present results of model simulations for January and July 2010 performed during the first year of the project. We show an intercomparison of the global (MACC, EMEP) and regional (WRF-Chem) simulations and a comprehensive evaluation against satellite measurements (NO2, CO) and in situ data (O3, CO, NOx, PM10 and PM2.5) at several surface stations. Using the WRF-Chem model, we demonstrate that model performance is influenced not only by the resolution (e.g. 60 km, 20 km) but also by the emission inventories used (MACCity, HTAPv2), their resolution and diurnal variation, and the choice of initial and boundary conditions (e.g. MOZART, MACC analysis).
Prognostic Implications of Dual Platelet Reactivity Testing in Acute Coronary Syndrome.
de Carvalho, Leonardo P; Fong, Alan; Troughton, Richard; Yan, Bryan P; Chin, Chee-Tang; Poh, Sock-Cheng; Mejin, Melissa; Huang, Nancy; Seneviratna, Aruni; Lee, Chi-Hang; Low, Adrian F; Tan, Huay-Cheem; Chan, Siew-Pang; Frampton, Christopher; Richards, A Mark; Chan, Mark Y
2018-02-01
Studies on platelet reactivity (PR) testing commonly test PR only after percutaneous coronary intervention (PCI) has been performed. There are few data on pre- and post-PCI testing. Data on simultaneous testing of aspirin and adenosine diphosphate antagonist response are conflicting. We investigated the prognostic value of combined serial assessments of high on-aspirin PR (HASPR) and high on-adenosine diphosphate receptor antagonist PR (HADPR) in patients with acute coronary syndrome (ACS). HASPR and HADPR were assessed in 928 ACS patients before (initial test) and 24 hours after (final test) coronary angiography, with or without revascularization. Patients with HASPR on the initial test, compared with those without, had significantly higher intraprocedural thrombotic events (IPTE) (8.6 vs. 1.2%, p ≤ 0.001) and higher 30-day major adverse cardiovascular and cerebrovascular events (MACCE; 5.2 vs. 2.3%, p = 0.05), but not 12-month MACCE (13.0 vs. 15.1%, p = 0.50). Patients with initial HADPR, compared with those without, had significantly higher IPTE (4.4 vs. 0.9%, p = 0.004), but not 30-day (3.5 vs. 2.3%, p = 0.32) or 12-month MACCE (14.0 vs. 12.5%, p = 0.54). The c-statistic of the Global Registry of Acute Coronary Events (GRACE) score alone, GRACE score + ASPR test and GRACE score + ADPR test for discriminating 30-day MACCE was 0.649, 0.803 and 0.757, respectively. Final ADPR was associated with 30-day MACCE among patients with intermediate-to-high GRACE score (adjusted odds ratio [OR]: 4.50, 95% confidence interval [CI]: 1.14-17.66), but not low GRACE score (adjusted OR: 1.19, 95% CI: 0.13-10.79). In conclusion, both HASPR and HADPR predict ischaemic events in ACS. This predictive utility is time-dependent and risk-dependent. Schattauer GmbH Stuttgart.
2016-09-09
law enforcement detachment (USCG); LEO law enforcement operations; LOC line of communications; MACCS Marine air command and control system; MAS ... enemy command and control [C2], intelligence, fires, reinforcing units, lines of communications [LOCs], logistics, and other operational- and tactical- ... enemy naval, engineering, and personnel resources to the tasks of repairing and recovering damaged equipment, facilities, and LOCs. It can draw the
Wang, Ning; Zhang, Yang; Liang, Huaxin
2018-02-14
The dysregulation of microRNAs (miRNAs) expression is closely related with tumorigenesis and tumour development in glioblastoma (GBM). In this study, we found that miRNA-598 (miR-598) expression was significantly downregulated in GBM tissues and cell lines. Restoring miR-598 expression inhibited cell proliferation and invasion in GBM. Moreover, we validated that metastasis associated in colon cancer-1 (MACC1) is a novel target of miR-598 in GBM. Recovered MACC1 expression reversed the inhibitory effects of miR-598 overexpression on GBM cells. In addition, miR-598 overexpression suppressed the Met/AKT pathway activation in GBM. Our results provided compelling evidence that miR-598 serves tumour suppressive roles in GBM and that its anti-oncogenic effects are mediated chiefly through the direct suppression of MACC1 expression and regulation of the Met/AKT signalling pathway. Therefore, miR-598 is a potential target in the treatment of GBM.
Frye, Mark A; Hinton, David J; Karpyak, Victor M; Biernacka, Joanna M; Gunderson, Lee J; Feeder, Scott E; Choi, Doo-Sup; Port, John D
2016-12-01
Although the precise drug mechanism of action of acamprosate remains unclear, its antidipsotropic effect is mediated in part through glutamatergic neurotransmission. We evaluated the effect of 4 weeks of acamprosate treatment in a cohort of 13 subjects with alcohol dependence (confirmed by a structured interview, Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) on proton magnetic resonance spectroscopy glutamate levels in the midline anterior cingulate cortex (MACC). We compared levels of metabolites with a group of 16 healthy controls. The Pennsylvania Alcohol Craving Scale was used to assess craving intensity. At baseline, before treatment, the mean cerebrospinal fluid-corrected MACC glutamate (Glu) level was significantly elevated in subjects with alcohol dependence compared with controls (P = 0.004). Four weeks of acamprosate treatment reduced glutamate levels (P = 0.025), an effect that was not observed in subjects who did not take acamprosate. At baseline, there was a significant positive correlation between cravings, measured by the Pennsylvania Alcohol Craving Scale, and MACC (Glu) levels (P = 0.019). Overall, these data would suggest a normalizing effect of acamprosate on a hyperglutamatergic state observed in recently withdrawn patients with alcohol dependence and a positive association between MACC glutamate levels and craving intensity in early abstinence. Further research is needed to evaluate the use of these findings for clinical practice, including monitoring of craving intensity and individualized selection of treatment with antidipsotropic medications in subjects with alcohol dependence.
2010-06-01
mutation signature is prognostic in EGFR wild-type lung adenocarcinomas and identifies Metastasis associated in colon cancer 1 (MACC1) as an EGFR ... T790M mutation (N=7, blue curve) (AUC: area under the curve). Figure 3. EGFR dependency signature is a favorable prognostic factor. EGFR index ... developed. The signature was shown to be prognostic regardless of EGFR status. The results also suggest MACC1 to be a regulator of MET in NSCLC
NASA Technical Reports Server (NTRS)
Kulawik, Susan; Wunch, Debra; O’Dell, Christopher; Frankenberg, Christian; Reuter, Maximilian; Chevallier, Frederic; Oda, Tomohiro; Sherlock, Vanessa; Buchwitz, Michael; Osterman, Greg;
2016-01-01
Consistent validation of satellite CO2 estimates is a prerequisite for using multiple satellite CO2 measurements for joint flux inversion and for establishing an accurate long-term atmospheric CO2 data record. Harmonizing satellite CO2 measurements is particularly important, since differences in instruments, observing geometries, sampling strategies, etc. imbue different measurement characteristics in the various satellite CO2 data products. We focus on validating model and satellite observation attributes that impact flux estimates and CO2 assimilation, including accurate error estimates, correlated and random errors, overall biases, biases by season and latitude, the impact of coincidence criteria, validation of seasonal cycle phase and amplitude, yearly growth, and daily variability. We evaluate the dry-air mole fraction (XCO2) for the Greenhouse gases Observing SATellite (GOSAT) (Atmospheric CO2 Observations from Space, ACOS b3.5) and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) (Bremen Optimal Estimation DOAS, BESD v2.00.08), as well as the CarbonTracker (CT2013b) simulated CO2 mole fraction fields and the Monitoring Atmospheric Composition and Climate (MACC) CO2 inversion system (v13.1), and compare these to Total Carbon Column Observing Network (TCCON) observations (GGG2012/2014). We find standard deviations of 0.9, 0.9, 1.7, and 2.1 parts per million vs. TCCON for CT2013b, MACC, GOSAT, and SCIAMACHY, respectively, with the single-observation errors 1.9 and 0.9 times the predicted errors for GOSAT and SCIAMACHY, respectively. We quantify how satellite error drops with data averaging by interpreting it according to error^2 = a^2 + b^2/n, with n the number of observations averaged, a the systematic (correlated) errors, and b the random (uncorrelated) errors. a and b are estimated for each satellite, coincidence criterion, and hemisphere.
Biases at individual stations have year-to-year variability of 0.3 parts per million, with biases larger than the TCCON predicted bias uncertainty of 0.4 parts per million at many stations. We find that GOSAT and CT2013b under-predict the seasonal cycle amplitude in the Northern Hemisphere (NH) between 46 and 53 degrees North latitude, MACC over-predicts between 26 and 37 degrees North latitude, and CT2013b under-predicts the seasonal cycle amplitude in the Southern Hemisphere (SH). The seasonal cycle phase indicates whether a data set or model lags another data set in time. We find that the GOSAT measurements improve the seasonal cycle phase substantially over the prior while SCIAMACHY measurements improve the phase significantly for just two of seven sites. The models reproduce the measured seasonal cycle phase well except for at Lauder_125HR (CT2013b) and Darwin (MACC). We compare the variability within 1 day between TCCON and models in June-July-August; there is correlation between 0.2 and 0.8 in the NH, with models showing 10-50 percent the variability of TCCON at different stations and CT2013b showing more variability than MACC. This paper highlights findings that provide inputs to estimate flux errors in model assimilations, and places where models and satellites need further investigation, e.g., the SH for models and 45-67 degrees North latitude for GOSAT and CT2013b.
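The error-averaging relation above (error^2 = a^2 + b^2/n) implies that averaging beats down only the random component, with the total error flooring at the systematic term a. A short Python sketch with hypothetical error components (the 0.8/1.6 ppm values are illustrative, not the paper's fitted estimates):

```python
from math import sqrt

def averaged_error(a, b, n):
    """Total error after averaging n soundings: the systematic part a does not
    average down, while the random part b shrinks as 1/sqrt(n)."""
    return sqrt(a ** 2 + b ** 2 / n)

# Hypothetical error components (ppm) for a satellite XCO2 product
a, b = 0.8, 1.6
for n in (1, 10, 100):
    print(f"n = {n:3d}: {averaged_error(a, b, n):.2f} ppm")
```

Fitting this curve to observed scatter as a function of n is what lets the correlated and uncorrelated error components be separated.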
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Scott; Bixler, Nathan E.; McFadden, Katherine Letizia
In 1973 the U.S. Environmental Protection Agency (EPA) developed SecPop to calculate population estimates to support a study on air quality. The Nuclear Regulatory Commission (NRC) adopted this program to support siting reviews for nuclear power plant construction and license applications. Currently SecPop is used to prepare site data input files for offsite consequence calculations with the MELCOR Accident Consequence Code System (MACCS). SecPop enables the use of site-specific population, land use, and economic data for a polar grid defined by the user. Updated versions of SecPop have been released to use U.S. decennial census population data: SECPOP90 was released in 1997 to use 1990 population and economic data, and SECPOP2000 was released in 2003 to use 2000 population data and 1997 economic data. This report describes the current code version, SecPop 4.3.0, which uses 2010 population data and both 2007 and 2012 economic data; it is also compatible with 2000 census and 2002 economic data. This report contains guidance for the installation and use of the code as well as a description of the theory, models, and algorithms involved. Appendices describe the development of the 2010 census file, the 2007 county file, and the 2012 county file; a final appendix describes the validation assessments performed.
Morice, Marie-Claude; Feldman, Ted E E; Mack, Michael J; Ståhle, Elisabeth; Holmes, David R; Colombo, Antonio; Morel, Marie-Angèle; van den Brand, Marcel; Serruys, Patrick W; Mohr, Friedrich; Carrié, Didier; Fournial, Gérard; James, Stefan; Leadley, Katrin; Dawkins, Keith D; Kappetein, A Pieter
2011-10-30
The SYNTAX-LE MANS substudy prospectively evaluated 15-month angiographic and clinical outcomes in patients with treated left main (LM) disease. In the SYNTAX trial, 1,800 patients with three-vessel and/or LM disease were randomised to either CABG or PCI; of these, 271 LM patients were prospectively assigned to receive a 15-month angiogram. The primary endpoint for the CABG arm was the ratio of ≥50% to <100% obstructed/occluded grafts bypassing LM lesions to the number placed. The primary endpoint for the PCI arm was the proportion of patients with ≤50% diameter stenosis ('patent' stents) of treated LM lesions. Per protocol, no formal comparison between CABG and PCI arms was intended based on the differing primary endpoints. Available 15-month angiograms were analysed for 114 CABG and 149 PCI patients. At 15 months, 9.9% (26/263) of CABG grafts were 100% occluded and an additional 5.7% (15/263) were ≥50% to <100% occluded. Overall, 27.2% (31/114) of patients had ≥1 obstructed/occluded graft. The 15-month CABG MACCE rate was 8.8% (10/114) and MACCE at 15 months was not significantly associated with graft obstruction/occlusion (p=0.85). In the PCI arm, 92.4% (134/145) of patients had ≤50% diameter LM stenosis at 15 months (89.7% [87/97] distal LM lesions and 97.9% [47/48] non-distal LM lesions). The 15-month PCI MACCE rate was 12.8% (20/156) and this was significantly associated with lack of stent patency at 15 months (p<0.001), mainly due to repeat revascularisation. At 15 months, 15.6% (41/263) of grafts were at least 50% obstructed but this was not significantly associated with MACCE; 92.4% (134/145) of patients had stents that remained patent at 15 months, and stent restenosis was significantly associated with MACCE, predominantly due to revascularisation.
Gyöngyösi, Mariann; Christ, Günter; Lang, Irene; Kreiner, Gerhard; Sochor, Heinz; Probst, Peter; Neunteufl, Thomas; Badr-Eslam, Rosa; Winkler, Susanne; Nyolczas, Noemi; Posa, Aniko; Leisch, Franz; Karnik, Ronald; Siostrzonek, Peter; Harb, Stefan; Heigert, Matthias; Zenker, Gerald; Benzer, Werner; Bonner, Gerhard; Kaider, Alexandra; Glogar, Dietmar
2009-08-01
The multicenter AUTAX (Austrian Multivessel TAXUS-Stent) registry investigated the 2-year clinical/angiographic outcomes of patients with multivessel coronary artery disease after implantation of TAXUS Express stents (Boston Scientific, Natick, Massachusetts) in a "real-world" setting. The AUTAX registry included patients with 2- or 3-vessel disease, with/without previous percutaneous coronary intervention (PCI) and concomitant surgery. Patients (n = 441, 64 +/- 12 years, 78% men; n = 1,080 lesions) in whom complete revascularization by PCI was considered possible were prospectively included. Median clinical follow-up was 753 (quartiles 728 to 775) days after PCI in 95.7% of patients, with control angiography in 78% at 6 months. The primary end point was the composite of major adverse cardiac (nonfatal acute myocardial infarction [AMI], all-cause mortality, target lesion revascularization [TLR]) and cerebrovascular events (MACCE). Potential risk factor effects on 2-year MACCE were evaluated using Cox regression. Complete revascularization was successful in 90.5%, with left main PCI in 6.8%. Rates of acute, subacute, and late stent thrombosis were 0.7%, 0.5%, and 0.5%. Two-year follow-up identified AMI (1.4%), death (3.6%), stroke (0.2%), and TLR (13.1%), for a composite MACCE rate of 18.3%. The binary restenosis rate was 10.8%. The median cumulative SYNTAX score was 23.0 (range 12.0 to 56.5). The SYNTAX score did not predict TLR or MACCE, owing to the lack of scoring of restenotic or bypass stenoses (29.8%). Age (hazard ratio [HR]: 1.03, p = 0.019) and acute coronary syndrome (HR: 2.1, p = 0.001) were significant predictors of 2-year MACCE. Incomplete revascularization predicted death or AMI (HR: 3.84, p = 0.002). With the aim of complete revascularization, TAXUS stent implantation can be safe for patients with multivessel disease.
The AUTAX registry including patients with post-PCI lesions provides additional information to the SYNTAX (Synergy Between Percutaneous Coronary Intervention With TAXUS and Cardiac Surgery) study. (Austrian Multivessel TAXUS-Stent Registry; NCT00738686).
Yatsu, Shoichiro; Naito, Ryo; Kasai, Takatoshi; Matsumoto, Hiroki; Shitara, Jun; Shimizu, Megumi; Murata, Azusa; Kato, Takao; Suda, Shoko; Hiki, Masaru; Sai, Eiryu; Miyauchi, Katsumi; Daida, Hiroyuki
2018-03-31
Sleep-disordered breathing (SDB) has been recognized as an important risk factor for coronary artery disease (CAD). However, SDB is often not fully evaluated because access to formal sleep studies is limited. Nocturnal pulse oximetry has been suggested to be a useful tool for evaluating SDB. Therefore, the aim of this study was to investigate the influence of SDB, assessed by nocturnal pulse oximetry, on clinical outcomes in patients who underwent percutaneous coronary intervention (PCI). We conducted a prospective, multicenter, observational cohort study in which SDB was assessed by finger pulse oximetry in patients who underwent PCI from January 2014 to December 2016. SDB was defined as a 4% oxygen desaturation index (ODI) of 5 or higher. The primary endpoint was major adverse cardiac or cerebrovascular events (MACCE), defined as a composite of all-cause mortality, acute coronary syndrome, and/or stroke. Of 539 patients, 296 (54.9%) had SDB. MACCE occurred in 32 patients (5.8%) during a median follow-up of 1.9 years. The cumulative incidence of MACCE was significantly higher in patients with SDB (P = 0.0134). In the stepwise multivariable Cox proportional hazards model, the presence of SDB was a significant predictor of MACCE (hazard ratio 2.26; 95% confidence interval 1.05-5.4, P = 0.036). SDB determined by nocturnal pulse oximetry was associated with worse clinical outcomes in patients who underwent PCI. Screening for SDB with nocturnal pulse oximetry is therefore important for risk stratification in patients with CAD.
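The 4% oxygen desaturation index used above counts episodes per hour in which SpO2 falls at least 4 points below the pre-event level. A deliberately simplified Python sketch on a synthetic trace (real clinical scoring of oximetry traces involves artifact rejection and stricter event and recovery definitions):

```python
def count_desaturations(spo2, drop=4.0):
    """Count episodes where SpO2 falls >= `drop` points below the level
    preceding the event and later recovers (simplified illustration)."""
    events = 0
    baseline = spo2[0]
    in_event = False
    for v in spo2:
        if not in_event:
            if baseline - v >= drop:
                events += 1
                in_event = True
            else:
                baseline = v  # track the current level outside events
        elif v > baseline - drop:
            in_event = False  # recovered above the event threshold
            baseline = v
    return events

def odi(spo2, samples_per_hour):
    """Desaturation events per hour of recording."""
    return count_desaturations(spo2) / (len(spo2) / samples_per_hour)

# One synthetic 'hour' of 8 samples containing two >= 4-point dips
trace = [97, 96, 92, 91, 96, 97, 93, 96]
print(odi(trace, samples_per_hour=8))
```

Under the study's definition, a whole-night recording with an ODI of 5 or more would be classified as SDB.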
Access to MISR Aerosol Data and Imagery for the GoMACCS Field Study
NASA Astrophysics Data System (ADS)
Ritchey, N.; Watkinson, T.; Davis, J.; Walter, J.; Protack, S.; Matthews, J.; Smyth, M.; Rheingans, B.; Gaitley, B.; Ferebee, M.; Haberer, S.
2006-12-01
NASA Langley Atmospheric Science Data Center (ASDC) and NASA Jet Propulsion Laboratory (JPL) Multi-angle Imaging SpectroRadiometer (MISR) teams collaborated to provide special data products and images in an innovative approach for the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) field campaign. GoMACCS was an intensive field study focused on providing a better understanding of the sources and atmospheric processes responsible for the formation and distribution of ozone and aerosols in the atmosphere, the influence these species have on the radiative forcing of regional and global climate, and their impact on human health and regional haze. The study area encompassed Texas and the northwestern Gulf of Mexico. Numerous U.S. Government agencies, universities and commercial entities participated in the field campaign, which took place from August through September 2006. Aerosol and meteorological measurements were provided by a network of instruments on land, buoys and ships, by airborne in situ and remote instruments, and by satellite retrievals. MISR's role in GoMACCS was to provide satellite retrievals of aerosol and cloud properties and imagery as quickly as possible after data acquisition. The diverse group of scientific participants created unique opportunities for ASDC and MISR to develop special data products and images that were easily accessible by all participants. Examples of the data products, images and access methods, as well as the data and imagery flow, will be presented. Additional information about ASDC and MISR is available from the following web sites: http://eosweb.larc.nasa.gov and http://www-misr.jpl.nasa.gov/.
NASA Astrophysics Data System (ADS)
Gaudel, A.; Clark, H.; Thouret, V.; Eskes, H.; Huijnen, V.; Nedelec, P.
2013-12-01
Tropospheric ozone is one of the most important trace gases in the atmosphere. It plays a major role in the chemistry of the troposphere by exerting a strong influence on the concentrations of oxidants such as the hydroxyl radical (OH), and it is the third most important greenhouse gas after carbon dioxide and methane. Its radiative impact is of particular importance in the Upper Troposphere / Lower Stratosphere (UTLS), the most critical region with regard to climate change. Carbon monoxide (CO), originating from all types of combustion, is one of the major ozone precursors in the troposphere. In the UTLS, it also has implications for stratospheric chemistry and indirect radiative forcing effects (as a chemical precursor of CO2 and O3). Assessing the global distribution (and possibly trends) of O3 and CO in this region of the atmosphere, combining high-resolution in situ data and the most appropriate global 3D model to further quantify the different sources and their origins, is therefore of particular interest. This is one of the objectives of the MOZAIC-IAGOS (http://www.iagos.fr) and MACC-II (http://www.gmes-atmosphere.eu) European programs. The aircraft of the MOZAIC program have collected O3 and CO data simultaneously and regularly all over the world since the end of 2001. Most of the data are recorded in northern mid-latitudes, in the UTLS region (as commercial aircraft cruise at altitudes between 9 and 12 km). MACC-II aims at providing information services covering air quality, climate forcing, stratospheric ozone, UV radiation and solar-energy resources, using near-real-time analysis and forecasting products, and reanalyses. The validation reports of the MACC models are published regularly (http://www.gmes-atmosphere.eu/services/gac/nrt/ and http://www.gmes-atmosphere.eu/services/gac/reanalysis/). 
We will present and discuss the performance of the MACC reanalysis, in which the ECMWF Integrated Forecasting System (IFS) is coupled to the CTM MOZART with 4D-Var data assimilation, in reproducing ozone and CO in the UTLS, as evaluated against the MOZAIC observations between 2003 and 2008. In the UT, the model tends to overestimate O3 by about 30-40% in the mid-latitudes and polar regions. This applies broadly to all seasons but is more marked in DJF and MAM. In tropical regions, the model underestimates UT ozone by about 20% in all seasons, but the bias is stronger in JJA. Upper-tropospheric CO is underestimated globally by the model in all seasons, by 10-20%. In the southern hemisphere, this is particularly the case in SON in the regions of wildfires in South Africa. In the northern hemisphere, the zonal gradient of CO between the US, Europe and Asia is not well captured by the model, especially in MAM.
Lipoprotein(a) levels predict adverse vascular events after acute myocardial infarction.
Mitsuda, Takayuki; Uemura, Yusuke; Ishii, Hideki; Takemoto, Kenji; Uchikawa, Tomohiro; Koyasu, Masayoshi; Ishikawa, Shinji; Miura, Ayako; Imai, Ryo; Iwamiya, Satoshi; Ozaki, Yuta; Kato, Tomohiro; Shibata, Rei; Watarai, Masato; Murohara, Toyoaki
2016-12-01
Lipoprotein(a) [Lp(a)], which is genetically determined, has been reported to be an independent risk factor for atherosclerotic vascular disease. However, the prognostic value of Lp(a) for secondary vascular events in patients with established coronary artery disease has not been fully elucidated. This 3-year observational study included a total of 176 patients with ST-elevation myocardial infarction (STEMI) whose Lp(a) levels were measured within 24 h after primary percutaneous coronary intervention. We divided the enrolled patients into two groups according to Lp(a) level and investigated the association between Lp(a) and the incidence of major adverse cardiac and cerebrovascular events (MACCE). Kaplan-Meier analysis demonstrated that patients with higher Lp(a) levels had a higher incidence of MACCE than those with lower Lp(a) levels (log-rank P = 0.034). Multivariate Cox regression analysis revealed that Lp(a) levels were independently correlated with the occurrence of MACCE after adjusting for other classical risk factors for atherosclerotic vascular disease (hazard ratio 1.030, 95 % confidence interval: 1.011-1.048, P = 0.002). In receiver-operating characteristic curve analysis, the cutoff value that maximized the predictive power of Lp(a) was 19.0 mg/dl (area under the curve = 0.674, sensitivity 69.2 %, specificity 62.0 %). Evaluation of Lp(a) in addition to the established coronary risk factors improved their predictive value for the occurrence of MACCE. In conclusion, Lp(a) levels at admission independently predict secondary vascular events in patients with STEMI. Lp(a) might provide useful information for the development of secondary prevention strategies in patients with myocardial infarction.
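The cutoff search behind a reported threshold like 19.0 mg/dl is typically a maximization of Youden's index over the ROC curve. A minimal sketch, with invented sample values (the function name and all data below are hypothetical, not the study's):

```python
def best_cutoff(values_event, values_no_event):
    """Scan candidate thresholds and return the one maximizing
    Youden's J = sensitivity + specificity - 1 (a 'positive' test
    result is a value at or above the cutoff)."""
    best = None
    for c in sorted(set(values_event) | set(values_no_event)):
        sens = sum(v >= c for v in values_event) / len(values_event)
        spec = sum(v < c for v in values_no_event) / len(values_no_event)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best

# Invented Lp(a) values (mg/dl), for illustration only
lpa_with_macce = [25, 30, 19, 40, 12]
lpa_without_macce = [10, 12, 18, 8, 14, 16]
j, cutoff, sens, spec = best_cutoff(lpa_with_macce, lpa_without_macce)
# For this toy data the search lands on 19 mg/dl, echoing the study's cutoff
```

Real analyses would add confidence intervals and handle ties more carefully; the exhaustive scan over observed values is the essential idea.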
NASA Astrophysics Data System (ADS)
Sheel, Varun; Sahu, L. K.; Kajino, M.; Deushi, M.; Stein, O.; Nedelec, P.
2014-07-01
The spatial and temporal variations of carbon monoxide (CO) are analyzed over a tropical urban site, Hyderabad (17°27'N, 78°28'E) in central India. We have used vertical profiles from Measurement of Ozone and Water Vapor by Airbus In-service Aircraft (MOZAIC) observations, the Monitoring Atmospheric Composition and Climate (MACC) reanalysis, and two chemical transport model simulations (Model for Ozone And Related Tracers (MOZART) and the MRI global Chemistry Climate Model (MRI-CCM2)) for the years 2006-2008. In the lower troposphere, the CO mixing ratio showed strong seasonality, with higher levels (>300 ppbv) during the winter and premonsoon seasons associated with a stable anticyclonic circulation, while lower CO values (as low as 100 ppbv) were observed in the monsoon season. In the planetary boundary layer (PBL), the seasonal distribution of CO shows the impact of both local meteorology and emissions. While PBL CO is predominantly influenced by strong winds, bringing regional background air from marine and biomass burning regions, under calm conditions CO levels are elevated by local emissions. In the free troposphere, on the other hand, the seasonal variation reflects the impact of long-range transport associated with the Intertropical Convergence Zone and biomass burning. The interannual variations were mainly due to the transition from El Niño to La Niña conditions. The overall modified normalized mean biases (normalized using the mean of the observed and model values) with respect to the observed CO profiles were lower for the MACC reanalysis than for the MOZART and MRI-CCM2 models. CO in the PBL was consistently underestimated by the MACC reanalysis in all seasons, while MOZART and MRI-CCM2 showed both positive and negative biases depending on the season.
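The modified normalized mean bias (MNMB) used to rank the models can be computed as below. This is the standard MNMB definition from the air-quality evaluation literature; the abstract does not spell out its exact formula, so treat the expression as an assumption:

```python
def mnmb(model, obs):
    """Modified normalized mean bias, MNMB = (2/N) * sum((m-o)/(m+o)).

    Bounded in [-2, 2] and symmetric with respect to over- and
    underestimation, which is why it is popular for evaluating
    chemistry models against observations.
    """
    pairs = list(zip(model, obs))
    return 2.0 / len(pairs) * sum((m - o) / (m + o) for m, o in pairs)

# A model reading 110 ppbv CO against observed 100 ppbv everywhere
bias = mnmb([110, 110, 110], [100, 100, 100])   # ~ +0.095, i.e. about +10%
```

A perfect model gives MNMB = 0, and a consistent underestimate (like the MACC reanalysis in the PBL here) gives a negative value.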
Kim, Yong Hoon; Her, Ae-Young; Kim, Byeong-Keuk; Shin, Dong-Ho; Kim, Jung-Sun; Ko, Young-Guk; Choi, Donghoon; Hong, Myeong-Ki; Jang, Yangsoo
2017-01-01
Objective: The appropriate selection of elderly patients for revascularization has become increasingly important because these patients are more likely to experience major adverse cardiac or cerebrovascular events after percutaneous coronary intervention (PCI). The objective of this study was to determine important independent risk factors for predicting clinical outcomes in elderly patients after successful PCI, particularly in a South Korean population. Methods: This was a prospective, multicenter, observational cross-sectional study. A total of 1,884 consecutive patients who underwent successful PCI with Nobori® Biolimus A9-eluting stents were enrolled between April 2010 and December 2012. They were divided into two groups according to age: patients <75 years old (younger patient group) and ≥75 years old (elderly patient group). The primary endpoint was major adverse cardiac or cerebrovascular events (MACCE) at 1 year after the index PCI. Results: The 1-year cumulative incidence of MACCE (12.9% vs. 4.3%, p<0.001) and total death (7.1% vs. 1.5%, p<0.001) was significantly higher in the elderly group than in the younger group. Previous cerebrovascular disease was significantly correlated with MACCE in elderly patients 1 year after PCI (hazard ratio, 2.804; 95% confidence interval, 1.290–6.093; p=0.009). Conclusion: Previous cerebrovascular disease is an important independent predictor of MACCE in elderly patients at 1 year after PCI with Nobori® Biolimus A9-eluting stents, particularly in a South Korean population. Elderly patients with previous cerebrovascular disease may therefore benefit from careful PCI with intensive monitoring and management. PMID:28554989
Papachristidis, Alexandros; Roper, Damian; Cassar Demarco, Daniela; Tsironis, Ioannis; Papitsas, Michael; Byrne, Jonathan; Alfakih, Khaled; Monaghan, Mark J
2016-12-01
In this study, we aimed to reassess the prognostic value of stress echocardiography (SE) in a contemporary population and to evaluate the clinical significance of limited apical ischaemia, which has not been previously studied. We included 880 patients who underwent SE. Follow-up data with regard to MACCE (cardiac death, myocardial infarction, any repeat revascularisation and cerebrovascular accident) were collected over 12 months after the SE. Mortality data were recorded over 27.02 ± 4.6 months (5.5-34.2 months). We sought to investigate the predictors of MACCE and all-cause mortality. In a multivariable analysis, only a positive SE result was predictive of MACCE (HR, 3.71; P = 0.012). The positive SE group was divided into 2 subgroups: (a) inducible ischaemia limited to the apical segments ('apical ischaemia') and (b) ischaemia in any other segments with or without apical involvement ('other positive'). The subgroup of patients with apical ischaemia had a significantly worse outcome than the patients with a negative SE (HR, 3.68; P = 0.041) but a similar outcome to the 'other positive' subgroup. However, when these patients were investigated with invasive coronary angiography, the prevalence of coronary artery disease (CAD) and the rate of revascularisation were considerably lower. Only age (HR, 1.07; P < 0.001) was correlated with all-cause mortality. SE remains a strong predictor of patient outcome in a contemporary population. A positive SE result was the only predictor of 12-month MACCE. The subgroup of patients with limited apical ischaemia had a similar outcome to patients with ischaemia in other segments despite a lower prevalence of CAD and a lower revascularisation rate. © 2016 The authors.
Kappetein, Arie Pieter; Head, Stuart J; Morice, Marie-Claude; Banning, Adrian P; Serruys, Patrick W; Mohr, Friedrich-Wilhelm; Dawkins, Keith D; Mack, Michael J
2013-05-01
This prespecified subgroup analysis examined the effect of diabetes on left main coronary disease (LM) and/or three-vessel disease (3VD) in patients treated with percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) in the SYNTAX trial. Patients (n = 1800) with LM and/or 3VD were randomized to receive either PCI with TAXUS Express paclitaxel-eluting stents or CABG. Five-year outcomes in subgroups with (n = 452) or without (n = 1348) diabetes were examined: major adverse cardiac or cerebrovascular events (MACCE), the composite safety end-point of all-cause death/stroke/myocardial infarction (MI) and individual MACCE components death, stroke, MI and repeat revascularization. Event rates were estimated with Kaplan-Meier analyses. In diabetic patients, 5-year rates were significantly higher for PCI vs CABG for MACCE (PCI: 46.5% vs CABG: 29.0%; P < 0.001) and repeat revascularization (PCI: 35.3% vs CABG: 14.6%; P < 0.001). There was no difference in the composite of all-cause death/stroke/MI (PCI: 23.9% vs CABG: 19.1%; P = 0.26) or individual components all-cause death (PCI: 19.5% vs CABG: 12.9%; P = 0.065), stroke (PCI: 3.0% vs CABG: 4.7%; P = 0.34) or MI (PCI: 9.0% vs CABG: 5.4%; P = 0.20). In non-diabetic patients, rates with PCI were also higher for MACCE (PCI: 34.1% vs CABG: 26.3%; P = 0.002) and repeat revascularization (PCI: 22.8% vs CABG: 13.4%; P < 0.001), but not for the composite end-point of all-cause death/stroke/MI (PCI: 19.8% vs CABG: 15.9%; P = 0.069). There were no differences in all-cause death (PCI: 12.0% vs CABG: 10.9%; P = 0.48) or stroke (PCI: 2.2% vs CABG: 3.5%; P = 0.15), but rates of MI (PCI: 9.9% vs CABG: 3.4%; P < 0.001) were significantly increased in the PCI arm in non-diabetic patients. In both diabetic and non-diabetic patients, PCI resulted in higher rates of MACCE and repeat revascularization at 5 years. 
Although PCI is a potential treatment option in patients with less-complex lesions, CABG should be the revascularization option of choice for patients with more-complex anatomic disease, especially with concurrent diabetes.
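The event rates quoted in the SYNTAX subgroup analysis come from Kaplan-Meier analyses. The estimator itself can be sketched in a few lines of Python (a minimal illustration with toy data, not the trial's actual analysis code):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from follow-up data.

    times[i]  - follow-up duration for patient i
    events[i] - 1 if the endpoint (e.g. MACCE) occurred, 0 if censored
    Returns (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data[i:] if tt == t]   # all patients at time t
        deaths = sum(tied)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk  # product-limit step
            curve.append((t, surv))
        n_at_risk -= len(tied)
        i += len(tied)
    return curve

# Toy 5-patient cohort: events at years 1, 2 and 3, censoring at 2 and 4
km_curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

The "5-year rate" quoted for an endpoint is then 1 minus the survival value of the corresponding curve at 5 years; censored patients contribute to the risk set only while under observation.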
NASA Astrophysics Data System (ADS)
Massart, S.; Agusti-Panareda, A.; Aben, I.; Butz, A.; Chevallier, F.; Crevosier, C.; Engelen, R.; Frankenberg, C.; Hasekamp, O.
2014-06-01
The Monitoring Atmospheric Composition and Climate Interim Implementation (MACC-II) delayed-mode (DM) system has been producing an atmospheric methane (CH4) analysis 6 months behind real time since June 2009. This analysis used to rely on the assimilation of the CH4 product from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) instrument onboard Envisat. Recently, the Laboratoire de Météorologie Dynamique (LMD) CH4 products from the Infrared Atmospheric Sounding Interferometer (IASI) and the SRON Netherlands Institute for Space Research CH4 products from the Thermal And Near-infrared Sensor for carbon Observation (TANSO) were added to the DM system. With the loss of Envisat in April 2012, the DM system now has to rely on the assimilation of methane data from TANSO and IASI. This paper documents the impact of this change in the observing system on the methane tropospheric analysis. It is based on four experiments: one free run and three analyses from, respectively, the assimilation of SCIAMACHY, TANSO, and a combination of TANSO and IASI CH4 products in the MACC-II system. The period between December 2010 and April 2012 is studied. The SCIAMACHY experiment globally underestimates tropospheric methane by 35 parts per billion (ppb) compared to the HIAPER Pole-to-Pole Observations (HIPPO) data and by 28 ppb compared to the Total Carbon Column Observing Network (TCCON) data, while the free run shows an underestimation of 5 ppb and 1 ppb against the same HIPPO and TCCON data, respectively. The assimilated TANSO product changed in October 2011 from version v.1 to version v.2.0. The analysis of version v.1 globally underestimates tropospheric methane by 18 ppb compared to the HIPPO data and by 15 ppb compared to the TCCON data. In contrast, the analysis of version v.2.0 globally overestimates the column by 3 ppb. 
When the high-density IASI data are added in the tropical region between 30° N and 30° S, their impact is mainly positive, and it is more pronounced when combined with version v.2.0 of the TANSO products. The resulting analysis globally underestimates the column-averaged dry-air mole fraction of methane (xCH4) by just under 1 ppb on average compared to the TCCON data, whereas in the tropics it overestimates xCH4 by about 3 ppb. The random error is estimated to be less than 7 ppb when compared to TCCON data.
PsyScript: a Macintosh application for scripting experiments.
Bates, Timothy C; D'Oliveiro, Lawrence
2003-11-01
PsyScript is a scriptable application that allows users to describe experiments in Apple's compiled, high-level, object-oriented AppleScript language while still supporting millisecond or better within-trial event timing (delays can be specified in milliseconds or in screen refreshes, and PsyScript can wait on external I/O, such as eye movement fixations). Because AppleScript is object oriented and system-wide, PsyScript experiments support complex branching, code reuse, and integration with other applications. Included AppleScript-based libraries support file handling, stimulus randomization and sampling, and more specialized tasks such as adaptive testing. Advanced features include support for the BBox serial-port button box and a low-cost USB-based digital I/O card for millisecond timing; recording of any number and type of responses within a trial, including novel responses such as graphics tablet drawing; use of the Macintosh sound facilities to provide an accurate voice key and to save voice responses to disk; scriptable image creation; flicker-free animation; and gaze-dependent masking. The application is open source, allowing researchers to enhance the feature set and verify internal functions. Both the application and the source are available for free download at www.maccs.mq.edu.au/~tim/psyscript/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected inputs. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
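The parameter-importance screening described here can be illustrated with a toy Monte Carlo study: sample the uncertain inputs, run a stand-in model, and rank inputs by rank correlation with the output. Everything below (the stand-in model, the variable names, the coefficients) is invented for illustration and is not the SOARCA methodology itself:

```python
import random

def ranks(xs):
    """0-based rank transform (ties not handled; inputs are continuous)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Toy stand-in for the coupled accident-progression/consequence model:
# the output depends strongly on deposition velocity and only weakly
# on a second uncertain input (coefficients are invented).
random.seed(0)
dep_velocity = [random.uniform(0.5, 2.0) for _ in range(500)]
other_input = [random.uniform(0.5, 2.0) for _ in range(500)]
dose = [3.0 * v + 0.2 * o + random.gauss(0.0, 0.1)
        for v, o in zip(dep_velocity, other_input)]

# Rank the inputs by rank correlation with the output
importance = {"dep_velocity": spearman(dep_velocity, dose),
              "other_input": spearman(other_input, dose)}
```

Rank-based measures like this are a common screening step in sampling-based uncertainty analyses because they are robust to monotone nonlinearity in the model response.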
Kitchener, Henry C; Gittins, Matthew; Desai, Mina; Smith, John H F; Cook, Gary; Roberts, Chris; Turnbull, Lesley
2015-03-01
Liquid-based cytology (LBC) for cervical screening would benefit from laboratory practice guidelines that define specimen adequacy for reporting of slides. The evidence base required to define cell adequacy should incorporate both ThinPrep™ (TP; Hologic, Inc., Bedford, MA, USA) and SurePath™ (SP; BD Diagnostics, Burlington, NC, USA), the two LBC systems used in the UK cervical screening programmes. The objectives of this study were to determine (1) current practice for reporting LBC in England, Wales and Scotland, (2) a reproducible method for cell counting, (3) the cellularity of slides classified as inadequate, negative or abnormal and (4) the impact of varying cellularity on the likelihood of detecting cytological abnormalities. The study involved four separate arms to pursue each of the four objectives. (1) A questionnaire survey of laboratories was conducted. (2) A standard counting protocol was developed and used by three experienced cytopathologists to determine a reliable and reproducible cell counting method. (3) Slide sets which included a range of cytological abnormalities were each sent to three laboratories for cell counting to study the correlation between cell counts and reported cytological outcomes. (4) Dilution of LBC samples by fluid only (unmixed) or by dilution with a sample containing normal cells (mixed) was performed to study the impact on reporting of reducing either the total cell count or the relative proportion of abnormal to normal cells. The study was conducted within the cervical screening programmes in England, Wales and Scotland, using routinely obtained cervical screening samples, and in 56 participating NHS cervical cytology laboratories. The study involved only routinely obtained cervical screening samples. There was no clinical intervention. 
The main outcome measures were (1) the reliability of the counting method, (2) the correlation of reported cytology grades with cellularity and (3) the level of detection of abnormal cells in progressively diluted cervical samples. Laboratory practice varied in terms of the threshold of cellular adequacy and of morphological markers of adequacy. While SP laboratories generally used a minimum acceptable cell count (MACC) of 15,000, the MACC employed by TP laboratories varied between 5000 and 15,000. The cell counting study showed that a standard protocol achieved moderate to strong inter-rater reproducibility. Analysis of slide reporting from laboratories revealed that a large proportion of the samples reported as inadequate had cell counts above a threshold of 15,000 for SP, and 5000 and 10,000 for TP. Inter-rater unanimity was greater among more cellular preparations. Dilution studies demonstrated greater detection of abnormalities in slides with counts above the MACC and among slides with more than 25 dyskaryotic cells. Variation in laboratory practice demonstrates a requirement for evidence-based standards for designating a MACC. This study indicated that a MACC of 15,000 for SP and 5000 for TP achieves a balance between maintaining sensitivity and keeping inadequacy rates low. The findings of this study should inform the development of laboratory practice guidelines. Funding was provided by the National Institute for Health Research Health Technology Assessment programme.
López-Aguilar, Carlos; Abundes-Velasco, Arturo; Eid-Lidt, Guering; Piña-Reyna, Yigal; Gaspar-Hernández, Jorge
The best revascularisation method for the unprotected left main artery is a current and evolving topic. A total of 2439 percutaneous coronary interventions (PCI) were registered during a 3-year period. The study included all the patients with PCI of the unprotected left main coronary artery (n=48), matched with patients who underwent coronary artery bypass grafting (CABG) (n=50). Major adverse cardiac and cerebrovascular events (MACCE) were assessed in hospital and during a 16-month outpatient follow-up. Cardiovascular risk was greater in the PCI group: logEuroSCORE 16±21 vs. 5±6, P=.001; clinical SYNTAX score 77±74 vs. 53±39, P=.04. On admission, the PCI group had a higher frequency of ST-segment elevation myocardial infarction (STEMI) and cardiogenic shock. MACCE rates were similar in both groups (14% vs. 18%, P=.64). STEMI was less frequent in the PCI group (0% vs. 10%, P=.03). When patients presenting with cardiogenic shock were excluded, cardiovascular events were lower in the PCI group (2.3% vs. 18%, P=.01), and there was a decrease in overall and cardiac mortality (2.3% vs. 12%, P=.08 and 2.3% vs. 8%, P=.24). MACCE rates were similar in both groups in the outpatient phase (15% vs. 12%, P=.46). Survival without MACCE, overall death and cardiac death were comparable between groups (log rank, P=.38, P=.44 and P=.16, respectively). Even though the clinical and periprocedural risk profile of the PCI patients was higher, in-hospital and out-of-hospital efficacy and safety were comparable with CABG. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.
Krenn, Lisa; Kopp, Christoph; Glogar, Dietmar; Lang, Irene M; Delle-Karth, Georg; Neunteufl, Thomas; Kreiner, Gerhard; Kaider, Alexandra; Bergler-Klein, Jutta; Khorsand, Aliasghar; Nikfardjam, Mariam; Laufer, Günther; Maurer, Gerald; Gyöngyösi, Mariann
2014-01-01
Objectives Cost-effectiveness of percutaneous coronary intervention (PCI) using drug-eluting stents (DES) and coronary artery bypass surgery (CABG) was analyzed in patients with multivessel coronary artery disease over a 5-year follow-up. Background DES implantation, by reducing revascularization rates and associated costs, might be attractive for health economics as compared with CABG. Methods Consecutive patients with multivessel DES-PCI (n = 114, 3.3 ± 1.2 DES/patient) or CABG (n = 85, 2.7 ± 0.9 grafts/patient) were included prospectively. The primary endpoint was the cost-benefit of multivessel DES-PCI over CABG, and the incremental cost-effectiveness ratio (ICER) was calculated. The secondary endpoint was the incidence of major adverse cardiac and cerebrovascular events (MACCE), including acute myocardial infarction (AMI), all-cause death, revascularization, and stroke. Results Despite the use of multiple DES per patient, in-hospital costs were significantly lower for PCI than for CABG, with a difference of 4551 €/patient between the groups. At 5 years, the overall costs remained higher for CABG patients (mean difference 5400 € between groups). Cost-effectiveness planes including all patients, or subgroups of elderly patients, diabetic patients, or patients with Syntax score >32, indicated that CABG is a more effective, more costly treatment mode for multivessel disease. At the 5-year follow-up, a higher incidence of MACCE (37.7% vs. 25.8%; log rank P = 0.048) and a trend towards more AMI/death/stroke (25.4% vs. 21.2%, log rank P = 0.359) were observed for PCI as compared with CABG. The ICER indicated that 45,615 € or 126,683 € would be required to prevent one MACCE or one AMI/death/stroke event, respectively, if CABG were performed. Conclusions Cost-effectiveness analysis of DES-PCI vs. CABG demonstrated that CABG is the most effective, but most costly, treatment for preventing MACCE in patients with multivessel disease. © 2014 Wiley Periodicals, Inc. PMID:24403120
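The ICER figure can be reproduced approximately from the aggregate numbers in the abstract. This back-of-envelope division is an assumption on our part; the authors' patient-level calculation differs slightly:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect (here, one MACCE prevented by CABG over PCI)."""
    return delta_cost / delta_effect

# Aggregate figures from the abstract: CABG costs ~5400 EUR more per
# patient over 5 years; 5-year MACCE 37.7% (PCI) vs 25.8% (CABG),
# i.e. 0.119 MACCE prevented per patient treated with CABG.
cost_per_macce_prevented = icer(5400.0, 0.377 - 0.258)   # ~45,400 EUR
```

The result lands close to the 45,615 € the authors report; the small gap presumably reflects their patient-level costing rather than this aggregate division.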
NASA Astrophysics Data System (ADS)
Knowland, K. E.; Doherty, R. M.; Hodges, K.
2015-12-01
The influence of the North Atlantic Oscillation (NAO) on the tropospheric distributions of ozone (O3) and carbon monoxide (CO) has been quantified. The Monitoring Atmospheric Composition and Climate (MACC) Reanalysis, a combined meteorology and composition dataset for the period 2003-2012 (Innes et al., 2013), is used to investigate the composition of the troposphere and lower stratosphere in relation to the location of the storm track, as well as other meteorological parameters over the North Atlantic associated with the different NAO phases. Cyclone tracks in the MACC Reanalysis compare well with the cyclone tracks in the widely used ERA-Interim Reanalysis for the same 10-year period (cyclone tracking was performed using the tracking algorithm of Hodges (1995, 1999)), as both are based on the European Centre for Medium-Range Weather Forecasts' (ECMWF) Integrated Forecast System (IFS). A seasonal analysis is performed whereby the MACC reanalysis meteorological fields and O3 and CO mixing ratios are weighted by the monthly NAO index values. The location of the main storm track, which shifts from a tilt towards high latitudes (toward the Arctic) during positive NAO phases to a more zonal position in the mid-latitudes (toward Europe) during negative NAO phases, affects the location of both horizontal and vertical transport across the North Atlantic and into the Arctic. During positive NAO seasons, the persistence of cyclones over the North Atlantic coupled with a stronger Azores High promotes strong horizontal transport across the North Atlantic throughout the troposphere. In all seasons, significantly more intense cyclones occur at higher latitudes (north of ~50°N) during the positive phase of the NAO and in the southern mid-latitudes during the negative NAO phase. 
This impacts the location of stratospheric intrusions within the descending dry airstream behind the associated cold front of the extratropical cyclone and the venting of low-level pollution up into the free troposphere within the warm conveyor belt airstream which rises ahead of the cold front.
Methodology for Air Quality Forecast Downscaling from Regional- to Street-Scale
NASA Astrophysics Data System (ADS)
Baklanov, Alexander; Nuterman, Roman; Mahura, Alexander; Amstrup, Bjarne; Hansen Saas, Bent; Havskov Sørensen, Jens; Lorenzen, Thomas; Weismann, Jakob
2010-05-01
The most serious air pollution events occur in cities, where high population density coincides with high air pollution, e.g. from vehicles. The pollutants can lead to serious human health problems, including asthma, irritation of the lungs, bronchitis, pneumonia, decreased resistance to respiratory infections, and premature death. In particular, air pollution is associated with increases in cardiovascular disease and lung cancer. In 2000, WHO estimated that between 2.5% and 11% of total annual deaths are caused by exposure to air pollution. However, European-scale air quality models are not suited for local forecasts, as their grid cells are typically of the order of 5 to 10 km and they generally lack a detailed representation of urban effects. Two suites are used in the framework of the EC FP7 project MACC (Monitoring of Atmosphere Composition and Climate) to demonstrate how downscaling from the European MACC ensemble to local-scale air quality forecasts will be carried out: one illustrates capabilities for the city of Copenhagen (Denmark); the second focuses on the city of Bucharest (Romania). This work is devoted to the first suite, covering the methodological aspects of downscaling from the regional (European/Danish) to the urban scale (Copenhagen), and from the urban scale down to the street scale. The first results of downscaling according to the proposed methodology are presented. The potential for downscaling of European air quality forecasts by operating urban and street-level forecast models is evaluated. This will bring strong support for the continuous improvement of the regional forecast modelling systems for air quality in Europe, and underline clear perspectives for future regional air quality core and downstream services for end-users. At the end of the MACC project, requirements on how to downscale European air-quality forecasts to the city and street levels with different approaches will be formulated.
Karpov, Yu; Logunova, N; Tomilova, D; Buza, V; Khomitskaya, Yu
2017-02-01
The OPTIMA II study sought to evaluate rates of major adverse cardiac and cerebrovascular events (MACCEs) during the long-term follow-up of chronic statin users who underwent percutaneous coronary intervention (PCI) with implantation of a drug-eluting stent (DES). OPTIMA II was a non-interventional, observational study conducted at a single center in the Russian Federation. Included patients were aged ≥18 years with stable angina who had received long-term (≥1 month) statin therapy prior to elective PCI with DES implantation and who had participated in the original OPTIMA study. Patients received treatment for stable angina after PCI as per routine study site clinical practice. Study data were collected from patient medical records and a routine visit 4 years after PCI (ClinicalTrials.gov identifier: NCT02099565). The primary outcome was the rate of MACCEs 4 years after PCI. Overall, 543 patients agreed to participate in the study (90.2% of patients in the original OPTIMA study). The mean (± standard deviation [SD]) duration of follow-up from the date of PCI to data collection was 4.42 ± 0.58 (range: 0.28-5.56) years. The frequency of MACCEs (including data from patients who died) was 30.8% (95% confidence interval: 27.0-34.7); half of the MACCEs occurred in the first year of follow-up. After PCI, the majority of patients had no clinical signs of angina. Overall, 24.3% of patients discontinued statin intake in the 4 years after PCI. Only 7.7% of patients achieved a low-density lipoprotein (LDL) cholesterol goal of <1.8 mmol/L. Key limitations of this study relate to its observational nature: the sample size was small, the clinical results were derived from outpatient and hospital medical records, only one follow-up visit was performed at the end of the study (after 4 years' follow-up), only depersonalized medical information was made available for statistical analysis, and adherence to statin treatment was evaluated on the basis of a patient questionnaire.
Long-term follow-up of patients who underwent PCI with DES implantation demonstrated MACCEs in nearly one-third of patients, which is comparable to data from other studies. PCI was associated with relief from angina or minimal angina frequency, but compliance with statin therapy and the achievement of LDL cholesterol targets 4 years after PCI were suboptimal.
Consistent evaluation of GOSAT, SCIAMACHY, carbontracker, and MACC through comparisons to TCCON
Kulawik, S. S.; Wunch, D.; O'Dell, C.; ...
2015-06-22
Consistent validation of satellite CO2 estimates is a prerequisite for using multiple satellite CO2 measurements for joint flux inversion and for establishing an accurate long-term atmospheric CO2 data record. We focus on validating model and satellite observation attributes that impact flux estimates and CO2 assimilation, including accurate error estimates, correlated and random errors, overall biases, biases by season and latitude, the impact of coincidence criteria, validation of seasonal cycle phase and amplitude, yearly growth, and daily variability. We evaluate the dry air mole fraction (XCO2) for GOSAT (ACOS b3.5) and SCIAMACHY (BESD v2.00.08) as well as the CarbonTracker (CT2013b) simulated CO2 mole fraction fields and the MACC CO2 inversion system (v13.1), and compare these to TCCON observations (GGG2014). We find standard deviations of 0.9, 0.9, 1.7, and 2.1 ppm versus TCCON for CT2013b, MACC, GOSAT, and SCIAMACHY, respectively, with the single-target errors 1.9 and 0.9 times the predicted errors for GOSAT and SCIAMACHY, respectively. When satellite data are averaged and interpreted according to error² = a² + b²/n (where n is the number of observations averaged, a is the systematic (correlated) error, and b is the random (uncorrelated) error), we find that the correlated error term a = 0.6 ppm and the uncorrelated error term b = 1.7 ppm for GOSAT, and a = 1.0 ppm, b = 1.4 ppm for SCIAMACHY regional averages. Biases at individual stations have year-to-year variability of ~0.3 ppm, with biases larger than the TCCON predicted bias uncertainty of 0.4 ppm at many stations. Using fitting software, we find that GOSAT underpredicts the seasonal cycle amplitude in the Northern Hemisphere (NH) between 46-53° N. In the Southern Hemisphere (SH), CT2013b underestimates the seasonal cycle amplitude. Biases are calculated for 3-month intervals and indicate the months that contribute to the observed amplitude differences.
The seasonal cycle phase indicates whether a dataset or model lags another dataset in time. We calculate this at a subset of stations where there are adequate satellite data, and find that the GOSAT retrieved phase improves substantially over the prior and the SCIAMACHY retrieved phase improves substantially for 2 of 7 sites. The models reproduce the measured seasonal cycle phase well except at Lauder (CT2013b), Darwin (MACC), and Izana (+10 days, CT2013b), and for Bremen and Four Corners, which are highly influenced by local effects. We compare the variability within one day between TCCON and models in JJA; there is correlation between 0.2 and 0.8 in the NH, with models showing 10-100% of the variability of TCCON at different stations (except Bremen and Four Corners, which show no variability compared to TCCON) and CT2013b showing more variability than MACC. This paper highlights findings that provide inputs to estimate flux errors in model assimilations, and places where models and satellites need further investigation, e.g. the SH for models and 45-67° N for GOSAT.
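The averaging error model quoted above, error² = a² + b²/n, is linear in 1/n, so the systematic term a and random term b can be recovered from the standard deviations of n-member averages with a straight-line fit. The sketch below illustrates this with synthetic, noiseless values (the a and b magnitudes are chosen to resemble the GOSAT result, but the data are invented for demonstration):

```python
import numpy as np

# Error model for n-member averages: error^2 = a^2 + b^2 / n,
# where a is the systematic (correlated) and b the random
# (uncorrelated) error component. Values are illustrative only.
a_true, b_true = 0.6, 1.7   # ppm

n = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
err_sq = a_true**2 + b_true**2 / n   # synthetic "observed" squared errors

# error^2 is linear in 1/n, so a degree-1 polyfit recovers a and b:
# intercept -> a^2, slope -> b^2
slope, intercept = np.polyfit(1.0 / n, err_sq, 1)
a_fit, b_fit = np.sqrt(intercept), np.sqrt(slope)

print(round(a_fit, 2), round(b_fit, 2))
```

With real averages the squared errors would be noisy, but the same fit applies; only the uncertainty on a and b grows.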
AIRS Views of Anthropogenic and Biomass Burning CO: INTEX-B/MILAGRO and TEXAQS/GoMACCS
NASA Astrophysics Data System (ADS)
McMillan, W. W.; Warner, J.; Wicks, D.; Barnet, C.; Sachse, G.; Chu, A.; Sparling, L.
2006-12-01
Utilizing the Atmospheric InfraRed Sounder's (AIRS) unique spatial and temporal coverage, we present observations of anthropogenic and biomass burning CO emissions as observed by AIRS during the 2006 field experiments INTEX-B/MILAGRO and TEXAQS/GoMACCS. AIRS daily CO maps covering more than 75% of the planet demonstrate the near-global transport of these emissions. AIRS day/night coverage of significant portions of the Earth often shows substantial changes in 12 hours or less. However, the coarse vertical resolution of AIRS retrieved CO complicates its interpretation. For example, extensive CO emissions are evident from Asia during April and May 2006, but it is difficult to determine the relative contributions of biomass burning in Thailand vs. domestic and industrial emissions from China. Similarly, AIRS sometimes sees enhanced CO over and downwind of Mexico City and other populated areas. The low information content and decreasing sensitivity of AIRS in the boundary layer can result in underestimates of CO total columns and free tropospheric abundances. Building on our analyses of INTEX-A/ICARTT data from 2004, we present comparisons with INTEX-B/MILAGRO and TEXAQS/GoMACCS in situ aircraft measurements and other satellite CO observations. The combined analysis of AIRS CO, water vapor and O3 retrievals, MODIS aerosol optical depths, and forward trajectory computations illuminates a variety of dynamical processes in the troposphere.
Climate Literacy Through Student-Teacher-Scientist Research Partnerships
NASA Astrophysics Data System (ADS)
Niepold, F.; Brooks, D.; Lefer, B.; Linsley, A.; Duckenfield, K.
2006-12-01
Expanding on the GLOBE Program's Atmosphere and Aerosol investigations, high school students can conduct Earth System scientific research that promotes scientific literacy in both content and the science process. Through the use of Student-Teacher-Scientist partnerships, Earth system scientific investigations can be conducted that serve the needs of the classroom as well as participating scientific investigators. During the proof-of-concept phase of this partnership model, teachers and their students developed science plans, through consultation with scientists, and began collecting atmospheric and aerosol data in support of the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) campaign in Houston, Texas. This effort uses some pre-existing GLOBE materials, but draws on a variety of other resources to tailor the teacher development activities and intended student participation in a way that addresses local and regional problems. Students and teachers have learned about best practices in scientific inquiry, and they have also helped to expand the pipeline of potential future scientists and researchers for industry, academia, and government. This work began with a Student-Teacher-Scientist partnership started in 2002 during a GLOBE Aerosol Protocol cross-ground validation of AERONET with MODIS satellite aerosol measurements. Several other GLOBE schools, both national and international, have contributed to this research. The current project supported the intensive GoMACCS air quality and atmospheric dynamics field campaign during September and October of 2006. This model will be evaluated for wider use in other project-focused partnerships led by NOAA's Climate Program Office.
Wańha, Wojciech; Kawecki, Damian; Roleder, Tomasz; Pluta, Aleksandra; Marcinkiewicz, Kamil; Dola, Janusz; Morawiec, Beata; Krzych, Łukasz; Pawłowski, Tomasz; Smolka, Grzegorz; Ochała, Andrzej; Nowalany-Kozielska, Ewa; Tendera, Michał; Wojakowski, Wojciech
2016-01-01
Coexisting anaemia is associated with an increased risk of major adverse cardiac and cerebrovascular events (MACCE) and bleeding complications after percutaneous coronary intervention (PCI), especially in patients with acute coronary syndrome. The aim was to assess the impact of anaemia on one-year MACCE in patients with coronary artery disease (CAD) treated with first- and second-generation drug-eluting stents (DES). The registry included 1916 consecutive patients (UA: n = 1502, 78.3%; NSTEMI: n = 283, 14.7%; STEMI/LBBB: n = 131, 6.8%) treated with either first- (34%) or second-generation (66%) DES. The study population was divided into two groups: patients presenting with anaemia (n = 217; 11%) and without anaemia (n = 1699; 89%) prior to PCI. Anaemia was defined according to the World Health Organisation criteria (haemoglobin [Hb] level < 13 g/dL for men and < 12 g/dL for women). Patients with anaemia were older (69, IQR: 61-75 vs. 62, IQR: 56-70 years; p < 0.001) and had a higher prevalence of co-morbidities: diabetes (44.7% vs. 36.4%, p = 0.020), chronic kidney disease (31.3% vs. 19.4%; p < 0.001), and peripheral artery disease (10.1% vs. 5.4%, p = 0.005), as well as lower left ventricular ejection fraction values (50, IQR: 40-57% vs. 55, IQR: 45-60%; p < 0.001). No difference between genders in the frequency of anaemia was found. Patients with anaemia more often had prior myocardial infarction (MI) (57.6% vs. 46.4%; p = 0.002) and coronary artery bypass grafting (31.3% vs. 19.4%; p < 0.001) than patients without anaemia. They also more often had multivessel disease on angiography (36.4% vs. 26.1%; p = 0.001) and more complex CAD as measured by the SYNTAX score (21, IQR: 12-27 points vs. 14, IQR: 8-22 points; p = 0.001). In-hospital risk of acute heart failure (2.7% vs. 0.7%; p = 0.006) and of bleeding requiring transfusion (3.2% vs. 0.5%; p < 0.001) was significantly higher in patients with anaemia. One-year follow-up showed a higher rate of death in patients with anaemia.
However, there were no differences in MI, stroke, target vessel revascularisation (TVR), or MACCE in comparison to patients with normal Hb. There were no differences according to type of DES (first vs. second generation) in the population of patients with anaemia. In patients with anaemia there is a significantly higher risk of death at 12-month follow-up, but anaemia has no impact on the incidence of MI, repeat revascularisation, stroke, or MACCE. There is no advantage of second-generation over first-generation DES in terms of MACCE and TVR in patients with anaemia.
Green Infrastructure Barriers and Opportunities in the Macatawa Watershed, Michigan
The project supports MACC outreach and implementation efforts of the watershed management plan by facilitating communication with local municipal staff and educating local decision makers about green infrastructure.
Zhang, Ming; Cheng, Yun-Jiu; Zheng, Wei-Ping; Liu, Guang-Hui; Chen, Huai-Sheng; Ning, Yu; Zhao, Xin; Su, Li-Xiao; Liu, Li-Juan
2016-01-01
Objective. The aim of this study was to investigate the association between COPD and major adverse cardiovascular and cerebral events (MACCE) in patients undergoing percutaneous coronary intervention (PCI). Methods. 2,362 patients who underwent PCI were included in this study. Subjects were divided into 2 groups: with COPD (n = 233) and without COPD (n = 2,129). Cox proportional hazards models were analyzed to determine the effect of COPD on the incidence of MACCE. Results. The patients with COPD were older (P < 0.0001) and were more likely to be current smokers (P = 0.02) and to have hypertension (P = 0.02) and diabetes mellitus (P = 0.01). Prevalence of serious cardiovascular comorbidity was higher in the patients with COPD, including a history of MI (P = 0.02) and HF (P < 0.0001). Compared with the non-COPD group, the COPD group showed a higher risk of all-cause death (hazard ratio (HR): 2.45, P < 0.0001), cardiac death (HR: 2.53, P = 0.0002), MI (HR: 1.387, P = 0.027), and HF (HR: 2.25, P < 0.0001). Conclusions. Patients with CAD and concomitant COPD have a higher incidence of MACCE (all-cause death, cardiac death, MI, and HF) compared to patients without COPD. Patients with a history of COPD have higher in-hospital and long-term mortality rates than those without COPD after PCI.
Global data set of biogenic VOC emissions calculated by the MEGAN model over the last 30 years
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sindelarova, K.; Granier, Claire; Bouarar, I.
The Model of Emissions of Gases and Aerosols from Nature (MEGANv2.1), together with the Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields, was used to create a global emission dataset of biogenic VOCs available on a monthly basis for the period 1980-2010. This dataset is called MEGAN-MACC. The model estimated a mean annual total BVOC emission of 760 Tg(C) yr⁻¹, consisting of isoprene (70%), monoterpenes (11%), methanol (6%), acetone (3%), sesquiterpenes (2.5%), and other BVOC species each contributing less than 2%. Several sensitivity model runs were performed to study the impact of different model input and model settings on isoprene estimates, and resulted in differences of ±17% of the reference isoprene total. A greater impact was observed for the sensitivity run applying a parameterization of soil moisture deficit, which led to a 50% reduction of isoprene emissions on a global scale, most significantly in specific regions of Africa, South America and Australia. MEGAN-MACC estimates are comparable to results of previous studies. More detailed comparison with other isoprene inventories indicated significant spatial and temporal differences between the datasets, especially for Australia, Southeast Asia and South America. MEGAN-MACC estimates of isoprene and α-pinene showed a reasonable agreement with surface flux measurements in the Amazon, and the model was able to capture the seasonal variation of emissions in this region.
Gao, Fei; Zhou, Yu Jie; Wang, Zhi Jian; Shen, Hua; Liu, Xiao Li; Nie, Bin; Yan, Zhen Xian; Yang, Shi Wei; Jia, De An; Yu, Miao
2010-04-01
The optimal antithrombotic strategy for patients with atrial fibrillation (AF) undergoing drug-eluting stent (DES) implantation is unknown. A total of 622 consecutive AF patients undergoing DES implantation were prospectively enrolled. Among them, 142 patients (TT group) continued triple antithrombotic therapy comprising aspirin, clopidogrel and warfarin after discharge; 355 patients (DT group) had dual antiplatelet therapy; 125 patients (WS group) were discharged with warfarin and a single antiplatelet agent. The target INR was set at 1.8-2.5 and was regularly monitored after discharge. The TT group had a significant reduction in stroke and major adverse cardiac and cerebral events (MACCE) (8.8% vs 20.1% vs 14.9%, P=0.010) as compared with either the DT or WS group. In the Cox regression analysis, treatment with warfarin (hazard ratio (HR) 0.49; 95% confidence interval (CI) 0.31-0.77; P=0.002) and baseline CHADS₂ score ≥2 (HR 2.09; 95% CI 1.27-3.45; P=0.004) were independent predictors of MACCE. Importantly, the incidence of major bleeding was comparable among the 3 groups (2.9% vs 1.8% vs 2.5%, P=0.725), although the overall bleeding rate was increased in the TT group. Kaplan-Meier analysis indicated that the TT group was associated with the best net clinical outcome. The cardiovascular benefits of triple antithrombotic therapy were confirmed by the reduced MACCE rate, and its major bleeding risk may be acceptable if the INR is closely monitored.
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. To do this, a forensic approach is used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases.
In particular, using the source terms developed with MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
Preliminary risks associated with postulated tritium release from production reactor operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Kula, K.R.; Horton, W.H.
1988-01-01
The Probabilistic Risk Assessment (PRA) of Savannah River Plant (SRP) reactor operation is assessing the off-site risk due to tritium releases during postulated full or partial loss of heavy water moderator accidents. Other sources of tritium in the reactor are less likely to contribute to off-site risk in non-fuel-melting accident scenarios. Preliminary determination of the frequency of average partial moderator loss (including incidents with leaks as small as 0.5 kg) yields an estimate of ~1 per reactor-year. The full moderator loss frequency is conservatively chosen as 5 × 10⁻³ per reactor-year. Conditional consequences, determined with a version of the MACCS code modified to handle tritium, are found to be insignificant. The 95th percentile individual cancer risk is 4 × 10⁻⁸ per reactor-year within 16 km of the release point. The full moderator loss accident contributes about 75% of the evaluated risks. 13 refs., 4 figs., 5 tabs.
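The quoted figures combine as annual risk = Σᵢ frequencyᵢ × conditional-riskᵢ. A back-of-envelope decomposition can infer the per-event conditional risks from the stated totals (the split is an inference from the abstract's numbers, not a value the report quotes):

```python
# Inputs taken from the abstract: total 95th-percentile individual cancer
# risk of 4e-8 per reactor-year, ~75% of which comes from the full
# moderator loss (frequency 5e-3 /ry); the remainder is attributed here
# to the partial loss (frequency ~1 /ry). Per-event risks are inferred.
total_risk = 4e-8               # per reactor-year
f_full, f_partial = 5e-3, 1.0   # event frequencies (per reactor-year)

risk_full = 0.75 * total_risk   # annual contribution of full loss
risk_partial = total_risk - risk_full

# conditional (per-event) risk = annual risk contribution / event frequency
c_full = risk_full / f_full
c_partial = risk_partial / f_partial

print(f"per-event risk, full loss:    {c_full:.1e}")
print(f"per-event risk, partial loss: {c_partial:.1e}")
```

The rarer full-loss event dominates the annual risk because its per-event consequence is orders of magnitude larger than that of the frequent partial loss.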
Stochastic Modeling of Radioactive Material Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason; Pope, Chad
2015-09-01
Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine whether potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines, or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user interface, can be installed on both Windows and Mac computers, and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions.
The work was funded through a grant from the DOE Nuclear Safety Research and Development Program.
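The sampling approach the abstract describes, drawing each input variable from a user-chosen distribution and propagating through the dose calculation, can be sketched with a Monte Carlo loop. The dose model below (dose = MAR × ARF × χ/Q × DCF) and all distribution choices are illustrative assumptions, not SODA's actual defaults:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Illustrative dose model: dose = MAR * ARF * (chi/Q) * DCF.
# Distribution types and parameter values are assumptions for this sketch.
mar   = rng.uniform(50.0, 150.0, N)              # material at risk (g)
arf   = rng.triangular(1e-4, 1e-3, 1e-2, N)      # airborne release fraction
chi_q = rng.lognormal(np.log(1e-4), 0.5, N)      # atmospheric dispersion (s/m^3)
dcf   = 1.0e2                                    # dose conversion factor (fixed point value)

dose = mar * arf * chi_q * dcf                   # dose per sample (illustrative units)

# Report the distribution rather than a single point estimate
print(f"mean = {dose.mean():.3g}, 95th percentile = {np.percentile(dose, 95):.3g}")
```

Replacing any sampled variable with a constant recovers the traditional single-point calculation, which is the fallback the abstract mentions when a distribution is unknown.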
NASA Astrophysics Data System (ADS)
Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien
2012-12-01
With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
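The probability calibration the MACC catalog emphasizes is commonly checked with a reliability diagram: bin the predicted probabilities and compare each bin's mean prediction with the observed class frequency. A toy sketch with synthetic, well-calibrated scores (not MACC data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic classifier output: labels are drawn so that the predicted
# probability p is well calibrated by construction. Illustrative only.
p = rng.uniform(0, 1, n)           # predicted class probabilities
y = rng.uniform(0, 1, n) < p       # outcomes consistent with p

# Reliability diagram data: observed frequency per predicted-probability bin.
# For a calibrated classifier the two columns should closely agree.
bins = np.linspace(0, 1, 11)
idx = np.digitize(p, bins) - 1
for b in range(10):
    mask = idx == b
    print(f"{bins[b]:.1f}-{bins[b+1]:.1f}: "
          f"predicted {p[mask].mean():.2f}, observed {y[mask].mean():.2f}")
```

A miscalibrated classifier shows systematic gaps between the columns (e.g. overconfidence near 0 and 1), which recalibration methods such as isotonic regression correct.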
A comparison of two brands of clopidogrel in patients with drug-eluting stent implantation.
Park, Yae Min; Ahn, Taehoon; Lee, Kyounghoon; Shin, Kwen-Chul; Jung, Eul Sik; Shin, Dong Su; Kim, Myeong Gun; Kang, Woong Chol; Han, Seung Hwan; Choi, In Suck; Shin, Eak Kyun
2012-07-01
Although generic clopidogrel is widely used, the clinical efficacy and safety of generic versus original clopidogrel have not been well evaluated. The aim of this study was to evaluate the clinical outcomes of 2 oral formulations of clopidogrel 75 mg tablets in patients with coronary artery disease (CAD) undergoing drug-eluting stent (DES) implantation. Between July 2006 and February 2009, 428 patients who underwent implantation of a DES for CAD and completed >1 year of clinical follow-up were enrolled in this study. Patients were divided into 2 groups based on treatment formulation: Platless® (test formulation, n=211) or Plavix® (reference formulation, n=217). The incidence of 1-year major adverse cardiovascular and cerebrovascular events (MACCE) and stent thrombosis (ST) was retrospectively reviewed. The baseline demographic and procedural characteristics were not significantly different between the two treatment groups. The incidence of 1-year MACCEs was 8.5% (19/211: 2 deaths, 4 myocardial infarctions (MIs), 2 strokes, and 11 target vessel revascularizations (TVRs)) in the Platless® group vs. 7.4% (16/217: 4 deaths, 1 MI, 2 strokes, and 9 TVRs) in the Plavix® group (p=0.66). The incidence of 1-year ST was 0.5% (1 definite, subacute ST) in the Platless® group vs. 0% in the Plavix® group (p=0.49). In this study, the 2 tablet preparations of clopidogrel showed similar rates of MACCEs, but additional prospective randomized studies with pharmacodynamic and platelet reactivity data are needed to conclude whether generic clopidogrel may replace original clopidogrel.
GLANCE - calculatinG heaLth impActs of atmospheric pollutioN in a Changing climatE
NASA Astrophysics Data System (ADS)
Vogel, Leif; Faria, Sérgio; Markandya, Anil
2016-04-01
Current annual global estimates of premature deaths from poor air quality are in the range of 2.6-4.4 million, and 2050 projections are expected to double against 2010 levels. In Europe, annual economic burdens are estimated at around €750 bn. Climate change will further exacerbate air pollution burdens; therefore, a better understanding of the economic impacts on human societies has become an area of intense investigation. European research efforts are being carried out within the MACC project series, which started in 2005. The outcome of this work has been integrated into a European capacity for Earth Observation, the Copernicus Atmosphere Monitoring Service (CAMS). In MACC/CAMS, key pollutant concentrations are computed at the European scale and globally by employing advanced chemical transport models. The project GLANCE (calculatinG heaLth impActs of atmospheric pollutioN in a Changing climatE) aims at developing an integrated assessment model for calculating the health impacts and damage costs of air pollution at different physical scales. It combines MACC/CAMS (assimilated Earth Observations, an ensemble of chemical transport models, and state-of-the-art ECMWF weather forecasting) with downscaling based on in-situ network measurements. Strengthening modelled projections through integration with empirical evidence reduces errors and uncertainties in the health impact projections and the subsequent economic cost assessment. In addition, GLANCE will yield improved data accuracy at different time resolutions. This project is a multidisciplinary approach which brings together expertise from the natural sciences and socio-economic fields. Here, its general approach is presented together with first results for the years 2007-2012 on the European scale. The results on health impacts and economic burdens are compared to existing assessments.
Wang, Wei-Ting; You, Li-Kai; Chiang, Chern-En; Sung, Shih-Hsien; Chuang, Shao-Yuan; Cheng, Hao-Min; Chen, Chen-Huan
2016-01-01
Hypertension is the most important risk factor for stroke and stroke recurrence. However, the preferred blood pressure (BP)-lowering drug class for patients who have suffered a stroke has yet to be determined. To investigate the relative effects of BP-lowering therapies [angiotensin-converting enzyme inhibitors (ACEI), angiotensin receptor blockers (ARB), β blockers, calcium channel blockers (CCBs), diuretics, and combinations of these drugs] in patients with a prior stroke history, we performed a systematic review and meta-analysis using both traditional frequentist and Bayesian random-effects models and meta-regression of randomized controlled trials (RCTs) on the outcomes of recurrent stroke, coronary heart disease (CHD), and any major adverse cardiac and cerebrovascular events (MACCE). Trials were identified from searches of published hypertension guidelines, electronic databases, and previous systematic reviews. Fifteen RCTs comprising 39,329 participants with previous stroke were identified. Compared with placebo, only ACEI combined with diuretics significantly reduced recurrent stroke events [odds ratio (OR) = 0.54, 95% credibility interval (95% CI) 0.33-0.90]. On the basis of the distribution of posterior probabilities, the treatment ranking consistently identified ACEI combined with diuretics as the preferred BP-lowering strategy for the reduction of recurrent stroke and CHD (31% and 35%, respectively). For preventing MACCE, diuretics appeared to be the preferred agent for stroke survivors (34%). Moreover, the meta-regression analysis failed to demonstrate statistical significance between BP reduction and any outcome (P = 0.1618 for total stroke, 0.4933 for CHD, and 0.2411 for MACCE). Evidence from RCTs supports the use of diuretics-based treatment, especially when combined with ACEI, for the secondary prevention of recurrent stroke and any vascular events in patients who have suffered a stroke. PMID:27082571
NASA Astrophysics Data System (ADS)
Siddans, Richard; Knappett, Diane; Kerridge, Brian; Waterfall, Alison; Hurley, Jane; Latter, Barry; Boesch, Hartmut; Parker, Robert
2017-11-01
This paper describes the global height-resolved methane (CH4) retrieval scheme for the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp, developed at the Rutherford Appleton Laboratory (RAL). The scheme precisely fits measured spectra in the 7.9 micron region to allow information to be retrieved on two independent layers centred in the upper and lower troposphere. It also uses nitrous oxide (N2O) spectral features in the same spectral interval to directly retrieve effective cloud parameters to mitigate errors in retrieved methane due to residual cloud and other geophysical variables. The scheme has been applied to analyse IASI measurements between 2007 and 2015. Results are compared to model fields from the MACC greenhouse gas inversion and independent measurements from satellite (GOSAT), airborne (HIPPO) and ground (TCCON) sensors. The estimated error on methane mixing ratio in the lower- and upper-tropospheric layers ranges from 20 to 100 ppbv and from 30 to 40 ppbv, respectively, and the error on the derived column-average ranges from 20 to 40 ppbv. Vertical sensitivity extends through the lower troposphere, though it decreases near the surface. Systematic differences with the other datasets are typically < 10 ppbv regionally and < 5 ppbv globally. In the Southern Hemisphere, a bias of around 20 ppbv is found with respect to MACC, which is not explained by vertical sensitivity or found in comparison of IASI to TCCON. Comparisons to HIPPO and MACC support the assertion that two layers can be independently retrieved and provide confirmation that the estimated random errors on the column- and layer-averaged amounts are realistic. The data have been made publicly available via the Centre for Environmental Data Analysis (CEDA) data archive (Siddans, 2016).
A new method for assessing surface solar irradiance: Heliosat-4
NASA Astrophysics Data System (ADS)
Qu, Z.; Oumbe, A.; Blanc, P.; Lefèvre, M.; Wald, L.; Schroedter-Homscheidt, M.; Gesell, G.
2012-04-01
Downwelling shortwave irradiance at surface (SSI) is more and more often assessed by means of satellite-derived estimates of optical properties of the atmosphere. Performance is judged satisfactory for the time being, but there is an increasing need for the assessment of the direct and diffuse components of the SSI. MINES ParisTech and the German Aerospace Center (DLR) are currently developing the Heliosat-4 method to assess the SSI and its components more accurately than current practices. This method is composed of two parts: a clear-sky module based on the radiative transfer model libRadtran, and a cloud-ground module using two-stream and delta-Eddington approximations for clouds and a database of ground albedo. Advanced products derived from geostationary satellites and recent Earth Observation missions are the inputs of the Heliosat-4 method. Such products are: cloud optical depth, cloud phase, cloud type and cloud coverage from APOLLO of DLR; aerosol optical depth, aerosol type, water vapor in clear sky and ozone from MACC products (FP7); and ground albedo from MODIS of NASA. In this communication, we briefly present Heliosat-4 and focus on its performance. The results of Heliosat-4 for the period 2004-2010 will be compared to the measurements made at five stations within the Baseline Surface Radiation Network. Extensive statistical analysis as well as case studies are performed to gain an in-depth view of the performance of Heliosat-4, to understand its advantages compared to existing methods, and to identify its shortcomings for future improvements. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 218793 (MACC project) and no. 283576 (MACC-II project).
Kang, Dong Oh; Yu, Cheol Woong; Kim, Hee Dong; Cho, Jae Young; Joo, Hyung Joon; Choi, Rak Kyong; Park, Jin Sik; Lee, Hyun Jong; Kim, Je Sang; Park, Jae Hyung; Hong, Soon Jun; Lim, Do-Sun
2015-08-01
The optimal antithrombotic regimen in patients with atrial fibrillation (AF) undergoing drug-eluting stent (DES) implantation for complex coronary artery disease is unclear. We compared the net clinical outcomes of triple antithrombotic therapy (TAT; aspirin, thienopyridine, and warfarin) and dual antiplatelet therapy (DAPT; aspirin and thienopyridine) in AF patients who had undergone DES implantation. A total of 367 patients were enrolled and analyzed retrospectively; 131 patients (35.7%) received TAT and 236 patients (64.3%) received DAPT. DAPT and warfarin were maintained for a minimum of 12 and 24 months, respectively. The primary endpoint was the 2-year net clinical outcomes, a composite of major bleeding and major adverse cardiac and cerebral events (MACCE). Propensity score-matching analysis was carried out in 99 patient pairs. The 2-year net clinical outcomes of the TAT group were worse than those of the DAPT group (34.3 vs. 21.1%, P=0.006), which was mainly due to the higher incidence of major bleeding (16.7 vs. 4.6%, P<0.001), without any significant increase in MACCE (22.1 vs. 17.7%, P=0.313). In the multivariate analysis, TAT was an independent predictor of worse net clinical outcomes (odds ratio 1.63, 95% confidence interval 1.06-2.50) and major bleeding (odds ratio 3.54, 95% confidence interval 1.65-7.58). After propensity score matching, the TAT group still had worse net clinical outcomes and a higher incidence of major bleeding compared with the DAPT group. In AF patients undergoing DES implantation, prolonged administration of TAT may be harmful due to the substantial increase in the risk for major bleeding without any reduction in MACCE.
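The propensity score-matching step used to form the 99 comparable TAT/DAPT pairs can be sketched as greedy 1:1 nearest-neighbour matching on the score. This is a hedged illustration: the scores and the 0.05 caliper below are invented, and the study does not specify its matching algorithm; in practice the scores come from a logistic model of treatment assignment on covariates.

```python
# Minimal sketch of 1:1 greedy nearest-neighbour propensity-score matching
# with a caliper. The propensity scores below are invented for illustration.
def greedy_match(treated, control, caliper=0.05):
    """Return (treated_idx, control_idx) pairs whose scores differ by <= caliper."""
    pairs, used = [], set()
    for ti, tp in sorted(enumerate(treated), key=lambda x: x[1]):
        best, best_d = None, caliper
        for ci, cp in enumerate(control):
            if ci in used:
                continue
            d = abs(tp - cp)
            if d <= best_d:                 # keep the closest unused control
                best, best_d = ci, d
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

treated_ps = [0.31, 0.45, 0.52, 0.70]       # hypothetical TAT scores
control_ps = [0.30, 0.44, 0.55, 0.90, 0.10] # hypothetical DAPT scores
print(greedy_match(treated_ps, control_ps))  # → [(0, 0), (1, 1), (2, 2)]
```

The fourth treated patient (score 0.70) finds no control within the caliper and is dropped, which is exactly why matched cohorts are smaller than the original sample.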
Wang, Shifei; Li, Hairui; He, Nvqin; Sun, Yili; Guo, Shengcun; Liao, Wangjun; Liao, Yulin; Chen, Yanmei; Bin, Jianping
2017-01-15
The impact of remote ischaemic preconditioning (RIPC) on major clinical outcomes in patients undergoing cardiovascular surgery remains controversial. We systematically reviewed the available evidence to evaluate the potential benefits of RIPC in such patients. PubMed, Embase, and Cochrane Library databases were searched for relevant randomised controlled trials (RCTs) conducted between January 2006 and March 2016. The pooled population of patients who underwent cardiovascular surgery was divided into the RIPC and control groups. Trial sequential analysis was applied to judge data reliability. The pooled relative risks (RRs) with 95% confidence intervals (CIs) between the groups were calculated for all-cause mortality, major adverse cardiovascular and cerebral events (MACCEs), myocardial infarction (MI), and renal failure. RIPC was not associated with improvement in all-cause mortality (RR 1.04; 95% CI 0.82-1.31; I² = 26%; P > 0.05) or MACCE incidence (RR 0.90; 95% CI 0.71-1.14; I² = 40%; P > 0.05) after cardiovascular surgery, and both results were assessed by trial sequential analysis as sufficient and conclusive. Nevertheless, RIPC was associated with a significantly lower incidence of MI (RR 0.87; 95% CI 0.76-1.00; I² = 13%; P ≤ 0.05). However, after excluding a study that had a high contribution to heterogeneity, RIPC was associated with increased rates of renal failure (RR 1.53; 95% CI 1.12-2.10; I² = 5%; P ≤ 0.05). In patients undergoing cardiovascular surgery, RIPC reduced the risk for postoperative MI, but not that for MACCEs or all-cause mortality, a discrepancy likely related to the higher rate of renal failure associated with RIPC. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Red light regulation of ethylene biosynthesis and gravitropism in etiolated pea stems
NASA Technical Reports Server (NTRS)
Steed, C. L.; Taylor, L. K.; Harrison, M. A.
2004-01-01
During gravitropism, the accumulation of auxin in the lower side of the stem causes increased growth and the subsequent curvature, while the gaseous hormone ethylene plays a modulating role in regulating the kinetics of growth asymmetries. Light also contributes to the control of gravitropic curvature, potentially through its interaction with ethylene biosynthesis. In this study, red-light pulse treatment of etiolated pea epicotyls was evaluated for its effect on ethylene biosynthesis during gravitropic curvature. Ethylene biosynthesis analysis included measurements of ethylene; the ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC); malonyl-conjugated ACC (MACC); and expression levels of pea ACC oxidase (Ps-ACO1) and ACC synthase (Ps-ACS1, Ps-ACS2) genes by reverse transcriptase-polymerase chain reaction analysis. Red-pulsed seedlings were given a 6 min pulse of 11 μmol m⁻² s⁻¹ red light 15 h prior to horizontal reorientation for consistency with the timeline of red-light inhibition of ethylene production. Red-pulse treatment significantly reduced ethylene production and MACC levels in epicotyl tissue. However, there was no effect of red-pulse treatment on ACC level, or expression of ACS or ACO genes. During gravitropic curvature, ethylene production increased from 60 to 120 min after horizontal placement in both control and red-pulsed epicotyls. In red-pulsed tissues, ACC levels increased by 120 min after horizontal reorientation, accompanied by decreased MACC levels in the lower portion of the epicotyl. Overall, our results demonstrate that ethylene production in etiolated epicotyls increases after the initiation of curvature. This ethylene increase may inhibit cell growth in the lower portion of the epicotyl and contribute to tip straightening and reduced overall curvature observed after the initial 60 min of curvature in etiolated pea epicotyls.
Pattanshetty, Deepak J; Bhat, Pradeep K; Aneja, Ashish; Pillai, Dilip P
2012-12-01
Hypertensive crisis is associated with poor clinical outcomes. Elevated troponin, frequently observed in hypertensive crisis, may be attributed to myocardial supply-demand mismatch or obstructive coronary artery disease (CAD). However, in patients presenting with hypertensive crisis and an elevated troponin, the prevalence of CAD and the long-term adverse cardiovascular outcomes are unknown. We sought to assess the impact of elevated troponin on cardiovascular outcomes and evaluate the role of troponin as a predictor of obstructive CAD in patients with hypertensive crisis. Patients who presented with hypertensive crisis (n = 236) were screened retrospectively. Baseline and follow-up data including the event rates were obtained using electronic patient records. Those without an assay for cardiac Troponin I (cTnI) (n = 65) were excluded. Of the remaining 171 patients, those with elevated cTnI (cTnI ≥ 0.12 ng/ml) (n = 56) were compared with those with normal cTnI (cTnI < 0.12 ng/ml) (n = 115) at 2 years for the occurrence of major adverse cardiac or cerebrovascular events (MACCE) (composite of myocardial infarction, unstable angina, hypertensive crisis, pulmonary edema, stroke or transient ischemic attack). At 2 years, MACCE occurred in 40 (71.4%) patients with elevated cTnI compared with 44 (38.3%) patients with normal cTnI [hazard ratio: 2.77; 95% confidence interval (CI): 1.79-4.27; P < 0.001]. Also, patients with elevated cTnI were significantly more likely to have underlying obstructive CAD (odds ratio: 8.97; 95% CI: 1.4-55.9; P < 0.01). In patients with hypertensive crisis, elevated cTnI confers a significantly greater risk of long-term MACCE, and is a strong predictor of obstructive CAD.
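Using the 2-year counts reported above (MACCE in 40 of 56 patients with elevated cTnI vs. 44 of 115 with normal cTnI), a crude unadjusted comparison can be sketched as a 2×2 Wald odds ratio. This is only an illustration of the arithmetic; the hazard ratio in the abstract comes from a time-to-event analysis, which this crude calculation does not reproduce.

```python
import math

# Crude (unadjusted) 2x2 comparison of the reported 2-year MACCE counts:
# 40/56 events with elevated cTnI vs 44/115 with normal cTnI.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald CI from a 2x2 table: a/b = events/non-events exposed, c/d = unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 56 - 40, 44, 115 - 44)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → crude OR = 4.03 (95% CI 2.02-8.05)
```

The crude OR of about 4 is in the same direction as, but not directly comparable to, the reported hazard ratio of 2.77, since the Cox model accounts for censoring and event timing.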
MISR Regional GoMACCS Map Projection
Atmospheric Science Data Center
2017-03-29
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex.
Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transition of sound. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
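The least-squares deconvolution idea above can be sketched as a linear model: the overlapped recording is expressed as a design matrix of shifted impulses multiplied by a single response template, and the template is recovered by ordinary least squares. Jittered onsets make the system well conditioned. The template shape, onsets, and sampling rate below are invented for illustration, not the study's stimulus parameters.

```python
import numpy as np

# Sketch of least-squares (LS) deconvolution: overlapping responses at short
# SOAs are modelled as one template shifted to each stimulus onset; solving
# the linear system recovers the template. All parameters here are invented.
template = np.exp(-np.arange(50) / 10.0)  # assumed "true" response shape
onsets = [0, 13, 29, 41, 60]              # jittered onsets (in samples)

n = max(onsets) + len(template)
recording = np.zeros(n)
for o in onsets:                          # build the overlapped measurement
    recording[o:o + len(template)] += template

# Design matrix: column j holds a unit impulse at (onset + j) for every onset
X = np.zeros((n, len(template)))
for o in onsets:
    for j in range(len(template)):
        X[o + j, j] = 1.0

recovered, *_ = np.linalg.lstsq(X, recording, rcond=None)
print("max recovery error:", np.max(np.abs(recovered - template)))
```

Because the jittered onsets make the columns of X linearly independent, the template is recovered essentially exactly here; with noisy MEG data the same solve yields the best least-squares estimate instead.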
Marginal abatement cost curves for NOx incorporating both controls and alternative measures
A marginal abatement cost curve (MACC) traces out the efficient marginal abatement cost level for any aggregate emissions target when a least cost approach is implemented. In order for it to represent the efficient MAC level, all abatement opportunities across all sectors and loc...
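The least-cost construction described above can be sketched by sorting abatement opportunities by unit cost and accumulating their abatement potential; the marginal cost at any aggregate emissions target is then read off the resulting step curve. The measures, quantities, and costs below are invented for illustration.

```python
# Minimal sketch of building a marginal abatement cost curve (MACC):
# hypothetical measures are sorted by unit cost; the curve gives the marginal
# cost at any aggregate abatement level under a least-cost ordering.
measures = [
    ("low-NOx burners", 120.0,  40.0),   # (name, abatement in kt, $/t)
    ("SCR retrofit",     80.0, 150.0),
    ("fuel switching",   60.0,  90.0),
    ("fleet turnover",  100.0,  25.0),
]

def macc(measures):
    """Return (cumulative_abatement, marginal_cost) steps in least-cost order."""
    curve, total = [], 0.0
    for name, qty, cost in sorted(measures, key=lambda m: m[2]):
        total += qty
        curve.append((total, cost))
    return curve

for cum, cost in macc(measures):
    print(f"up to {cum:.0f} kt abated, marginal cost {cost:.0f} $/t")
```

Reading the steps: the cheapest 100 kt cost $25/t at the margin, and hitting a 280 kt target requires accepting a $90/t marginal cost, which is the efficiency logic the abstract describes.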
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrus, Jason P.; Pope, Chad; Toston, Mary
2016-12-01
Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines, or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable, simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection.
It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
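The stochastic approach SODA takes can be sketched as Monte Carlo sampling of a dose equation: each input is drawn from a user-chosen distribution and the calculation is repeated many times to build a dose distribution rather than a point estimate. The dose model and every distribution and constant below are invented for illustration and do not represent SODA's actual equations or defaults.

```python
import random

# Illustrative Monte Carlo sketch in the spirit of SODA: sample each input of
# a simple (invented) inhalation-dose product model, repeat many times, and
# summarise the resulting dose distribution instead of a single point value.
random.seed(42)

def sample_dose():
    mar = random.uniform(50.0, 150.0)            # material at risk (g), assumed
    release_frac = random.triangular(0.001, 0.01, 0.003)  # assumed release fraction
    chi_q = random.lognormvariate(-12.0, 0.8)    # dispersion factor (s/m^3), assumed
    breathing = 3.3e-4                           # breathing rate (m^3/s), fixed point value
    dcf = 1.0e5                                  # dose conversion (Sv/g), assumed
    return mar * release_frac * chi_q * breathing * dcf

doses = sorted(sample_dose() for _ in range(10000))
mean = sum(doses) / len(doses)
p95 = doses[int(0.95 * len(doses))]
print(f"mean dose = {mean:.3e} Sv, 95th percentile = {p95:.3e} Sv")
```

The 95th percentile exceeding the mean reflects the right-skewed distribution that a point estimate would hide, which is precisely the "deeper understanding of the dose potential" the abstract describes.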
The Air Quality Model Evaluation International Initiative (AQMEII) has now reached its second phase which is dedicated to the evaluation of online coupled chemistry-meteorology models. Sixteen modeling groups from Europe and five from North America have run regional air quality m...
Guo, Yutao; Apostalakis, Stavros; Blann, Andrew D; Lip, Gregory Y H
2014-01-01
There is growing evidence that chemokines are potentially important mediators of the pathogenesis of atherosclerotic disease. Major atherothrombotic complications, such as stroke and myocardial infarction, are common among atrial fibrillation (AF) patients. This increase in risk of adverse events may be predicted by a score based on the presence of certain clinical features: chronic heart failure, hypertension, age 75 years or greater, diabetes and stroke (the CHADS2 score). Our objective was to assess the prognostic value of plasma chemokines CCL2, CXCL4 and CX3CL1, and their relationship with the CHADS2 score, in AF patients. Plasma CCL2, CXCL4 and CX3CL1 were measured in 441 patients (59% male, mean age 75 years, 12% paroxysmal, 99% on warfarin) with AF. Baseline clinical and demographic factors were used to define each subject's CHADS2 score. Patients were followed up for a mean 2.1 years, and major adverse cardiovascular and cerebrovascular events (MACCE) were sought, being the combination of cardiovascular death, acute coronary events, stroke and systemic embolism. Fifty-five of the AF patients suffered a MACCE (6% per year). Those in the lowest CX3CL1 quartile (≤ 0.24 ng/ml) had the fewest MACCE (p = 0.02). In the Cox regression analysis, CX3CL1 levels >0.24 ng/ml (hazard ratio 2.8, 95% CI 1.02-8.2, p = 0.045) and age (p = 0.042) were independently linked with adverse outcomes. The CX3CL1 levels rose directly with the CHADS2 risk score (p = 0.009). The addition of CX3CL1 did not significantly increase the discriminatory ability of the CHADS2 clinical factor-based risk stratification (c-index 0.60 for CHADS2 alone versus 0.67 for CHADS2 plus CX3CL1 >0.24 ng/ml, p = 0.1). Aspirin use was associated with lower levels of CX3CL1 (p = 0.0002) and diabetes with higher levels (p = 0.031). There was no association between CXCL4 and CCL2 plasma levels and outcomes.
There is an independent association between low plasma CX3CL1 levels and low risk of major cardiovascular events in AF patients, as well as a linear association between CX3CL1 plasma levels and CHADS2-defined cardiovascular risk. The potential for CX3CL1 in refining risk stratification in AF patients merits consideration. © 2014 S. Karger AG, Basel.
Capodanno, Davide; Caggegi, Anna; Capranzano, Piera; Cincotta, Glauco; Miano, Marco; Barrano, Gionbattista; Monaco, Sergio; Calvo, Francesco; Tamburino, Corrado
2011-06-01
The aim of this study is to verify the study hypothesis of the EXCEL trial by comparing percutaneous coronary intervention (PCI) and coronary artery bypass graft (CABG) in an EXCEL-like population of patients. The upcoming EXCEL trial will test the hypothesis that left main patients with SYNTAX score ≤ 32 experience similar rates of 3-year death, myocardial infarction (MI), or cerebrovascular accidents (CVA) following revascularization by PCI or CABG. We compared the 3-year rates of death/MI/CVA and death/MI/CVA/target vessel revascularization (MACCE) in 556 patients with left main disease and SYNTAX score ≤ 32 undergoing PCI (n = 285) or CABG (n = 271). To account for confounders, outcome parameters underwent extensive statistical adjustment. The unadjusted incidence of death/MI/CVA was similar between PCI and CABG (12.7% vs. 8.4%, P = 0.892), while MACCE were higher in the PCI group compared to the CABG group (27.0% vs. 11.8%, P < 0.001). After propensity score matching, PCI was not associated with a significant increase in the rate of death/MI/CVA (11.8% vs. 10.7%, P = 0.948), while MACCE were more frequently noted among patients treated with PCI (28.8% vs. 14.1%, P = 0.002). Adjustment by means of SYNTAX score and EUROSCORE, covariates with and without propensity score, and propensity score alone did not significantly change these findings. In an EXCEL-like cohort of patients with left main disease, there seems to be a clinical equipoise between PCI and CABG in terms of death/MI/CVA. However, even in patients with SYNTAX score ≤ 32, CABG is superior to PCI when target vessel revascularization is included in the combined endpoint. Copyright © 2011 Wiley-Liss, Inc.
Benedetto, Umberto; Altman, Douglas G; Gerry, Stephen; Gray, Alastair; Lees, Belinda; Flather, Marcus; Taggart, David P
2017-09-01
There is still little evidence to support routine dual antiplatelet therapy (DAPT) with P2Y12 antagonists following coronary artery bypass grafting (CABG). The Arterial Revascularization Trial (ART) was designed to compare 10-year survival after bilateral versus single internal thoracic artery grafting. We aimed to gain insight into the effect of DAPT (with clopidogrel) following CABG on 1-year outcomes by performing a post hoc ART analysis. Among patients enrolled in the ART (n = 3102), 609 (21%) and 2308 (79%) were discharged on DAPT or aspirin alone, respectively. The primary end-point was the incidence of major adverse cerebrovascular and cardiac events (MACCE) at 1 year, including cardiac death, myocardial infarction, cerebrovascular accident and reintervention; the safety end-point was bleeding requiring hospitalization. Propensity score (PS) matching was used to create comparable groups. Among 609 PS-matched pairs, MACCE occurred in 34 (5.6%) and 34 (5.6%) in the DAPT and aspirin alone groups, respectively, with no significant difference between the 2 groups [hazard ratio (HR) 0.97, 95% confidence interval (CI) 0.59-1.59; P = 0.90]. Only 188 (31%) subjects completed 1 year of DAPT, and in this subgroup, the MACCE rate was 5.8% (HR 1.11, 95% CI 0.53-2.30; P = 0.78). In the overall sample, the bleeding rate was higher in the DAPT group (2.3% vs 1.1%; P = 0.02), although this difference was no longer significant after matching (2.3% vs 1.8%; P = 0.54). Based on these findings, when compared with aspirin alone, DAPT with clopidogrel prescribed at discharge was not associated with a significant reduction of adverse cardiac and cerebrovascular events at 1 year following CABG. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Mammo, Dalia F; Cheng, Chin-I; Ragina, Neli P; Alani, Firas
This study seeks to identify factors associated with periprocedural complications of carotid artery stenting (CAS) to best understand CAS complication rates and optimize patient outcomes. Periprocedural complications include major adverse cardiovascular and cerebrovascular events (MACCE) that include myocardial infarction (MI), stroke, or death. We retrospectively analyzed 181 patients from Northern Michigan who underwent CAS. Rates of stroke, MI, and death occurring within 30 days post-procedure were examined. Associations of open vs. closed cell stent type, demographics, comorbidities, and symptomatic carotid stenosis were compared to determine significance. All patients had three NIH Stroke Scale (NIHSS) exams: at baseline, 24 h post-procedure, and at the one-month visit. Cardiac enzymes were measured twice in all patients within 24 h post-procedure. All patients were treated with dual anti-platelet therapy for at least 6 months post-procedure. Three patients (1.66%) experienced a major complication within one month post-procedure. These complications included one MI (0.55%), one stroke (0.55%), and one death (0.55%). The following factors were not associated with the occurrence of MACCE complications within 30 days post-procedure: stent design (open vs. closed cell) (p = 1.000), age ≥ 80 (p = 0.559), smoking history (p = 0.569), hypertension (p = 1.000), diabetes (p = 1.000), and symptomatic carotid stenosis (p = 0.254). Age 80 years or above, symptomatic carotid stenosis, open-cell stent design, and history of diabetes, smoking, or hypertension were not found to have an association with MACCE within 1 month after CAS. Future studies using a greater sample size will be beneficial to better assess periprocedural complication risks of CAS, while also considering the effect of operator experience and technological advancements on decreasing periprocedural complication rates. Copyright © 2017 Elsevier Inc. All rights reserved.
See, Kimberly A; Liu, Yao-Min; Ha, Yeyoung; Barile, Christopher J; Gewirth, Andrew A
2017-10-18
Magnesium batteries offer an opportunity to use naturally abundant Mg and achieve large volumetric capacities reaching over four times that of conventional Li-based intercalation anodes. High volumetric capacity is enabled by the use of a Mg metal anode in which charge is stored via electrodeposition and stripping processes, however, electrolytes that support efficient Mg electrodeposition and stripping are few and are often prepared from highly reactive compounds. One interesting electrolyte solution that supports Mg deposition and stripping without the use of highly reactive reagents is the magnesium aluminum chloride complex (MACC) electrolyte. The MACC exhibits high Coulombic efficiencies and low deposition overpotentials following an electrolytic conditioning protocol that stabilizes species necessary for such behavior. Here, we discuss the effect of the MgCl 2 and AlCl 3 concentrations on the deposition overpotential, current density, and the conditioning process. Higher concentrations of MACC exhibit enhanced Mg electrodeposition current density and much faster conditioning. An increase in the salt concentrations causes a shift in the complex equilibria involving both cations. The conditioning process is strongly dependent on the concentration suggesting that the electrolyte is activated through a change in speciation of electrolyte complexes and is not simply due to the annihilation of electrolyte impurities. Additionally, the presence of the [Mg 2 (μ-Cl) 3 ·6THF] + in the electrolyte solution is again confirmed through careful analysis of experimental Raman spectra coupled with simulation and direct observation of the complex in sonic spray ionization mass spectrometry. Importantly, we suggest that the ∼210 cm -1 mode commonly observed in the Raman spectra of many Mg electrolytes is indicative of the C 3v symmetric [Mg 2 (μ-Cl) 3 ·6THF] + . 
The 210 cm -1 mode is present in many electrolytes containing MgCl 2 , so its assignment is of broad interest to the Mg electrolyte community.
Papachristidis, Alexandros; Demarco, Daniela Cassar; Roper, Damian; Tsironis, Ioannis; Papitsas, Michael; Byrne, Jonathan; Alfakih, Khaled; Monaghan, Mark J
2017-01-01
In this study, we assess the clinical and cost-effectiveness of stress echocardiography (SE), as well as the place of SE in patients with high pretest probability (PTP) of coronary artery disease (CAD). We investigated 257 patients with no history of CAD, who underwent SE, and they had a PTP risk score >61% (high PTP). According to the National Institute for Health and Care Excellence guidance (NICE CG95, 2010), these patients should be investigated directly with an invasive coronary angiogram (ICA). We investigated those patients with SE initially and then with ICA when appropriate. Follow-up data with regard to Major Adverse Cardiac and Cerebrovascular Events (MACCE, defined as cardiovascular mortality, cerebrovascular accident (CVA), myocardial infarction (MI) and late revascularisation for acute coronary syndrome/unstable angina) were recorded for a period of 12 months following the SE. The tariff for SE and ICA is £300 and £1400, respectively. 106 patients had a positive SE (41.2%) and 61 of them (57.5%) had further investigation with ICA. 15 (24.6%) of these patients were revascularised. The average cost per patient for investigations was £654.09. If NICE guidance had been followed, the cost would have been significantly higher at £1400 (p<0.001). Overall, 5 MACCE (2.0%) were recorded; 4 (3.8%) in the group of positive SE (2 CVAs and 2 MIs) and 1 (0.7%) in the group of negative SE (1 CVA). There was no MI and no need for revascularisation in the negative SE group. Our approach to investigate patients who present with de novo chest pain and high PTP, with SE initially and subsequently with ICA when appropriate, reduces the cost significantly (a saving of £745.91 per patient) with a very low rate of MACCE. However, this study is underpowered to assess safety of SE.
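The cost comparison above amounts to simple arithmetic on the two tariffs: every patient pays for an SE, and only those referred on pay for an ICA. The sketch below uses the stated tariffs with illustrative counts (257 patients, 61 ICAs after a positive SE); it is not intended to reproduce the paper's exact £654.09 average, which reflects the study's full referral pattern.

```python
# Back-of-the-envelope sketch of the SE-first vs ICA-first cost comparison.
# Tariffs are from the abstract; the patient counts are illustrative only.
SE_TARIFF, ICA_TARIFF = 300.0, 1400.0

def average_cost(n_patients, n_ica):
    """Mean per-patient cost when everyone gets an SE and n_ica go on to ICA."""
    return (n_patients * SE_TARIFF + n_ica * ICA_TARIFF) / n_patients

se_first = average_cost(257, 61)   # illustrative: 61 ICAs after positive SE
saving = ICA_TARIFF - se_first     # vs sending every patient straight to ICA
print(f"SE-first: £{se_first:.2f}/patient, saving £{saving:.2f} vs ICA-first")
```

The qualitative conclusion is robust: as long as fewer than about 79% of patients proceed to ICA, the SE-first pathway costs less per patient than universal ICA at these tariffs.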
Torres, Carolina A; Sepúlveda, Gloria; Kahlaoui, Besma
2017-01-01
Sun-related physiological disorders such as sun damage on apples (Malus domestica Borkh) are caused by cumulative photooxidative and heat stress during the growing season, triggering morphological, physiological, and biochemical changes in fruit tissues not only while the fruit is on the tree but also after it has been harvested. The objective of this work was to establish the interaction of auxin (indole-3-acetic acid; IAA), abscisic acid (ABA), jasmonic acid (JA), salicylic acid (SA), and ethylene (ET) and its precursor ACC (free and conjugated, MACC) during the development of sun-injury-related disorders pre- and post-harvest on apples. Peel tissue was extracted from fruit growing under different sun exposures (Non-exposed, NE; Exposed, EX) and with sun injury symptoms (Moderate, Mod). Sampling was carried out every 15 days from 75 days after full bloom (DAFB) until 120 days post-harvest in cold storage (1°C, >90% RH). Concentrations of IAA, ABA, JA, and SA were determined using UHPLC mass spectrometry, and ET and ACC (free and conjugated MACC) using gas chromatography. IAA was found not to be related directly to sun injury development, but it decreased 60% in sun-exposed tissue and during fruit development. ABA, JA, SA, and ethylene concentrations were significantly higher (P ≤ 0.05) in Mod tissue, but their concentrations, except for ethylene, were not affected by sun exposure. ACC and MACC concentrations increased until 105 DAFB in all sun exposure categories. During post-harvest, the ethylene climacteric peak was delayed in EX compared to Mod. ABA and SA concentrations remained stable throughout storage in both tissues. JA increased dramatically post-harvest in both EX and Mod tissues and orchards, confirming its role in low-temperature tolerance. The results suggest that ABA, JA, and SA, together with ethylene, modulate some of the abiotic stress defense responses of sun-exposed fruit during photooxidative and heat stress on the tree.
Samim, Mariam; van der Worp, Bart; Agostoni, Pierfrancesco; Hendrikse, Jeroen; Budde, Ricardo P J; Nijhoff, Freek; Ramjankhan, Faiz; Doevendans, Pieter A; Stella, Pieter R
2017-02-15
This study aims to evaluate the safety and performance of the new embolic deflection device TriGuard™HDH in patients undergoing TAVR. Transcatheter aortic valve replacement (TAVR) is associated with a high incidence of new cerebral ischemic lesions. The use of an embolic protection device may reduce the frequency of TAVR-related embolic events. This prospective, single-arm feasibility pilot study included 14 patients with severe symptomatic aortic stenosis scheduled for TAVR. Cerebral diffusion weighted magnetic resonance imaging (DWI) was planned in all patients one day before and at day 4 (±2) after the procedure. Major adverse cerebral and cardiac events (MACCEs) were recorded for all patients. Primary endpoints of this study were I) device performance success, defined as coverage of the aortic arch takeoffs throughout the entire TAVR procedure, and II) MACCE occurrence. Secondary endpoints included the number and the volume of new cerebral ischemic lesions on DWI. Thirteen patients underwent transfemoral TAVR and one patient a transapical procedure. An Edwards SAPIEN valve prosthesis was implanted in 8 (57%) patients and a Medtronic CoreValve prosthesis in the remaining 6 (43%). Predefined performance success of the TriGuard™HDH device was achieved in 9 (64%) patients. The composite endpoint MACCE occurred in none of the patients. Post-procedural DWI was performed in 11 patients. Comparing the DWI of these patients to a historical control group showed no reduction in the number of new lesions [median 5.5 vs. 5.0, P = 0.857]; however, there was a significant reduction in mean lesion volume per patient [median 13.8 vs. 25.1, P = 0.049]. This study showed the feasibility and safety of using the TriGuard™HDH for cerebral protection during TAVR. The device did not decrease the number of post-procedural new cerebral DWI lesions, but its use was associated with a decreased lesion volume compared with unprotected TAVR. © 2016 Wiley Periodicals, Inc.
Systematic review of preoperative physical activity and its impact on postcardiac surgical outcomes.
Kehler, D Scott; Stammers, Andrew N; Tangri, Navdeep; Hiebert, Brett; Fransoo, Randy; Schultz, Annette S H; Macdonald, Kerry; Giacomontonio, Nicholas; Hassan, Ansar; Légaré, Jean-Francois; Arora, Rakesh C; Duhamel, Todd A
2017-08-11
The objective of this systematic review was to study the impact of preoperative physical activity levels on adult cardiac surgical patients' postoperative outcomes: (1) major adverse cardiac and cerebrovascular events (MACCEs), (2) adverse events within 30 days, (3) hospital length of stay (HLOS), (4) intensive care unit length of stay (ICU LOS), (5) activities of daily living (ADLs), (6) quality of life, (7) cardiac rehabilitation attendance and (8) physical activity behaviour. A systematic search of MEDLINE, Embase, AgeLine and the Cochrane library for cohort studies was conducted. Eleven studies (n=5733 patients) met the inclusion criteria. Only self-reported physical activity tools were used. Few studies used multivariate analyses to compare patients who were active versus inactive prior to surgery. When comparing patients who were active versus inactive preoperatively, there were mixed findings for MACCE, 30-day adverse events, HLOS and ICU LOS. Of the studies that adjusted for confounding variables, five found a protective, independent association between physical activity and MACCE (n=1), 30-day postoperative events (n=2), HLOS (n=1) and ICU LOS (n=1), but two found no protective association for 30-day postoperative events (n=1) and postoperative ADLs (n=1). No studies investigated whether activity status before surgery affected quality of life or cardiac rehabilitation attendance postoperatively. Three studies found that patients who were active prior to surgery were more likely to be inactive postoperatively. Given the mixed findings, the literature does not presently support that self-reported preoperative physical activity behaviour is associated with postoperative cardiac surgical outcomes. Future studies should objectively measure physical activity, clearly define outcomes and adjust for clinically relevant variables. Trial registration number NCT02219815. PROSPERO number CRD42015023606.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
An Analysis of CPA Firm Recruiters' Perceptions of Online Masters of Accounting Degrees
ERIC Educational Resources Information Center
Metrejean, Eddie; Noland, Thomas G.
2011-01-01
Online education continues to grow at a rapid pace. Assessment of the effectiveness of online programs is needed to differentiate legitimate programs from diploma mills. The authors examined the perceptions of CPA firm recruiters on whether an online Master of Accounting (MACC) matters in the hiring decision. Results show that recruiters do not…
Ansel, Gary M; Hopkins, L Nelson; Jaff, Michael R; Rubino, Paolo; Bacharach, J Michael; Scheinert, Dierk; Myla, Subbarao; Das, Tony; Cremonesi, Alberto
2010-07-01
The multicenter ARMOUR (ProximAl PRotection with the MO.MA Device DUring CaRotid Stenting) trial evaluated the 30-day safety and effectiveness of the MO.MA Proximal Cerebral Protection Device (Invatec, Roncadelle, Italy) utilized to treat high surgical risk patients undergoing carotid artery stenting (CAS). Distal embolic protection devices (EPD) have been traditionally utilized during CAS. The MO.MA device acts as a balloon occlusion "endovascular clamping" system to achieve cerebral protection prior to crossing the carotid stenosis. This prospective registry enrolled 262 subjects, 37 roll-in and 225 pivotal subjects evaluated with intention to treat (ITT) from September 2007 to February 2009. Subjects underwent CAS using the MO.MA device. The primary endpoint, myocardial infarction, stroke, or death through 30 days (30-day major adverse cardiac and cerebrovascular events [MACCE]), was compared to a performance goal of 13% derived from trials utilizing distal EPD. For the ITT population, the mean age was 74.7 years with 66.7% of the cohort being male. Symptomatic patients comprised 15.1% and 28.9% were octogenarians. Device success was 98.2% and procedural success was 93.2%. The 30-day MACCE rate was 2.7% [95% CI (1.0-5.8%)] with a 30-day major stroke rate of 0.9%. No symptomatic patient suffered a stroke during this trial. The ARMOUR trial demonstrated that the MO.MA® Proximal Cerebral Protection Device is safe and effective for high surgical risk patients undergoing CAS. The absence of stroke in symptomatic patients is the lowest rate reported in any independently adjudicated prospective multicenter registry trial to date. © 2010 Wiley-Liss, Inc.
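The confidence interval quoted for the 30-day MACCE rate can be reproduced approximately from the event count. A sketch, assuming 6 events among the 225 ITT subjects (6/225 = 2.7%) and using a Wilson score interval; the trial presumably used an exact method, which yields the reported 1.0-5.8%:

```python
# Wilson score interval for a binomial proportion -- an approximation to
# the exact interval reported for the 2.7% MACCE rate (6/225 assumed).
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval (lower, upper) for events/n."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

low, high = wilson_ci(6, 225)   # roughly 1.2% to 5.7%
```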
Report from Hawai'i: The Rising Tide of Arts Education in the Islands
ERIC Educational Resources Information Center
Wood, Paul
2005-01-01
The establishment of Maui Arts & Cultural Center (MACC), a community arts facility that prioritizes education at the top of its mission, has been a significant factor in the growth of arts education in Hawai'i. This article describes the role such a facility can play in the kind of educational reform that people envision, and the author's…
In this paper, impact of meteorology derived from the Weather, Research and Forecasting (WRF)– Non–hydrostatic Mesoscale Model (NMM) and WRF–Advanced Research WRF (ARW) meteorological models on the Community Multiscale Air Quality (CMAQ) simulations for ozone and its related prec...
Human-model hybrid Korean air quality forecasting system.
Chang, Lim-Seok; Cho, Ara; Park, Hyunju; Nam, Kipyo; Kim, Deokrae; Hong, Ji-Hyoung; Song, Chang-Keun
2016-09-01
The Korean national air quality forecasting system, consisting of the Weather Research and Forecasting model, the Sparse Matrix Operator Kernel Emissions model, and the Community Multiscale Air Quality (CMAQ) model, commenced on August 31, 2013 with target pollutants of particulate matter (PM) and ozone. Factors contributing to PM forecasting accuracy include the CMAQ inputs of meteorological fields and emissions, the forecasters' capacity, and inherent CMAQ limits. Four numerical experiments were conducted, covering two global meteorological inputs from the Global Forecast System (GFS) and the Unified Model (UM), two emission inventories from the Model Intercomparison Study Asia (MICS-Asia) and the Intercontinental Chemical Transport Experiment (INTEX-B) for Northeast Asia together with the Clean Air Policy Support System (CAPSS) for South Korea, and data assimilation of the Monitoring Atmospheric Composition and Climate (MACC) reanalysis. Significant PM underprediction was found with both emission inventories, for PM mass and for the major components (sulfate and organic carbon). CMAQ predicts PM2.5 much better than PM10 (NMB of PM2.5: -20~-25%; PM10: -43~-47%). Forecasters' errors usually occurred on the day after a high-PM event: once CMAQ fails to predict a high-PM event, forecasters are likely to dismiss the model prediction for the next day even when it turns out to be correct. The best combination of CMAQ inputs is the set of UM global meteorological fields with the MICS-Asia and CAPSS 2010 emissions, with an NMB of -12.3%, an RMSE of 16.6 μg/m³ and an R² of 0.68. By using MACC data as initial and boundary conditions, the performance of CMAQ would be improved, especially where the coarse emissions are undefined. A variety of methods, such as ensembles and data assimilation, are being considered to further improve the accuracy of the air quality forecast, especially for high-PM events, so that it becomes comparable to that for all cases.
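The skill scores quoted above (NMB, RMSE) are standard model-evaluation metrics; a minimal sketch with toy concentrations, not the study's data:

```python
# Normalised mean bias (NMB) and root-mean-square error (RMSE), the two
# metrics used to score CMAQ against observations (toy values below).
import math

def nmb(model, obs):
    """Normalised mean bias in %: 100 * sum(M - O) / sum(O)."""
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)

def rmse(model, obs):
    """Root-mean-square error, in the same units as the data."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

obs   = [40.0, 55.0, 30.0, 80.0]   # hypothetical observed PM10, ug/m3
model = [30.0, 40.0, 25.0, 50.0]   # hypothetical CMAQ predictions

bias = nmb(model, obs)   # negative => underprediction, as in the study
err  = rmse(model, obs)
```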
Growing use of the air quality forecast has led the public to demand strongly that the accuracy of the national forecast be improved. In this study, we investigated the problems in the current forecasting as well as various alternatives to solve them. Such efforts to improve the accuracy of the forecast are expected to contribute to the protection of public health by increasing the availability of the forecast system.
Corruption in Myanmar - Holding a Country and its People from Economic Prosperity
2014-10-30
censorship laws and freedom of information by banning independent newspapers, thereby repressing efforts towards democracy even further. 6 The SPP... censorship laws, insisting state officials return embezzled funds, signing and ratifying the United Nations Convention against Corruption (UNCAC), and...instill a culture of change. For example, in Malaysia, the government formed the Malaysian Anti-Corruption Commission (MACC), an independent watch
Lee, Ming-Chung; Shen, Yu-Chih; Wang, Ji-Hung; Li, Yu-Ying; Li, Tzu-Hsien; Chang, En-Ting; Wang, Hsiu-Mei
2017-01-01
Obstructive sleep apnea (OSA) is associated with poor cardiovascular outcomes and a high prevalence of anxiety and depression. This study investigated the effects of continuous positive airway pressure (CPAP) on the severity of anxiety and depression in OSA patients with or without coronary artery disease (CAD) and on the rate of cardiovascular and cerebrovascular events in those with OSA and CAD. This prospective study included patients with moderate-to-severe OSA, with or without a recent diagnosis of CAD; all were started on CPAP therapy. Patients completed the Chinese versions of the Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) at baseline and after 6 months of follow-up. The occurrence of major adverse cardiac and cerebrovascular events (MACCE) was assessed every 3 months up to 1 year. BAI scores decreased from 8.5 ± 8.4 at baseline to 5.4 ± 6.9 at 6 months in CPAP-compliant OSA patients without CAD (P < 0.05). BAI scores also decreased from 20.7 ± 14.9 to 16.1 ± 14.5 in CPAP-compliant OSA patients with CAD. BDI-II scores decreased in CPAP-compliant OSA patients without CAD (from 11.1 ± 10.7 at baseline to 6.6 ± 9.5 at 6 months) and in CPAP-compliant OSA patients with CAD (from 20.4 ± 14.3 to 15.9 ± 7.3). In addition, there was a large effect size (ES) for the BAI and BDI-II after 6 months of CPAP treatment in OSA patients with CAD, and a large ES in those with OSA under CPAP treatment. In OSA patients with CAD, the occurrence of MACCE was significantly lower in CPAP-compliant patients than in CPAP-noncompliant patients (11% compliant vs 50% noncompliant; P < 0.05). CPAP improved anxiety and depression in OSA patients regardless of CAD. In OSA patients with CAD, CPAP-compliant patients had a lower 1-year rate of MACCE than CPAP-noncompliant patients.
Sharma, Sharan P; Dahal, Khagendra; Khatra, Jaspreet; Rosenfeld, Alan; Lee, Juyong
2017-06-01
It is not clear whether percutaneous coronary intervention (PCI) is as effective and safe as coronary artery bypass grafting (CABG) for left main coronary artery disease. We aimed to perform a systematic review and meta-analysis of all randomized controlled trials (RCTs) that compared PCI and CABG in left main coronary disease. We searched PubMed, EMBASE, Cochrane, Scopus and relevant references for RCTs (inception through November 20, 2016, without language restrictions) and performed meta-analysis using a random-effects model. All-cause mortality, myocardial infarction, revascularization rate, stroke, and major adverse cardiac and cerebrovascular events (MACCE) were the measured outcomes. Six RCTs with a total population of 4700 were analyzed. There was no difference in all-cause mortality at 30-day, one-year, and five-year (1.8% vs 1.1%; OR 0.60; 95% CI: 0.26-1.39; P=.23; I²=9%) follow-up between PCI and CABG. The CABG group had fewer myocardial infarctions (MI) at five-year follow-up than PCI (5% vs 2.5%; OR 2.04; CI: 1.30-3.19; P=.002; I²=1%). Revascularization rate favored CABG at one-year (8.6% vs 4.5%; OR 2; CI: 1.46-2.73; P<.0001; I²=45%) and five-year (15.9% vs 9.9%; OR 1.73; CI: 1.36-2.20; P<.0001; I²=0%) follow-up. Although the stroke rate was lower in the PCI group at 1 year, there was no difference at longer follow-up. MACCE at 5 years favored CABG (24% vs 18%; OR 1.45; CI: 1.19-1.76; P=.0001; I²=0%). On subgroup analysis, MACCE did not differ between the two groups in the low-to-intermediate SYNTAX stratum, whereas it was higher for PCI in the high SYNTAX stratum. Percutaneous coronary intervention could be as safe and effective as CABG in a select group of patients with left main coronary artery disease. © 2017 John Wiley & Sons Ltd.
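The pooled odds ratios above come from a random-effects meta-analysis; a minimal stdlib sketch of the underlying inverse-variance pooling of log odds ratios, on hypothetical 2x2 tables (the paper's random-effects model additionally adds a DerSimonian-Laird between-study variance to each weight):

```python
# Inverse-variance (fixed-effect) pooling of log odds ratios from 2x2
# tables -- the core step behind meta-analytic ORs. Toy data only.
import math

def study_log_or(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance estimate
    return log_or, var

# Two hypothetical trials (not taken from the paper)
tables = [(12, 188, 6, 194), (20, 380, 11, 389)]

num = den = 0.0
for t in tables:
    lor, v = study_log_or(*t)
    w = 1.0 / v          # weight = inverse of the within-study variance
    num += w * lor
    den += w

pooled_or = math.exp(num / den)
```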
This study presents a comparative evaluation of the impact of WRF-NMM and WRF-ARW meteorology on CMAQ simulations of PM2.5, its composition and related precursors over the eastern United States with the intensive observations obtained by aircraft (NOAA WP-3), ship and ...
Unveiling the High Energy Obscured Universe: Hunting Collapsed Objects Physics
NASA Technical Reports Server (NTRS)
Ubertini, P.; Bazzano, A.; Cocchi, M.; Natalucci, L.; Bassani, L.; Caroli, E.; Stephen, J. B.; Caraveo, P.; Mereghetti, S.; Villa, G.
2005-01-01
A large part of the energy from space comes from collapsing stars (SNe, hypernovae) and collapsed stars (black holes, neutron stars and white dwarfs). The peak of their energy release is in the hard X-ray and gamma-ray wavelengths, where photons are insensitive to absorption and can travel from the edge of the Universe or the central core of the Galaxy without losing the primordial information of energy, time signature and polarization. The most efficient process for producing energetic photons is gravitational accretion of matter from a "normal" star onto a collapsed companion (L ≈ G M_coll (dM_acc/dt) / R_disc ~ (dM_acc/dt) c²), exceeding by far the capability of nuclear reactions to generate high-energy quanta. Thus our natural laboratories for "in situ" investigations are collapsed objects, in which matter and radiation co-exist in extreme conditions of temperature and density due to gravitationally bent geometry and magnetic fields. This is a unique opportunity to study the physics of accretion flows onto stellar-mass and super-massive black holes (SMBHs), plasmoids generated in relativistic jets in galactic microQSOs and AGNs, ionised plasma interacting at the touching point of a weakly magnetized NS surface, the GRB/supernova connection, and the mysterious origins of "dark" GRBs and X-ray flashes.
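The accretion formula above implies a radiative efficiency of order GM/(Rc²) per unit of accreted rest mass; an illustrative estimate with canonical neutron-star values (assumed for illustration, not taken from the abstract):

```python
# Accretion efficiency eps = GM/(R c^2) for a canonical 1.4 M_sun, 10 km
# neutron star, compared with the ~0.7% efficiency of hydrogen fusion.
# L = eps * (dM_acc/dt) * c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg

def accretion_efficiency(mass_kg, radius_m):
    """Fraction of accreted rest-mass energy released as radiation."""
    return G * mass_kg / (radius_m * C ** 2)

eps_acc = accretion_efficiency(1.4 * M_SUN, 1.0e4)  # ~0.2
eps_nuc = 0.007   # hydrogen -> helium fusion, ~0.7%
```

With these numbers accretion onto a neutron star releases roughly 20% of the rest-mass energy, far exceeding nuclear burning, which is the point the abstract's formula makes.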
Walsh, Simon J; Hanratty, Colm G; Watkins, Stuart; Oldroyd, Keith G; Mulvihill, Niall T; Hensey, Mark; Chase, Alex; Smith, Dave; Cruden, Nick; Spratt, James C; Mylotte, Darren; Johnson, Tom; Hill, Jonathan; Hussein, Hafiz M; Bogaerts, Kris; Morice, Marie-Claude; Foley, David P
2018-05-24
The aim of this study was to provide contemporary outcome data for patients with de novo coronary disease and Medina 1,1,1 lesions who were treated with a culotte two-stent technique, and to compare the performance of two modern-generation drug-eluting stent (DES) platforms, the 3-connector XIENCE and the 2-connector SYNERGY. Patients with Medina 1,1,1 bifurcation lesions who had disease that was amenable to culotte stenting were randomised 1:1 to treatment with XIENCE or SYNERGY DES. A total of 170 patients were included. Technical success and final kissing balloon inflation occurred in >96% of cases. Major adverse cardiovascular or cerebrovascular events (MACCE: a composite of death, myocardial infarction [MI], cerebrovascular accident [CVA] and target vessel revascularisation [TVR]) occurred in 5.9% of patients by nine months. The primary endpoint was a composite of death, MI, CVA, target vessel failure (TVF), stent thrombosis and binary angiographic restenosis. At nine months, the primary endpoint occurred in 19% of XIENCE patients and 16% of SYNERGY patients (p=0.003 for non-inferiority for platform performance). MACCE rates for culotte stenting using contemporary everolimus-eluting DES are low at nine months. The XIENCE and SYNERGY stents demonstrated comparable performance for the primary endpoint.
NASA Astrophysics Data System (ADS)
Posch, J. L.; Witte, A. J.; Engebretson, M. J.; Murr, D.; Lessard, M.; Raita, T.; Singer, H. J.
2010-12-01
Traveling convection vortices (TCVs), which appear in ground magnetometer records at near-cusp latitudes as solitary ~5 mHz pulses, are now known to originate in instabilities in the ion foreshock just upstream of Earth’s bow shock. They can also stimulate compressions or relaxations of the dayside magnetosphere (evident in geosynchronous satellite data). These transient compressions can in turn sharply increase the growth rate of electromagnetic ion cyclotron (EMIC) waves, which also appear in ground records at near-cusp latitudes as bursts of Pc 1-2 pulsations. In this study we have identified simultaneous TCV - Pc 1-2 burst events occurring from 2008 through the first 7 months of 2010 in Eastern Arctic Canada and Svalbard, using a combination of fluxgate magnetometers (MACCS and IMAGE) and search coil magnetometers in each region. Magnetometer observations at GOES 10 and 12, at longitudes near the MACCS sites, are also used to characterize the strength of the magnetic perturbations. There is no direct proportion between the amplitude of TCV and Pc 1-2 wave events in either region, consistent with the highly variable densities and pitch angle distributions of plasma of ring current / plasma sheet energies in the outer dayside magnetosphere.
Rufa, Magdalena; Schubel, Jens; Ulrich, Christian; Schaarschmidt, Jan; Tiliscan, Catalin; Bauer, Adrian; Hausmann, Harald
2015-07-01
At the moment, the main application of minimally invasive extracorporeal circulation (MiECC) is reserved for elective cardiac operations such as coronary artery bypass grafting (CABG) and/or aortic valve replacement. The purpose of this study was to compare the outcome of emergency CABG operations using either MiECC or conventional extracorporeal circulation (CECC) in patients requiring emergency CABG with regard to the perioperative course and the occurrence of major adverse cardiac and cerebral events (MACCE). We analysed the emergency CABG operations performed by a single surgeon, between January 2007 and July 2013, in order to exclude the differences in surgical technique. During this period, 187 emergency CABG patients (113 MiECC vs 74 CECC) were investigated retrospectively with respect to the following parameters: in-hospital mortality, MACCE, postoperative hospital stay and perioperative transfusion rate. The mean logistic European System for Cardiac Operative Risk Evaluation was higher in the CECC group (MiECC 12.1 ± 16 vs CECC 15.0 ± 20.8, P = 0.15) and the number of bypass grafts per patient was similar in both groups (MiECC 2.94 vs CECC 2.93). There was no significant difference in the postoperative hospital stay or in major postoperative complications. The in-hospital mortality was higher in the CECC group 6.8% versus MiECC 4.4% (P = 0.48). The perioperative transfusion rate was lower with MiECC compared with CECC (MiECC 2.6 ± 3.2 vs CECC 3.8 ± 4.2, P = 0.025 units of blood per patient). In our opinion, the use of MiECC in urgent CABG procedures is safe, feasible and shows no disadvantages compared with the use of CECC. Emergency operations using the MiECC system showed a significantly lower blood transfusion rate and better results concerning the unadjusted in-hospital mortality. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Buried Underwater Munitions and Clutter Discrimination
2010-10-01
closest point of approach of the cylinder. The k-space amplitude beam pattern, sin(Δ)/Δ, in Stanton's treatment is obtained from the Fourier ...simple modifications to be useful here. First, the amplitude of the incident plane wave P0 should be replaced by P1 r0/r, where P1 is the magnitude of...Instrument Source Information Site Selection MACC Phase I Input Location Resolution Age Bathymetry SEA Ltd. SWATHPlus McNinch
NASA Astrophysics Data System (ADS)
Topping, David O.; Allan, James; Rami Alfarra, M.; Aumont, Bernard
2017-06-01
Our ability to model the chemical and thermodynamic processes that lead to secondary organic aerosol (SOA) formation is thought to be hampered by the complexity of the system. While there are fundamental models now available that can simulate the tens of thousands of reactions thought to take place, validation against experiments is highly challenging. Techniques capable of identifying individual molecules such as chromatography are generally only capable of quantifying a subset of the material present, making it unsuitable for a carbon budget analysis. Integrative analytical methods such as the Aerosol Mass Spectrometer (AMS) are capable of quantifying all mass, but because of their inability to isolate individual molecules, comparisons have been limited to simple data products such as total organic mass and the O : C ratio. More detailed comparisons could be made if more of the mass spectral information could be used, but because a discrete inversion of AMS data is not possible, this activity requires a system of predicting mass spectra based on molecular composition. In this proof-of-concept study, the ability to train supervised methods to predict electron impact ionisation (EI) mass spectra for the AMS is evaluated. Supervised Training Regression for the Arbitrary Prediction of Spectra (STRAPS) is not built from first principles. A methodology is constructed whereby the presence of specific mass-to-charge ratio (m/z) channels is fitted as a function of molecular structure before the relative peak height for each channel is similarly fitted using a range of regression methods. The widely used AMS mass spectral database is used as a basis for this, using unit mass resolution spectra of laboratory standards. Key to the fitting process is choice of structural information, or molecular fingerprint. Our approach relies on using supervised methods to automatically optimise the relationship between spectral characteristics and these molecular fingerprints. 
Therefore, any internal mechanisms or instrument features impacting on fragmentation are implicitly accounted for in the fitted model. Whilst one might expect a collection of keys specifically designed according to EI fragmentation principles to offer a robust basis, the suitability of a range of commonly available fingerprints is evaluated. Using available fingerprints in isolation, initial results suggest the generic public MACCS fingerprints provide the most accurate trained model when combined with both decision trees and random forests, with median cosine angles of 0.94-0.97 between modelled and measured spectra. There is some sensitivity to choice of fingerprint, but most sensitivity is in choice of regression technique. Support vector machines perform the worst, with median values of 0.78-0.85 and lower ranges approaching 0.4, depending on the fingerprint used. More detailed analysis of modelled versus measured mass spectra demonstrates important composition-dependent sensitivities on a compound-by-compound basis. This is further demonstrated when we apply the trained methods to a model α-pinene SOA system, using output from the GECKO-A model. This shows that use of a generic fingerprint referred to as FP4 and one designed for vapour pressure predictions (Nannoolal) gives plausible mass spectra, whilst the use of the MACCS keys in isolation performs poorly in this application, demonstrating the need for evaluating model performance against other SOA systems rather than existing laboratory databases on single compounds. Given the limited number of compounds used within the AMS training dataset, it is difficult to prescribe which combination of approaches would lead to a robust generic model across all expected compositions. Nonetheless, the study demonstrates the use of a methodology that would be improved with more training data, fingerprints designed explicitly for fragmentation mechanisms occurring within the AMS, and data from additional mixed systems for further validation. To facilitate further development of the method, including application to other instruments, the model code for re-training is provided via a public GitHub and Zenodo software repository.
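The "median cosine angle" used to score agreement between modelled and measured spectra is the cosine similarity between the two peak-intensity vectors; a minimal sketch with toy intensity vectors, not AMS data:

```python
# Cosine similarity between two unit-mass-resolution spectra treated as
# intensity vectors; 1.0 means identical relative peak patterns.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical m/z peak intensities (not from the AMS database)
measured = [0.0, 10.0, 55.0, 30.0, 5.0]
modelled = [0.0, 12.0, 50.0, 28.0, 8.0]

score = cosine_similarity(measured, modelled)  # close to 1 for similar spectra
```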
NASA Astrophysics Data System (ADS)
López-Aparicio, Susana; Guevara, Marc; Thunis, Philippe; Cuvelier, Kees; Tarrasón, Leonor
2017-04-01
This study shows the capabilities of a benchmarking system to identify inconsistencies in emission inventories and to evaluate the reasons behind discrepancies as a means to improve both bottom-up and downscaled emission inventories. Fine-scale bottom-up emission inventories for seven urban areas in Norway are compared with three regional emission inventories, EC4MACS, TNO_MACC-II and TNO_MACC-III, downscaled to the same areas. The comparison shows discrepancies in nitrogen oxides (NOx) and particulate matter (PM2.5 and PM10) when evaluating both total and sectorial emissions. The three regional emission inventories underestimate NOx and PM10 traffic emissions by approximately 20-80% and 50-90%, respectively. The main reasons for the underestimation of PM10 traffic emissions in the regional inventories relate to non-exhaust emissions from resuspension, which are included in the bottom-up emission inventories but missing from the official national emissions, and therefore from the downscaled regional inventories. The benchmarking indicates that the most probable reason behind the underestimation of NOx traffic emissions by the regional inventories is the activity data: the fine-scale NOx traffic emissions from the bottom-up inventories are based on actual traffic volumes at the road-link level and are much higher than the NOx emissions downscaled from national estimates based on fuel sales and allocated by population for the urban areas. We have identified important discrepancies in PM2.5 emissions from wood burning for residential heating among all the inventories. These discrepancies are associated with the assumptions made for the allocation of emissions. In the EC4MACS inventory, such assumptions imply a high underestimation of PM2.5 emissions from the residential combustion sector in urban areas, ranging from 40 to 90% compared with the bottom-up inventories.
The study shows that in three of the seven Norwegian cities there is a need for further improvement of the emission inventories.
NASA Astrophysics Data System (ADS)
Yahya, Khairunnisa; He, Jian; Zhang, Yang
2015-12-01
Multiyear applications of an online-coupled meteorology-chemistry model allow an assessment of the variation trends in simulated meteorology and air quality, and of their responses to changes in emissions and meteorology, as well as the impacts of initial and boundary conditions (ICONs/BCONs) on simulated aerosol-cloud-radiation interactions over a period of time. In this work, the Weather Research and Forecasting model with Chemistry version 3.4.1 (WRF/Chem v. 3.4.1) with the 2005 Carbon Bond mechanism coupled with the Volatility Basis Set module for secondary organic aerosol formation (WRF/Chem-CB05-VBS) is applied for multiple years (2001, 2006, and 2010) over the continental U.S. This work also examines the changes in simulated air quality and meteorology due to changes in emissions and meteorology and the model's capability in reproducing the observed variation trends in species concentrations from 2001 to 2010. In addition, the impacts of the chemical ICONs/BCONs on model predictions are analyzed. ICONs/BCONs are downscaled from two global models, the modified Community Earth System Model/Community Atmosphere model version 5.1 (CESM/CAM v5.1) and the Monitoring Atmospheric Composition and Climate model (MACC). The evaluation of WRF/Chem-CB05-VBS simulations with the CESM ICONs/BCONs for 2001, 2006, and 2010 shows that temperature at 2 m (T2) is underpredicted for all three years, likely due to inaccuracies in soil moisture and soil temperature, resulting in biases in surface relative humidity, wind speed, and precipitation. With the exception of cloud fraction, other aerosol-cloud variables including aerosol optical depth, cloud droplet number concentration, and cloud optical thickness are underpredicted for all three years, resulting in overpredictions of radiation variables. The model performs well for O3 and particulate matter with diameter less than or equal to 2.5 μm (PM2.5) for all three years, comparable to other studies in the literature.
The model is able to reproduce observed annual average trends in O3 and PM2.5 concentrations from 2001 to 2006 and from 2006 to 2010 but is less skillful in simulating their observed seasonal trends. The 2006 and 2010 results using CESM and MACC ICONs/BCONs are compared to analyze the impact of ICONs/BCONs on model performance and their feedbacks to aerosols, clouds, and radiation. Compared with the simulations using MACC ICONs/BCONs, those with the CESM ICONs/BCONs improve the performance of O3 mixing ratios (e.g., the normalized mean bias for maximum 8 h O3 is reduced from -17% to -1% in 2010), PM2.5 in 2010, and sulfate in 2006 (despite a slightly larger normalized mean bias for PM2.5 in 2006). The impacts of different ICONs/BCONs on simulated aerosol-cloud-radiation variables are not negligible, with larger impacts in 2006 compared to 2010.
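The normalized mean bias quoted above (e.g., a reduction from -17% to -1% for maximum 8 h O3) is a standard model-evaluation metric. A minimal sketch with hypothetical paired model/observation values (not data from the study):

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs), expressed as a percentage."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 100.0 * np.sum(model - obs) / np.sum(obs)

# Hypothetical paired daily max-8h O3 values (ppb): model vs. observations
model = [42.0, 55.0, 48.0, 60.0]
obs = [50.0, 58.0, 52.0, 64.0]
print(round(normalized_mean_bias(model, obs), 1))  # -8.5 (negative => underprediction)
```

Because NMB sums before dividing, a few large observed values dominate the denominator, which is why it is usually reported alongside the normalized mean error.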
The Olympus satellite and satellite direct broadcasting in Italy
NASA Astrophysics Data System (ADS)
Castelli, E.; Tirro, S.
Plans for the development of DBS-TV technology in Italy are discussed from the perspective of the Italian electronics industry, with an emphasis on experimental broadcasts using the Olympus satellite channel assigned to Italy by ESA. Consideration is given to the operating characteristics of PAL, MAC-C, MAC-D2, extended-MAC, and MUSE color-TV systems and their compatibility with DBS; the planned availability of TV channels on Olympus-type and Italsat-type satellites; individual, community, and CATV reception of DBS signals; the projected growth of the DBS audience in Italy, the UK, and the FRG by 1999; and the potential Italian market for satellite receivers and antennas. The need for prompt completion and evaluation of the Olympus experiments (beginning in 1987) and selection of the systems to be implemented, so that the industry can supply the home equipment required on time, is stressed. Tables of numerical data and maps of the Olympus coverage areas are provided.
NASA Astrophysics Data System (ADS)
Verstraeten, W. W.; Boersma, K. F.; Douros, J.; Williams, J. E.; Eskes, H.; Delcloo, A. W.
2017-12-01
High nitrogen oxide (NOX = NO + NO2) concentrations near the surface adversely affect humans and ecosystems and play a key role in tropospheric chemistry. NO2 is an important precursor of tropospheric ozone (O3), which in turn affects the production of the hydroxyl radical controlling the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. Combustion from industrial, traffic and household activities in large and densely populated urban areas results in high NOX emissions. Accurate mapping of these emissions is essential but difficult, since reported emission factors may differ from actual emissions by an order of magnitude. Modelled NO2 levels and lifetimes also have large associated uncertainties, and overestimation of the chemical lifetime may mask missing NOX chemistry in current chemistry transport models (CTMs). Simultaneously estimating both the NO2 lifetime and the concentrations, by applying the Exponentially Modified Gaussian (EMG) method to tropospheric NO2 line densities, should improve surface NOX emission estimates. Here we evaluate whether the EMG methodology applied to the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM can reproduce the NOX emissions used as model input. First, we process the modelled tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (mean vertical-average wind speeds between the surface and 500 m from ECMWF > 2 m s-1), as well as the accompanying OMI (Ozone Monitoring Instrument) data, which provide real-time observation-based estimates of midday NO2 columns. Then we compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the CTM as input to simulate the NO2 columns.
For cities where NOX emissions can be assumed to originate from one large source, good agreement is found between the top-down NOX emissions derived from the CTM and from OMI and the MACC-III inventory. For cities where multiple sources of NOX are observed (e.g. Brussels, London), an adapted methodology is required. For some cities, such as St-Petersburg and Moscow, the top-down NOX estimates from 2013 OMI data are biased low compared to the MACC-III inventory, which uses a 2011 NOX emissions update.
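The EMG approach described above fits along-wind NO2 line densities with an exponentially modified Gaussian, so that the fitted e-folding distance, combined with the mean wind speed, yields an effective NO2 lifetime. A minimal sketch on synthetic data — the functional form follows the common Beirle-type parameterization, and all numbers are illustrative, not results from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def emg(x, E, x0, mu, sigma, B):
    """Exponentially modified Gaussian for along-wind NO2 line densities:
    E ~ emission-related amplitude, x0 = e-folding (decay) distance,
    mu/sigma = source location/width, B = background level."""
    return (E / x0) * np.exp(mu / x0 + sigma**2 / (2 * x0**2) - x / x0) \
        * norm.cdf((x - mu) / sigma - sigma / x0) + B

x = np.linspace(-100.0, 300.0, 200)            # km along the mean wind direction
true_params = (5.0, 80.0, 0.0, 20.0, 0.5)      # synthetic "truth"
y = emg(x, *true_params) + np.random.default_rng(0).normal(0.0, 1e-4, x.size)

popt, _ = curve_fit(emg, x, y, p0=(4.0, 60.0, 5.0, 15.0, 0.4))
x0_km = popt[1]
tau_hours = x0_km * 1e3 / 5.0 / 3600.0         # lifetime for an assumed 5 m/s wind
```

With the fitted decay distance and amplitude, a top-down NOX flux can then be compared against an inventory value, as the abstract does with MACC-III.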
GIRAFE, a campaign forecast tool for anthropogenic and biomass burning plumes
NASA Astrophysics Data System (ADS)
Fontaine, Alain; Mari, Céline; Drouin, Marc-Antoine; Lussac, Laure
2015-04-01
GIRAFE (reGIonal ReAl time Fire plumEs, http://girafe.pole-ether.fr, alain.fontaine@obs-mip.fr) is a forecast tool supported by the French atmospheric chemistry data centre Ether (CNES and CNRS), built on the Lagrangian particle dispersion model FLEXPART coupled with ECMWF meteorological fields and emission inventories. GIRAFE was used during the CHARMEX campaign (Chemistry-Aerosol Mediterranean Experiment, http://charmex.lsce.ipsl.fr) to provide daily 5-day plume trajectory forecasts over the Mediterranean Sea. For this field experiment, the Lagrangian model was used to simulate carbon monoxide pollution plumes emitted by either anthropogenic or biomass burning sources. Sources from major industrial areas such as Fos-Berre or the Po valley were extracted from the MACC-TNO inventory. Biomass burning sources were estimated from MODIS fire detections. Comparison with the MACC and CHIMERE APIFLAME models revealed that GIRAFE followed pollution plumes from small and short-duration fires that were not captured by lower-resolution models. GIRAFE was used as a decision-making tool to schedule field campaign activities such as airborne operations or balloon launches. Thanks to recent developments, GIRAFE is able to read inventories from the ECCAD database (http://eccad.pole-ether.fr). Global inventories such as MACCity and ECLIPSE will be used to predict CO plume trajectories from major urban and industrial sources over West Africa for the DACCIWA campaign (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa).
Variations of trace gases over the Bay of Bengal during the summer monsoon
NASA Astrophysics Data System (ADS)
Girach, I. A.; Ojha, Narendra; Nair, Prabha R.; Tiwari, Yogesh K.; Kumar, K. Ravi
2018-02-01
In situ measurements of near-surface ozone (O3), carbon monoxide (CO), and methane (CH4) were carried out over the Bay of Bengal (BoB) as part of the Continental Tropical Convergence Zone (CTCZ) campaign during the summer monsoon season of 2009. O3, CO and CH4 mixing ratios varied in the ranges of 8-54 ppbv, 50-200 ppbv and 1.57-2.15 ppmv, respectively, during 16 July-17 August 2009. The spatial distribution of mean tropospheric O3 from satellite retrievals is found to be similar to that in surface O3 observations, with higher levels over the coastal and northern BoB as compared to the central BoB. The comparison of in situ measurements with the Monitoring Atmospheric Composition & Climate (MACC) global reanalysis shows that MACC simulations reproduce the observations with small mean biases of 1.6 ppbv, -2.6 ppbv and 0.07 ppmv for O3, CO and CH4, respectively. The analysis of the diurnal variation of O3, based on observations and on simulations from the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) at a stationary point over the BoB, did not show a net photochemical build-up during daytime. Satellite retrievals show limitations in capturing CH4 variations as measured by in situ sample analysis, highlighting the need for more shipborne in situ measurements of trace gases over this region during the monsoon.
Five-year outcomes of staged percutaneous coronary intervention in the SYNTAX study.
Watkins, Stuart; Oldroyd, Keith G; Preda, Istvan; Holmes, David R; Colombo, Antonio; Morice, Marie-Claude; Leadley, Katrin; Dawkins, Keith D; Mohr, Friedrich W; Serruys, Patrick W; Feldman, Ted E
2015-04-01
The SYNTAX study compared PCI with TAXUS Express stents to CABG for the treatment of de novo 3-vessel and/or left main coronary disease. This study aimed to determine patient characteristics and five-year outcomes after a staged PCI strategy compared to single-session PCI. In the SYNTAX trial, staged procedures were discouraged but were allowed within 72 hours or, if renal insufficiency or contrast-induced nephropathy occurred, within 14 days (mean 9.8±18.1 days post initial procedure). A total of 125 (14%) patients underwent staged PCI. These patients had greater disease severity and/or required a more complex procedure. MACCE was significantly increased in staged patients (48.1% vs. 35.5%, p=0.004), as was the composite of death/stroke/MI (32.2% vs. 19%, p=0.0007). Individually, cardiac death and stroke occurred more frequently in the staged PCI group (p=0.03). Repeat revascularisation was significantly higher in staged patients (32.8% vs 24.8%, p=0.035), as was stent thrombosis (10.9% vs. 4.7%, p=0.005). There is a higher incidence of MACCE in patients undergoing staged compared to single-session PCI for 3-vessel and/or left main disease over the first five years of follow-up. However, these patients had more comorbidities and more diffuse disease.
Orbital-Dependent Density Functionals for Chemical Catalysis
2014-10-17
noncollinear density functional theory to show that the low-spin state of Mn3 in a model of the oxygen-evolving complex of photosystem II avoids...DK, which denotes the cc-pV5Z-DK basis set for 3d metals and hydrogen and the ma-cc-pV5Z-DK basis set for oxygen) and to nonrelativistic all...cc-pV5Z basis set for oxygen). As compared to NCBS-DK results, all ECP calculations perform worse than def2-TZVP all-electron relativistic
Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra
NASA Astrophysics Data System (ADS)
Khamatnurova, Marina; Gribanov, Konstantin
2016-04-01
The Levenberg-Marquardt method [1] with an iteratively adjusted damping parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. The retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. A modified version of the FIRE-ARMS software package [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS Data // JQSRT. 2001. V. 68. № 4. P. 435-451. 4. http://www.ecmwf.int
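The core of such a retrieval is a damped Gauss-Newton iteration whose damping parameter is adjusted at each step. A minimal self-contained sketch of Levenberg-Marquardt with iterative damping adjustment, applied to a toy exponential-decay fit (illustrative only; the paper's actual scheme, averaging-kernel evaluation, and radiative transfer model are not reproduced here):

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, y, lam=1e-2, tol=1e-10, max_iter=100):
    """Minimal LM solver with an iteratively adjusted damping parameter lam."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum((y - f(p))**2)
    for _ in range(max_iter):
        r = y - f(p)
        J = jac(p)
        A = J.T @ J
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        cost_new = np.sum((y - f(p + step))**2)
        if cost_new < cost:   # accept step, relax damping (toward Gauss-Newton)
            p, cost, lam = p + step, cost_new, lam * 0.5
        else:                 # reject step, increase damping (toward gradient descent)
            lam *= 2.0
    return p

# Toy "retrieval": fit y = a * exp(-b t) starting from a deliberately wrong guess
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.7 * t)

f = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_fit = levenberg_marquardt(f, jac, [1.0, 1.0], y)
```

Halving the damping on accepted steps and doubling it on rejected ones is the classic adjustment rule; the paper's "adjusted" variant refines this choice, but the accept/reject structure is the same.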
Performance evaluation of CESM in simulating the dust cycle
NASA Astrophysics Data System (ADS)
Parajuli, S. P.; Yang, Z. L.; Kocurek, G.; Lawrence, D. M.
2014-12-01
Mineral dust in the atmosphere has implications for Earth's radiation budget, biogeochemical cycles, hydrological cycles, human health and visibility. Mineral dust is injected into the atmosphere during dust storms, when surface winds are sufficiently strong and land surface conditions are favorable. Dust storms are very common in specific regions of the world, including the Middle East and North Africa (MENA) region, which contains more than 50% of the global dust sources. In this work, we present simulations of the dust cycle within the framework of CESM1.2.2 and evaluate how well the model captures the spatio-temporal characteristics of dust sources, transport and deposition at the global scale, especially in dust source regions. We conducted our simulations using two existing erodibility maps (geomorphic and topographic) and a new erodibility map based on the correlation between observed wind and dust. We compare the simulated results with MODIS satellite data, MACC reanalysis data, and AERONET station data. Comparison with MODIS satellite data and MACC reanalysis data shows that all three erodibility maps generally reproduce the spatio-temporal characteristics of dust optical depth globally. However, comparison with AERONET station data shows that the simulated dust optical depth is generally overestimated for all erodibility maps. Results vary greatly by region and by the scale of the observational data. Our results also show that simulations forced by reanalysis meteorology capture the overall dust cycle more realistically than simulations using online meteorology.
Influence of Northeast Monsoon cold surges on air quality in Southeast Asia
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Latif, M. T.; Samah, A. A.; Mead, M. I.; Harris, N. R. P.
2017-10-01
Ozone (O3) is an important ground-level pollutant. O3 levels and emissions of O3 precursors have increased significantly over recent decades in East Asia and export of this O3 eastward across the Pacific Ocean is well documented. Here we show that East Asian O3 is also transported southward to tropical Southeast (SE) Asia during the Northeast Monsoon (NEM) season (defined as November to February), and that this transport pathway is especially strong during 'cold surges'. Our analysis employs reanalysis data and measurements from surface sites in Peninsular Malaysia, both covering 2003-2012, along with trajectory calculations. Using a cold surge index (northerly winds at 925 hPa averaged over 105-110°E, 5°N) to define sub-seasonal strengthening of the NEM winds, we find the largest changes in a region covering much of the Indochinese Peninsula and surrounding seas. Here, the levels of O3 and another key pollutant, carbon monoxide, calculated by the Monitoring Atmospheric Composition and Climate (MACC) Reanalysis are on average elevated by, respectively, >40% (∼15 ppb) and >60% (∼80 ppb) during cold surges. Further, in the broader region of SE Asia local afternoon exceedances of the World Health Organization's air quality guideline for O3 (100 μg m-3, or ∼50 ppb, averaged over 8 h) largely occur during these cold surges. Day-to-day variations in available O3 observations at surface sites on the east coast of Peninsular Malaysia and in corresponding parts of the MACC Reanalysis are similar, and are clearly linked to cold surges. However, observed O3 levels are typically ∼10-20 ppb lower than the MACC Reanalysis. We show that these observations are also subject to influence from local urban pollution. In agreement with past work, we find year-to-year variations in cold surge activity related to the El Nino-Southern Oscillation (ENSO), but this does not appear to be the dominant influence of ENSO on atmospheric composition in this region. 
Overall, our study indicates that the influence of East Asian pollution on air quality in SE Asia during the NEM could be at least as large as the corresponding, well-studied spring-time influence on North America. Both an enhanced regional observational capability and chemical modelling studies will be required to fully untangle the importance of this long-range influence relative to local processes.
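The cold surge index used above is an area-mean of the 925 hPa meridional wind over 105-110°E at 5°N, thresholded to pick out strong northerly (negative-v) flow. A minimal sketch on synthetic gridded data — the random winds and the threshold value are illustrative, and the study's exact criterion may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
lons = np.arange(100.0, 116.0, 1.0)          # degrees east, 1-degree grid
n_days = 120                                 # roughly one NEM season (Nov-Feb)

# Hypothetical daily 925 hPa meridional wind v (m/s) at 5 deg N;
# negative v = northerly flow, the surge direction described in the text.
v = rng.normal(-2.0, 3.0, size=(n_days, lons.size))

box = (lons >= 105.0) & (lons <= 110.0)      # averaging sector 105-110E
index = v[:, box].mean(axis=1)               # daily cold surge index

surge = index < -5.0                         # illustrative northerly-wind threshold
```

Compositing MACC O3 and CO over surge versus non-surge days, as in the abstract, would then quantify the reported >40% and >60% enhancements.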
Nagappa, Mahesh; Ho, George; Patra, Jayadeep; Wong, Jean; Singh, Mandeep; Kaw, Roop; Cheng, Davy; Chung, Frances
2017-12-01
Obstructive sleep apnea (OSA) is a common comorbidity in patients undergoing cardiac surgery and may predispose patients to postoperative complications. The purpose of this meta-analysis is to determine the evidence of postoperative complications associated with OSA patients undergoing cardiac surgery. A literature search of Cochrane Database of Systematic Reviews, Medline, Medline In-process, Web of Science, Scopus, EMBASE, Cochrane Central Register of Controlled Trials, and CINAHL until October 2016 was performed. The search was constrained to studies in adult cardiac surgical patients with diagnosed or suspected OSA. All included studies must report at least 1 postoperative complication. The primary outcome is major adverse cardiac or cerebrovascular events (MACCEs) up to 30 days after surgery, which includes death from all-cause mortality, myocardial infarction, myocardial injury, nonfatal cardiac arrest, revascularization process, pulmonary embolism, deep venous thrombosis, newly documented postoperative atrial fibrillation (POAF), stroke, and congestive heart failure. The secondary outcome is newly documented POAF. The other exploratory outcomes include the following: (1) postoperative tracheal intubation and mechanical ventilation; (2) infection and/or sepsis; (3) unplanned intensive care unit (ICU) admission; and (4) duration of stay in hospital and ICU. Meta-analysis and meta-regression were conducted using Cochrane Review Manager 5.3 (Cochrane, London, UK) and OpenBUGS v3.0, respectively. Eleven comparative studies were included (n = 1801 patients; OSA versus non-OSA: 688 vs 1113, respectively). The odds of MACCEs were 33.3% higher in OSA versus non-OSA patients (OSA versus non-OSA: 31% vs 10.6%; odds ratio [OR], 2.4; 95% confidence interval [CI], 1.38-4.2; P = .002). The odds of newly documented POAF (OSA versus non-OSA: 31% vs 21%; OR, 1.94; 95% CI, 1.13-3.33; P = .02) were also higher in OSA compared to non-OSA patients.
Even though the rates of postoperative tracheal intubation and mechanical ventilation (OSA versus non-OSA: 13% vs 5.4%; OR, 2.67; 95% CI, 1.03-6.89; P = .04) were significantly higher in OSA patients, the lengths of ICU stay and hospital stay were not significantly prolonged in patients with OSA compared to non-OSA. The majority of OSA patients were not treated with continuous positive airway pressure therapy. Meta-regression and sensitivity analysis of the subgroups did not impact the OR of postoperative complications for OSA versus non-OSA groups. Our meta-analysis demonstrates that after cardiac surgery, the odds of MACCEs and newly documented POAF were 33.3% and 18.1% higher, respectively, in OSA versus non-OSA patients.
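The odds ratios reported above come from pooling 2x2 tables of event counts. For a single hypothetical study, the OR and its Woolf (log-normal) 95% CI can be computed as below; the counts are invented for illustration and are not the meta-analysis data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf method):
    a = exposed events, b = exposed non-events,
    c = unexposed events, d = unexposed non-events."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical single-study counts (NOT the pooled data from the abstract)
or_, lo, hi = odds_ratio_ci(30, 70, 15, 135)   # OR ≈ 3.86
```

A pooled estimate such as the OR of 2.4 quoted above would combine the log-ORs of all eleven studies, weighted by the inverse of their variances.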
Efficacy of multiple arterial coronary bypass grafting in patients with diabetes mellitus.
Yamaguchi, Atsushi; Kimura, Naoyuki; Itoh, Satoshi; Adachi, Koichi; Yuri, Koichi; Okamura, Homare; Adachi, Hideo
2016-09-01
Use of the left internal mammary artery in patients with diabetes mellitus and multivessel coronary artery disease is known to improve survival after coronary artery bypass grafting (CABG); however, the survival benefit of multiple arterial grafts (MAGs) in diabetic patients is debated. We investigated the efficacy of CABG performed with MAGs in diabetic patients. The overall patient group comprised 2618 consecutive patients who underwent isolated CABG at our hospital between 1990 and 2014. Perioperative characteristics, in-hospital outcomes and long-term outcomes were compared between diabetic (n = 1110) and non-diabetic patients (n = 1508). The long-term outcomes of diabetic and non-diabetic patients were compared between those who received a single arterial graft (SAG) and those who received MAGs. Both full unmatched patient population and propensity-matched patient population analyses (diabetic cohort = 431 pairs, non-diabetic cohort = 577 pairs) were performed. Preoperative comorbidities were much more common in the diabetic patients than in the non-diabetic patients; however, comorbidities were not associated with in-hospital outcomes (diabetes versus non-diabetes group, in-hospital mortality: 2.2 vs 1.5%; deep sternal wound infection: 2.2 vs 1.8%, P > 0.05). Although survival and freedom from major adverse cardiac and cerebrovascular events (MACCEs) at 15 years were lower in the diabetes group than in the non-diabetes group (survival: 48.6 vs 55.0%, P = 0.019; MACCE-free survival: 40.8 vs 46.1%, P = 0.02), cardiac death-free survival at 15 years was similar (81.7 vs 83.9%, P = 0.24). Overall, 12-year survival was higher in both diabetic and non-diabetic patients treated with MAGs than in those treated with an SAG (64.9 vs 56.8%, P = 0.006, and 71.9 vs 60.5%, P < 0.001).
Propensity-matched patient cohort analysis revealed improved 12-year survival with MAGs versus SAG in both the diabetes group (64.9 vs 58.8%, P = 0.041) and non-diabetes group (71.4 vs 63.8%, P = 0.014). Similarly, MACCE-free survival was improved in both groups. A long-term survival advantage, with no increase in perioperative morbidity, is conferred with the use of multiple arterial bypass grafts not only in non-diabetic patients but also in diabetic patients. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
The joint methane profiles retrieval approach from GOSAT TIR and SWIR spectra
NASA Astrophysics Data System (ADS)
Zadvornykh, Ilya V.; Gribanov, Konstantin G.; Zakharov, Vyacheslav I.; Imasu, Ryoichi
2017-11-01
In this paper we present a method, using methane as an example, that allows more accurate retrieval of greenhouse gases in the Earth's atmosphere. Using the new version of the FIRE-ARMS software, supplemented with the VLIDORT vector radiative transfer model, we carried out joint methane retrieval from TIR (Thermal Infrared) and SWIR (Short-Wavelength Infrared) GOSAT spectra using the optimal estimation method. MACC reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF), supplemented by data from aircraft measurements of the HIPPO experiment, were used as a statistical ensemble.
Lee, Michael S; Shlofmitz, Evan; Mansourian, Pejman; Sethi, Sanjum; Shlofmitz, Richard A
2016-11-01
We evaluated the relationship between gender and angiographic and clinical outcomes in patients with severely calcified lesions who underwent orbital atherectomy. Female gender is associated with increased risk of adverse clinical events after percutaneous coronary intervention (PCI). Severe coronary artery calcification increases the complexity of PCI and increases the risk of adverse cardiac events. Orbital atherectomy is effective in plaque modification, which facilitates stent delivery and expansion. Whether gender differences exist after orbital atherectomy is unclear. We retrospectively analyzed 458 consecutive real-world patients (314 males and 144 females) from three centers who underwent orbital atherectomy. The primary endpoint was the major adverse cardiac and cerebrovascular event (MACCE) rate, defined as the composite of death, myocardial infarction (MI), target-vessel revascularization (TVR), and stroke, at 30 days. The primary endpoint of MACCE was low and similar in females and males (0.7% vs 2.9%; P=.14). The individual endpoints of death (0.7% vs 1.6%; P=.43), MI (0.7% vs 1.3%; P=.58), TVR (0% vs 0%; P>.99), and stroke (0% vs 0.3%; P=.50) were low in both groups and did not differ. Angiographic complications were low: perforation (0.8% vs 0.7%; P>.90), dissection (0.8% vs 1.1%; P=.80), and no-reflow (0.8% vs 0.7%; P>.90). Plaque modification with orbital atherectomy was safe and provided similar angiographic and clinical outcomes between females and males. Randomized trials with longer-term follow-up are needed to support our results.
An overview of cancer research in South African academic and research institutions, 2013 - 2014.
Moodley, Jennifer; Stefan, D Cristina; Sewram, Vikash; Ruff, Paul; Freeman, Melvyn; Asante-Shongwe, Kwanele
2016-05-10
Cancer is emerging as a critical public health problem in South Africa (SA). Recognising the importance of research in addressing the cancer burden, the Ministerial Advisory Committee on the Prevention and Control of Cancer (MACC) research working group undertook a review of the current cancer research landscape in SA and related this to the cancer burden. Academic and research institutions in SA were contacted to provide information on the titles of all current and recently completed (2013/2014) cancer research projects. Three MACC research working group members used the project titles to independently classify the projects by type of research (basic, clinical and public health - projects could be classified in more than one category) and disease site. A more detailed classification of projects addressing the five most common cancers diagnosed in males and females in SA was conducted using an adapted Common Scientific Outline (CSO) categorisation. Information was available on 556 cancer research projects. Overall, 301 projects were classified as clinical, 254 as basic science and 71 as public health research. The most common cancers being researched were cancers of the breast (n=95 projects) and cervix (n=43), leukaemia (n=36), non-Hodgkin's lymphoma (n=35) and lung cancer (n=23). Classification of the five most common cancers in males and females in SA, using the adapted CSO categories, showed that the majority of projects related to treatment, with relatively few projects on prevention, survivorship and patient perspectives. Our findings established that there is a dearth of public health cancer research in SA.
Breuckmann, Frank; Hochadel, Matthias; Darius, Harald; Giannitsis, Evangelos; Münzel, Thomas; Maier, Lars S; Schmitt, Claus; Schumacher, Burghard; Heusch, Gerd; Voigtländer, Thomas; Mudra, Harald; Senges, Jochen
2015-08-01
We investigated the current management of unstable angina pectoris (UAP) in certified chest pain units (CPUs) in Germany and focused on adherence to European Society of Cardiology (ESC) guidelines in the timing of invasive strategies or the choice of conservative treatment options. More specifically, we analyzed differences in clinical outcome with respect to guideline adherence. Prospective data from 1400 UAP patients were collected. Analyses of high-risk criteria with an indication for invasive management and of 3-month clinical outcome data were performed. Guideline adherence was tested for a primarily conservative strategy as well as for percutaneous coronary intervention (PCI) within <24 and <72 h after admission. Overall, guideline-conforming management was performed in 38.2%. In UAP patients at risk, undertreatment caused by insufficient consideration of risk criteria was evident in 78%. Conversely, overtreatment in the absence of adequate risk markers occurred in 27%, whereas a guideline-conforming primarily conservative strategy was chosen in 73% of the low-risk patients. Overall, the 3-month rate of major adverse coronary and cerebrovascular events (MACCE) was low (3.6%). Nonetheless, guideline-conforming treatment was associated with significantly lower MACCE rates (1.6% vs. 4.0%, p<0.05). The data suggest inadequate adherence to ESC guidelines in nearly two thirds of the patients, particularly in those at high to intermediate risk with secondary risk factors, emphasizing the need for further attention to consistent risk profiling in the CPU and its certification process. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Verstraeten, Willem W.; Folkert Boersma, K.; Douros, John; Williams, Jason E.; Eskes, Henk H.; Delcloo, Andy
2017-04-01
High nitrogen oxide (NOX = NO + NO2) concentrations at the surface adversely affect humans and ecosystems and play a key role in tropospheric chemistry. Surface NOX emissions drive major processes in regional and global chemistry transport models (CTMs). NOX contributes to the formation of acid rain, acts as an aerosol precursor, and is an important trace gas for the formation of tropospheric ozone (O3). Via tropospheric O3, NOX indirectly affects the production of the hydroxyl radical, which controls the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. High NOX emissions are mainly observed in polluted regions and are produced by anthropogenic combustion from industrial, traffic and household activities, typically in large and densely populated urban areas. Accurate NOX inventories are essential, but state-of-the-art emission databases may vary substantially and carry high uncertainties, since reported emission factors may differ by an order of magnitude or more. To date, modelled NO2 concentrations and lifetimes have large associated uncertainties due to the highly non-linear small-scale chemistry that occurs in urban areas, uncertainties in reaction rate data, missing nitrogen (N) species and volatile organic compound (VOC) emissions, and incomplete knowledge of nitrogen oxide chemistry. Any overestimation of the chemical lifetime may mask missing NOX chemistry in current CTMs. By simultaneously estimating both the NO2 lifetime and concentrations, for instance by using the Exponentially Modified Gaussian (EMG) method, a better surface NOX emission flux estimate can be obtained. Here we evaluate whether the EMG methodology can reproduce the emissions input from the tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM.
We apply the EMG methodology to LOTOS-EUROS simulated tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (surface wind speeds > 3 m s-1). We then compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the LOTOS-EUROS model as input to simulate the NO2 columns. We also apply the EMG methodology to OMI (Ozone Monitoring Instrument) tropospheric NO2 column data, providing us with real-time observation-based estimates of midday NO2 lifetime and NOX emissions over 21 European cities in 2013. Results indicate that the top-down derived NOX emissions from LOTOS-EUROS (respectively OMI) are comparable with the MACC-III inventory, with an R2 of 0.99 (respectively R2 = 0.79). For St-Petersburg and Moscow the top-down NOX estimates from 2013 OMI data are biased low compared to the MACC-III inventory, which uses a 2011 NOX emissions update.
Impact of improved soil climatology and initialization on WRF-chem dust simulations over West Asia
NASA Astrophysics Data System (ADS)
Omid Nabavi, Seyed; Haimberger, Leopold; Samimi, Cyrus
2016-04-01
Meteorological forecast models such as WRF-chem are designed to forecast not only standard atmospheric parameters but also aerosols, particularly mineral dust concentrations. WRF-chem has therefore become an important tool for the prediction of dust storms in West Asia, where such storms have a considerable impact on living conditions. However, verification of forecasts against satellite data indicates only moderate skill in predicting such events. Earlier studies have already indicated that the erosion factor, land use classification, and soil moisture and temperature initializations play a critical role in the accuracy of WRF-chem dust simulations. In the standard setting, the erosion factor and land use classification are based on topographic variations and post-processed images of the advanced very high resolution radiometer (AVHRR) from the period April 1992-March 1993. Furthermore, WRF-chem is normally initialized with soil moisture and temperature from the Final Analysis (FNL) model on 1.0 x 1.0 degree grids. In this study, we changed these initial and boundary conditions so that they better represent current, changing environmental conditions. To do so, the land use (only the bare soil class) and the erosion factor were both modified using information from MODIS deep blue AOD (aerosol optical depth). In this method, bare soils are defined as cells where the relative frequency of dust occurrence (deep blue AOD > 0.5) exceeds one third of the days in a given month. Subsequently, the erosion factor, limited to the bare soil class, is determined by the monthly frequency of dust occurrence and ranges from 0.3 to 1. It is worth mentioning that 50 percent of the calculated erosion factor is assigned to the sand class, while the silt and clay classes each receive 25 percent. Soil moisture and temperature from the Global Land Data Assimilation System (GLDAS) were used to provide these initializations at a higher resolution (0.25 degrees) than the standard setting.
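The erosion-factor modification described above can be sketched as a small function. This is an illustrative reading of the stated rules (dust-day threshold, 0.3-1 range, 50/25/25 split); the function name and interface are assumptions.

```python
def erosion_factor(dust_days, days_in_month):
    """Monthly erosion factor from MODIS deep-blue AOD dust frequency.
    A cell is reclassified as bare soil when dust (deep-blue AOD > 0.5)
    occurs on more than one third of the days in the month; returns None
    otherwise. The factor is clipped to the range 0.3-1 and split
    50% sand / 25% silt / 25% clay, as stated in the abstract."""
    freq = dust_days / days_in_month
    if freq <= 1.0 / 3.0:
        return None  # not reclassified as bare soil
    factor = min(max(freq, 0.3), 1.0)
    return {"sand": 0.5 * factor, "silt": 0.25 * factor, "clay": 0.25 * factor}
```

For instance, 20 dust days in a 30-day month gives a factor of 2/3, of which half is assigned to sand.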
Modified and control simulations were conducted for the summers of 2008-2012 and verified against satellite data (MODIS deep blue AOD, TOMS Aerosol Index and MISR AOD at 550 nm) and two well-known atmospheric composition modelling systems (MACC and DREAM). All comparisons show a significant improvement in WRF-chem dust simulations after implementing the modifications. Compared to the control run, the modified run shows an average increase in Spearman correlation of 17-20 percentage points when compared with satellite data. Our runs with the modified WRF-chem even outperform the MACC and DREAM dust simulations for the region.
Off-pump compared to minimal extracorporeal circulation surgery in coronary artery bypass grafting.
Reuthebuch, Oliver; Koechlin, Luca; Gahl, Brigitta; Matt, Peter; Schurr, Ulrich; Grapow, Martin; Eckstein, Friedrich
2014-01-01
Coronary artery bypass grafting (CABG) using extracorporeal circulation (ECC) is still the gold standard. However, alternative techniques have been developed to avoid ECC and its potential adverse effects. These encompass minimal extracorporeal circulation (MECC) and off-pump coronary artery bypass grafting (OPCAB). However, the potential benefits of MECC compared with OPCAB are not yet clearly established. In this retrospective study we investigated the potential benefits of MECC and OPCAB in 697 patients undergoing CABG. Of these, 555 patients had been operated on with MECC and 142 off-pump. The primary endpoint was troponin T level as an indicator of myocardial damage. The study groups were generally not significantly different. However, patients undergoing OPCAB were significantly older (65.01 ± 9.5 years vs. 69.39 ± 9.5 years; p < 0.001), with a higher logistic EuroSCORE I (4.92% ± 6.5 vs. 5.88% ± 6.8; p = 0.017). Operating off pump significantly reduced the need for intra-operative blood products (0.7% vs. 8.6%; p < 0.001) and the length of stay in the intensive care unit (ICU) (2.04 ± 2.63 days vs. 2.76 ± 2.79 days; p < 0.001). For other blood values, no significant difference was found in the adjusted calculations. The combined secondary endpoint, major adverse cardiac or cerebrovascular events (MACCE), was also equal in both groups. CABG using MECC or OPCAB are two comparable techniques, with advantages for OPCAB regarding the reduced need for intra-operative blood products and a shorter length of stay in the ICU. However, serological values and the combined endpoint MACCE did not differ significantly between the groups.
NASA Astrophysics Data System (ADS)
Gilman, Jessica B.; Kuster, William C.; Goldan, Paul D.; Herndon, Scott C.; Zahniser, Mark S.; Tucker, Sara C.; Brewer, W. Alan; Lerner, Brian M.; Williams, Eric J.; Harley, Robert A.; Fehsenfeld, Fred C.; Warneke, Carsten; de Gouw, Joost A.
2009-04-01
An extensive set of volatile organic compounds (VOCs) and other gas-phase species were measured in situ aboard the NOAA R/V Ronald H. Brown as the ship sailed the Gulf of Mexico and the Houston and Galveston Bay (HGB) area as part of the Texas Air Quality Study (TexAQS)/Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS) conducted from July to September 2006. The magnitudes of the reactivities of CH4, CO, VOCs, and NO2 with the hydroxyl radical, OH, were determined in order to quantify the contributions of these compounds to potential ozone formation. The average total OH reactivity (ROH,TOTAL) increased from 1.01 s-1 in the central gulf to 10.1 s-1 in the HGB area as a result of the substantial increase in the contributions from VOCs and NO2. The increase in the measured concentrations of reactive VOCs in the HGB area compared to the central gulf was explained by the impact of industrial emissions, the regional distribution of VOCs, and the effects of local meteorology. By compensating for the effects of boundary layer mixing, the diurnal profiles of the OH reactivity were used to characterize the source signatures and relative magnitudes of biogenic, anthropogenic (urban + industrial), and oxygenated VOCs as a function of the time of day. The source of reactive oxygenated VOCs (e.g., formaldehyde) was determined to be almost entirely secondary production. The secondary formation of oxygenated VOCs, in addition to the continued emissions of reactive anthropogenic VOCs, served to sustain elevated levels of OH reactivity throughout the time of peak ozone production.
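The total OH reactivity quoted above is, by definition, the sum over measured species of each OH rate constant times that species' number concentration. A minimal sketch follows; the rate constants are approximate ~298 K literature values and the mixing ratios are hypothetical, not values from this study.

```python
# R_OH,TOTAL = sum_i k_(OH+Xi) * [Xi], giving an inverse lifetime in s^-1.
# Rate constants in cm^3 molecule^-1 s^-1 (approximate, ~298 K).
K_OH = {
    "CO": 2.4e-13,
    "CH4": 6.4e-15,
    "NO2": 1.1e-11,
    "isoprene": 1.0e-10,
}

AIR_NUMBER_DENSITY = 2.46e19  # molecules cm^-3 at ~1 atm, 298 K

def oh_reactivity(mixing_ratios_ppb):
    """Total OH reactivity (s^-1) from mixing ratios given in ppb."""
    total = 0.0
    for species, ppb in mixing_ratios_ppb.items():
        conc = ppb * 1e-9 * AIR_NUMBER_DENSITY  # molecules cm^-3
        total += K_OH[species] * conc
    return total
```

With these illustrative values, 100 ppb of CO alone contributes roughly 0.6 s-1, showing how a few reactive species dominate the totals of order 1-10 s-1 reported above.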
Barthélémy, Olivier; Degrell, Philippe; Berman, Emmanuel; Kerneis, Mathieu; Petroni, Thibaut; Silvain, Johanne; Payot, Laurent; Choussat, Remi; Collet, Jean-Philippe; Helft, Gerard; Montalescot, Gilles; Le Feuvre, Claude
2015-01-01
Whether outcomes differ between women and men after percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI) remains controversial. Our aim was to compare 1-year outcomes after primary PCI in women and men with STEMI, matched for age and diabetes. Consecutive women with STEMI of <24 hours' duration referred (August 2007 to January 2011) for primary PCI were compared with men matched for age and diabetes. Rates of all-cause mortality, target vessel revascularization (TVR) and major adverse cardiovascular and cerebrovascular events (MACCE) (death/myocardial infarction/stroke) were assessed at 1 year. Among 775 consecutive patients, 182 (23.5%) women were compared with 182 matched men. Mean age was 69 ± 15 years and 18% had diabetes. Patient characteristics were similar, except for lower creatinine clearance (73 ± 41 vs 82 ± 38 μmol/L; P = 0.041), more cardiogenic shock (14.8% vs 6.6%; P = 0.017) and less radial PCI (81.3% vs 90.1%; P = 0.024) in women. Rates of 1-year death (22.7% vs 18.1%), TVR (8.3% vs 6.0%) and MACCE (24.3% vs 20.9%) were not statistically different in women (P > 0.05 for all). After exclusion of patients with shock (10.7%) and out-of-hospital cardiac arrest (6.6%), death rates were even more similar (11.3% vs 11.8%; P = 0.10). Female sex was not independently associated with death (odds ratio 1.01, 95% confidence interval 0.55-1.87; P = 0.97). In our consecutive, unselected patient population, women had 1-year outcomes similar to those of men matched for age and diabetes after contemporary primary PCI for STEMI, despite a higher baseline risk profile. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Marine aerosol distribution and variability over the pristine Southern Indian Ocean
NASA Astrophysics Data System (ADS)
Mallet, Paul-Étienne; Pujol, Olivier; Brioude, Jérôme; Evan, Stéphanie; Jensen, Andrew
2018-06-01
This paper presents an 8-year (2005-2012 inclusive) study of the marine aerosol distribution and variability over the Southern Indian Ocean, specifically in the area {10°S-40°S; 50°E-110°E}, which has been identified as one of the most pristine regions of the globe. A large dataset has been used, consisting of satellite data (POLDER, CALIOP), AERONET measurements at Saint-Denis (French Réunion Island) and model reanalysis (MACC). In spite of a positive bias of about 0.05 between the AOD (aerosol optical depth) given by POLDER and MACC on the one hand and the AOD measured by AERONET on the other, consistent results for aerosol distribution and variability over the area considered have been obtained. First, aerosols are mainly confined below 2 km asl (above sea level) and are dominated by sea salt, especially in the center of the area of interest, with AOD ≤ 0.1. This zone is the most pristine and is associated with the position of the Mascarene anticyclone. There, the direct radiative effect is assessed at around -9 W m-2 at the top of the atmosphere, and the probability density functions of the AODs are leptokurtic lognormal functions without any significant seasonal variation. It is also suggested that the Madden-Julian oscillation impacts sea salt emissions in the northern part of the area considered by modifying the state of the ocean surface. Finally, this area is surrounded to the northeast and the southwest by seasonal Australian and South African intrusions (AOD > 0.1); throughout the year, the ITCZ seems to limit continental contamination from Asia. Due to the long period of time considered (almost a decade), this paper completes and strengthens the results of studies based on observations from previous specific field campaigns.
NASA Astrophysics Data System (ADS)
Topping, David; Allan, James; Alfarra, Rami; Aumont, Bernard
2017-04-01
Our ability to model the chemical and thermodynamic processes that lead to secondary organic aerosol (SOA) formation is thought to be hampered by the complexity of the system. While fundamental models are now available that can simulate the tens of thousands of reactions thought to take place, validation against experiments is highly challenging. Techniques capable of identifying individual molecules, such as chromatography, are generally only able to quantify a subset of the material present, making them unsuitable for a carbon budget analysis. Integrative analytical methods such as the Aerosol Mass Spectrometer (AMS) are capable of quantifying all mass, but because of their inability to isolate individual molecules, comparisons have been limited to simple data products such as total organic mass and O:C ratio. More detailed comparisons could be made if more of the mass spectral information could be used, but because a discrete inversion of AMS data is not possible, this requires a method of predicting mass spectra from molecular composition. In this proof-of-concept study, we evaluate the ability of supervised machine-learning methods to predict electron impact ionisation (EI) mass spectra for the AMS. Supervised Training Regression for the Arbitrary Prediction of Spectra (STRAPS) is not built from first principles. Instead, a methodology is constructed whereby the presence of specific mass-to-charge ratio (m/z) channels is fitted as a function of molecular structure, before the relative peak height for each channel is similarly fitted using a range of regression methods. The widely used AMS mass spectral database is used as a basis for this, using unit-mass-resolution spectra of laboratory standards. Key to the fitting process is the choice of structural information, or molecular fingerprint.
Initial results suggest the generic public MACCS fingerprints provide the most accurate trained model when combined with either decision trees or random forests, with median cosine angles of 0.94-0.97 between modelled and measured spectra. There is some sensitivity to the choice of fingerprint, but most of the sensitivity lies in the choice of regression technique. Support vector machines perform the worst, with median values of 0.78-0.85 and lower ranges approaching 0.4 depending on the fingerprint used. More detailed analysis of modelled versus measured mass spectra demonstrates important composition-dependent sensitivities on a compound-by-compound basis. This is further demonstrated when we apply the trained methods to a model α-pinene SOA system, using output from the GECKO-A model. This shows that the generic fingerprint referred to as 'FP4' and one designed for vapour pressure predictions ('Nannoolal') give plausible mass spectra, whilst the MACCS keys perform poorly in this application, demonstrating the need to evaluate model performance against other SOA systems rather than only existing laboratory databases of single compounds.
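The cosine-angle metric used above to score modelled against measured spectra can be computed as follows. This is a minimal sketch; representing a unit-mass-resolution spectrum as an m/z-to-intensity dictionary is an assumption for illustration.

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine similarity between two unit-mass-resolution spectra,
    each given as a {m/z: intensity} dict. Returns 1.0 for identical
    (scaled) spectra and 0.0 for spectra with no shared channels."""
    channels = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in channels)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

A median of this quantity over many compounds is what the 0.94-0.97 (tree/forest) and 0.78-0.85 (SVM) figures above refer to.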
NASA Astrophysics Data System (ADS)
Chubarova, Nataly; Poliukhov, Alexei; Shatunova, Marina; Rivin, Gdali; Becker, Ralf; Muskatel, Harel; Blahak, Ulrich; Kinne, Stefan; Tarasova, Tatiana
2017-04-01
We use the operational Russian COSMO-Ru weather forecast model (Ritter and Geleyn, 1992) with different aerosol input data to evaluate the radiative and temperature effects of aerosol in different atmospheric conditions. Various aerosol datasets were utilized, including the Tegen climatology (Tegen et al., 1997), the updated MACv2 climatology (Kinne et al., 2013), the Tanre climatology (Tanre et al., 1984) and the MACC data (Morcrette et al., 2009). For clear-sky conditions we compare the radiative effects from the COSMO-Ru model over Moscow (55.7N, 37.5E) and the Lindenberg/Falkenberg sites (52.2N, 14.1E) with results obtained using long-term aerosol measurements. Additional tests of the COSMO radiative transfer (RT) code were performed against the (FC05)-SW model (Tarasova and Fomin, 2007); an overestimation of about 5-8% by the COSMO RT code was found. The study of the aerosol effect on temperature at 2 meters revealed a sensitivity of about 0.7-1.1 degrees C per 100 W/m2 change in shortwave net radiation due to aerosol variations. We also discuss the radiative impact of urban aerosol properties according to long-term AERONET measurements in Moscow and a Moscow suburb, as well as long-term aerosol trends over Moscow from the measurements and the MACv2 dataset. References: Kinne, S., O'Donnel, D., Stier, P., et al., J. Adv. Model. Earth Syst., 5, 704-740, 2013. Morcrette, J.-J., Boucher, O., Jones, L., et al., J. Geophys. Res., 114, D06206, doi:10.1029/2008JD011235, 2009. Ritter, B. and Geleyn, J., Monthly Weather Review, 120, 303-325, 1992. Tanre, D., Geleyn, J., and Slingo, J., A. Deepak Publ., Hampton, Virginia, 133-177, 1984. Tarasova, T., and Fomin, B., Journal of Atmospheric and Oceanic Technology, 24, 1157-1162, 2007. Tegen, I., Hollrig, P., Chin, M., et al., Journal of Geophysical Research-Atmospheres, 102, 23895-23915, 1997.
NASA Astrophysics Data System (ADS)
Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas
2014-05-01
The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and the fire event will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters such as 2m Temperature and surface winds over the region of interest will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.
A Morpholino-based screen to identify novel genes involved in craniofacial morphogenesis
Melvin, Vida Senkus; Feng, Weiguo; Hernandez-Lagunas, Laura; Artinger, Kristin Bruk; Williams, Trevor
2014-01-01
BACKGROUND The regulatory mechanisms underpinning facial development are conserved between diverse species. Therefore, results from model systems provide insight into the genetic causes of human craniofacial defects. Previously, we generated a comprehensive dataset examining gene expression during development and fusion of the mouse facial prominences. Here, we used this resource to identify genes that have dynamic expression patterns in the facial prominences, but for which only limited information exists concerning developmental function. RESULTS This set of ~80 genes was used for a high throughput functional analysis in the zebrafish system using Morpholino gene knockdown technology. This screen revealed three classes of cranial cartilage phenotypes depending upon whether knockdown of the gene affected the neurocranium, viscerocranium, or both. The targeted genes that produced consistent phenotypes encoded proteins linked to transcription (meis1, meis2a, tshz2, vgll4l), signaling (pkdcc, vlk, macc1, wu:fb16h09), and extracellular matrix function (smoc2). The majority of these phenotypes were not altered by reduction of p53 levels, demonstrating that both p53 dependent and independent mechanisms were involved in the craniofacial abnormalities. CONCLUSIONS This Morpholino-based screen highlights new genes involved in development of the zebrafish craniofacial skeleton with wider relevance to formation of the face in other species, particularly mouse and human. PMID:23559552
Kereiakes, Dean J; Yeh, Robert W; Massaro, Joseph M; Driscoll-Shempp, Priscilla; Cutlip, Donald E; Steg, P Gabriel; Gershlick, Anthony H; Darius, Harald; Meredith, Ian T; Ormiston, John; Tanguay, Jean-François; Windecker, Stephan; Garratt, Kirk N; Kandzari, David E; Lee, David P; Simon, Daniel I; Iancu, Adrian Corneliu; Trebacz, Jaroslaw; Mauri, Laura
2015-10-01
This study sought to compare rates of stent thrombosis and major adverse cardiac and cerebrovascular events (MACCE) (a composite of death, myocardial infarction, or stroke) after coronary stenting with drug-eluting stents (DES) versus bare-metal stents (BMS) in patients who participated in the DAPT (Dual Antiplatelet Therapy) study, an international multicenter randomized trial comparing 30 versus 12 months of dual antiplatelet therapy in subjects undergoing coronary stenting with either DES or BMS. Despite the antirestenotic efficacy of coronary DES compared with BMS, the relative risk of stent thrombosis and adverse cardiovascular events is unclear. Many clinicians perceive BMS to be associated with fewer adverse ischemic events and to require shorter-duration dual antiplatelet therapy than DES. A prospective propensity-matched analysis of subjects enrolled in a randomized trial of dual antiplatelet therapy duration was performed. DES- and BMS-treated subjects were propensity-score matched in a many-to-one fashion. The study design was observational for all subjects 0 to 12 months following stenting. A subset of eligible subjects without major ischemic or bleeding events was randomized at 12 months to continued thienopyridine versus placebo; all subjects were followed through 33 months. Among 10,026 propensity-matched subjects, DES-treated subjects (n = 8,308) had a lower rate of stent thrombosis through 33 months compared with BMS-treated subjects (n = 1,718; 1.7% vs. 2.6%; weighted risk difference -1.1%, p = 0.01) and a noninferior rate of MACCE (11.4% vs. 13.2%, respectively; weighted risk difference -1.8%, p = 0.053, noninferiority p < 0.001). DES-treated subjects have long-term rates of stent thrombosis that are lower than those of BMS-treated subjects. (The Dual Antiplatelet Therapy Study [DAPT study]; NCT00977938). Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
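Propensity-score matching of the kind mentioned above can be illustrated with a toy greedy 1:1 nearest-neighbour matcher. This is a sketch of the general technique only, not the DAPT study's actual many-to-one procedure; the caliper value and interface are assumptions.

```python
def greedy_propensity_match(treated, controls, caliper=0.05):
    """Toy greedy 1:1 nearest-neighbour matching on propensity scores.
    treated, controls: lists of (id, propensity_score) tuples.
    Each control is used at most once; a pair is kept only if the
    score difference is within the caliper."""
    pool = dict(controls)
    pairs = []
    for tid, t_score in sorted(treated, key=lambda item: item[1]):
        if not pool:
            break
        # nearest remaining control by absolute score distance
        cid = min(pool, key=lambda c: abs(pool[c] - t_score))
        if abs(pool[cid] - t_score) <= caliper:
            pairs.append((tid, cid))
            del pool[cid]
    return pairs
```

Matching on the score rather than on all covariates is what lets an observational comparison (DES vs. BMS here) approximate a balanced design.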
Scudiero, Fernando; Zocchi, Chiara; De Vito, Elena; Tarantini, Giuseppe; Marcucci, Rossella; Valenti, Renato; Migliorini, Angela; Antoniucci, David; Marchionni, Niccolò; Parodi, Guido
2018-07-01
The CHA2DS2-VASc score predicts stroke risk in patients with atrial fibrillation, but has recently been reported to have a prognostic role even in patients with acute coronary syndrome (ACS). We sought to assess the ability of the CHA2DS2-VASc score to predict the severity of coronary artery disease, high residual platelet reactivity and long-term outcomes in patients with ACS. Overall, 1729 consecutive patients with ACS undergoing invasive management were included in this prospective registry. We assessed platelet reactivity via light transmittance aggregometry after clopidogrel loading. Patients were divided according to the CHA2DS2-VASc score: group A = 0, B = 1, C = 2, D = 3, E = 4 and F ≥ 5. Patients with a higher CHA2DS2-VASc score were more likely to have multivessel CAD (37%, 47%, 55%, 62%, 67% and 75% in groups A, B, C, D, E and F; p < 0.001); moreover, the CHA2DS2-VASc score correlated linearly with residual platelet reactivity (R = 0.77; p < 0.001). At long-term follow-up, estimated adverse event rates (MACCE: cardiac death, MI, stroke or any urgent coronary revascularization) were 3%, 8%, 10%, 14%, 19% and 24% in groups A, B, C, D, E and F; p < 0.001. Multivariable analysis demonstrated CHA2DS2-VASc to be an independent predictor of the severity of coronary artery disease, of high residual platelet reactivity and of MACCE. In a cohort of patients with ACS, the CHA2DS2-VASc score correlated with coronary disease severity and residual platelet reactivity, and therefore predicted the risk of long-term adverse events. Copyright © 2018 Elsevier B.V. All rights reserved.
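The CHA2DS2-VASc score referenced above is a standard clinical sum over risk factors. A minimal sketch of its usual components (function name and argument order are assumptions for illustration):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                 vascular_disease, female):
    """Standard CHA2DS2-VASc score (range 0-9):
    congestive heart failure (1), hypertension (1), age >= 75 (2),
    diabetes (1), prior stroke/TIA (2), vascular disease (1),
    age 65-74 (1), female sex category (1)."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score
```

The registry's groups A-F above simply bin patients by this integer (0, 1, 2, 3, 4, ≥5).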
Kunihara, Takashi; Wendler, Olaf; Heinrich, Kerstin; Nomura, Ryota; Schäfers, Hans-Joachim
2018-06-20
The optimal choice of conduit and configuration for coronary artery bypass grafting (CABG) in diabetic patients remains somewhat controversial, even though arterial grafts have been proposed as superior. We attempted to clarify the effect on long-term outcome of complete arterial revascularization using the left internal thoracic artery (LITA) and the radial artery (RA) alone in a "T-graft" configuration. From 1994 to 2001, 104 diabetic patients with triple-vessel disease underwent CABG using LITA/RA "T-grafts" (Group-A). Using propensity-score matching, 104 patients with comparable preoperative characteristics who underwent CABG using the LITA and one sequential vein graft were identified (Group-V). Freedom from all-cause death, cardiac death, major adverse cardiac events (MACE), major adverse cardiac (and cerebral) events (MACCE), and repeat revascularization at 10 years in Group-A was 60 ± 5%, 67 ± 5%, 48 ± 5%, 37 ± 5%, and 81 ± 4%, respectively, compared with 58 ± 5%, 70 ± 5%, 49 ± 5%, 39 ± 5%, and 93 ± 3% in Group-V. There were no significant differences in these endpoints between groups regardless of insulin dependency. A multivariable Cox proportional hazards model identified age, left ventricular ejection fraction, renal failure, and hyperlipidemia as independent predictors of all-cause death; age and left ventricular ejection fraction for cardiac death; sinus rhythm for both MACE and MACCE; and prior percutaneous coronary intervention for repeat revascularization. In our experience, complete arterial revascularization using LITA/RA "T-grafts" does not provide superior long-term clinical benefits for diabetic patients compared with a combination of the LITA and a sequential vein graft. Georg Thieme Verlag KG Stuttgart · New York.
Kappetein, Arie Pieter; Feldman, Ted E; Mack, Michael J; Morice, Marie-Claude; Holmes, David R; Ståhle, Elisabeth; Dawkins, Keith D; Mohr, Friedrich W; Serruys, Patrick W; Colombo, Antonio
2011-09-01
Long-term randomized comparisons of percutaneous coronary intervention (PCI) with coronary artery bypass grafting (CABG) in patients with left main coronary (LM) disease and/or three-vessel disease (3VD) have been limited. This analysis compares 3-year outcomes in LM and/or 3VD patients treated with CABG or PCI with TAXUS Express stents. SYNTAX is an 85-centre randomized clinical trial (n = 1800). Prospectively screened, consecutive LM and/or 3VD patients were randomized if amenable to equivalent revascularization using either technique; if not, they were entered into a registry. Patients in the randomized cohort will continue to be followed for 5 years. At 3 years, major adverse cardiac and cerebrovascular events [MACCE: death, stroke, myocardial infarction (MI), and repeat revascularization; CABG 20.2% vs. PCI 28.0%, P < 0.001], repeat revascularization (10.7 vs. 19.7%, P < 0.001), and MI (3.6 vs. 7.1%, P = 0.002) were elevated in the PCI arm. Rates of the composite safety endpoint (death/stroke/MI 12.0 vs. 14.1%, P = 0.21) and stroke alone (3.4 vs. 2.0%, P = 0.07) were not significantly different between treatment groups. MACCE rates were not significantly different between arms in the LM subgroup (22.3 vs. 26.8%, P = 0.20) but were higher with PCI in the 3VD subgroup (18.8 vs. 28.8%, P < 0.001). At 3 years, MACCE was significantly higher in PCI- than in CABG-treated patients. In patients with less complex disease (low SYNTAX scores for 3VD, or low/intermediate terciles for LM patients), PCI is an acceptable revascularization strategy, although longer follow-up is needed to evaluate these two revascularization strategies.
Petricevic, Mate; Kopjar, Tomislav; Gasparovic, Hrvoje; Milicic, Davor; Svetina, Lucija; Zdilar, Boris; Boban, Marko; Mihaljevic, Martina Zrno; Biocina, Bojan
2015-05-01
Individual variability in the response to aspirin has been established by various platelet function assays; however, the clinical relevance of aspirin resistance (AR) in patients undergoing coronary artery bypass grafting (CABG) remains to be evaluated. Our working group conducted a randomized controlled trial (NCT01159639) with the aim of assessing the impact of dual antiplatelet therapy (APT) on outcomes among patients with AR following CABG. Patients who were aspirin resistant on the fourth postoperative day (POD 4) were randomly assigned to receive either dual APT with clopidogrel (75 mg) plus aspirin (300 mg) (intervention arm) or monotherapy with aspirin (300 mg) (control arm). This exploratory analysis compares clinical outcomes between aspirin-resistant patients allocated to the control arm and patients who had an adequate platelet inhibitory response to aspirin at POD 4. Both groups were treated with 300 mg of aspirin per day following surgery. We sought to evaluate the impact of early postoperative AR on outcomes among patients following CABG. The exploratory analysis included a total of 325 patients: 215 patients with an adequate response to aspirin and 110 patients with AR allocated to aspirin monotherapy following the randomization protocol. The primary efficacy endpoint (MACCE: major adverse cardiac and cerebrovascular events) occurred in 10% and 6% of patients with AR and with an adequate aspirin response, respectively (p = 0.27). No significant differences were observed in the occurrence of bleeding events. Subgroup analysis of the primary endpoint revealed that aspirin-resistant patients with BMI > 30 kg/m(2) tended to have a higher occurrence of MACCE: 18% versus 5% (relative risk 0.44 [95% CI 0.16-1.16]; p = 0.05). This exploratory analysis did not reveal a significant impact of aspirin resistance on outcomes among patients undergoing CABG. Further sufficiently powered studies are needed to evaluate the clinical relevance of AR in patients undergoing CABG.
Baschin, M; Selleng, S; Hummel, A; Diedrich, S; Schroeder, H W; Kohlmann, T; Westphal, A; Greinacher, A; Thiele, T
2018-04-01
Essentials: An increasing number of patients requiring surgery receive antiplatelet therapy (APT). We analyzed 181 patients receiving presurgery platelet transfusions to reverse APT. No coronary thrombosis occurred after platelet transfusion. This justifies a prospective trial to test preoperative platelet transfusions to reverse APT. Background: Patients receiving antiplatelet therapy (APT) have an increased risk of perioperative bleeding and cardiac adverse events (CAE). Preoperative platelet transfusions may reduce the bleeding risk but may also increase the risk of CAE, particularly coronary thrombosis in patients after recent stent implantation. Objectives: To analyze the incidence of perioperative CAE and bleeding in patients undergoing non-cardiac surgery using a standardized management of transfusing two platelet concentrates preoperatively and restarting APT within 24-72 h after surgery. Methods: A cohort of consecutive patients on APT treated with two platelet concentrates before non-cardiac surgery between January 2012 and December 2014 was retrospectively identified. Patients were stratified by their risk of major adverse cardiac and cerebrovascular events (MACCE). The primary objective was the incidence of CAE (myocardial infarction, acute heart failure and cardiac troponin T increase). Secondary objectives were the incidences of other thromboembolic events, bleeding, transfusions and mortality. Results: Among 181 patients, 88 received aspirin, 21 clopidogrel and 72 dual APT. MACCE risk was high in 63, moderate in 103 and low in 15 patients; 67 had cardiac stents. Ten patients (5.5%; 95% CI, 3.0-9.9%) developed a CAE (three myocardial infarctions, four cases of cardiac failure and three troponin T increases). None was caused by coronary thrombosis. Surgery-related bleeding occurred in 22 patients (12.2%; 95% CI, 8.2-17.7%), making 12 re-interventions necessary (6.6%; 95% CI, 3.8-11.2%).
Conclusion Preoperative platelet transfusions and early restart of APT allowed urgent surgery and did not cause coronary thromboses, but non-thrombotic CAEs and re-bleeding occurred. Randomized trials are warranted to test platelet transfusion against other management strategies. © 2018 International Society on Thrombosis and Haemostasis.
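The binomial confidence intervals quoted in the results above are consistent with the Wilson score interval, which can be computed as follows (a sketch; the abstract does not state which interval method was used, so treating it as Wilson is an inference that happens to reproduce the quoted bounds).

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n.
    Returns (lower, upper) as fractions."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = p + z * z / (2.0 * n)
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return (centre - half) / denom, (centre + half) / denom
```

For example, 10 CAEs among 181 patients gives an interval of about 3.0-9.9%, and 22 bleeding events gives about 8.2-17.7%, matching the figures reported above.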
Wang, Heyang; Liang, Zhenyang; Li, Yi; Li, Bin; Liu, Junming; Hong, Xueyi; Lu, Xin; Wu, Jiansheng; Zhao, Wei; Liu, Qiang; An, Jian; Li, Linfeng; Pu, Fanli; Ming, Qiang; Han, Yaling
2017-06-01
This study aimed to evaluate the effect of prolonged full-dose bivalirudin infusion in a real-world population with ST-elevation myocardial infarction (STEMI). Subgroup data as well as meta-analyses of randomized clinical trials have shown the potency of postprocedural full-dose infusion (1.75 mg/kg/h) of bivalirudin in attenuating acute stent thrombosis (ST) after primary percutaneous coronary intervention (PCI). In this multicenter retrospective observational study, 2047 consecutive STEMI patients treated with bivalirudin during primary PCI were enrolled at 65 Chinese centers between July 2013 and May 2016. The primary outcome was acute ST, defined as ARC definite/probable ST within 24 hours of the index procedure; secondary endpoints included total ST, major adverse cardiac or cerebral events (MACCE, defined as death, reinfarction, stroke, and target vessel revascularization), and any bleeding at 30 days. Among the 2047 STEMI patients, 1123 (54.9%) were treated with postprocedural full-dose bivalirudin infusion (median 120 minutes) while the other 924 (45.1%) received low-dose (0.25 mg/kg/h) or no postprocedural infusion. A total of three acute STs (0.3%) occurred in STEMI patients with no or low-dose prolonged infusion of bivalirudin, whereas none was observed in those treated with post-PCI full-dose infusion (0.3% vs 0.0%, P=.092). Outcomes for MACCE (2.1% vs 2.7%, P=.402) and total bleeding (2.1% vs 1.4%, P=.217) at 30 days showed no significant difference between the two groups, and no subacute ST was observed. Post-PCI full-dose bivalirudin infusion is safe and shows a trend toward protection against acute ST in STEMI patients undergoing primary PCI in real-world settings. © 2017 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Nabat, P.; Somot, S.; Mallet, M.; Chiapello, I.; Morcrette, J. J.; Solomon, F.; Szopa, S.; Dulac, F.; Collins, W.; Ghan, S.
2013-01-01
Since the 1980s, several spaceborne sensors have been used to retrieve the aerosol optical depth (AOD) over the Mediterranean region. In parallel, AOD climatologies from different numerical model simulations are now also available, making it possible to distinguish the contribution of several aerosol types to the total AOD. In this work, we perform a comparative analysis of this unique multiyear database in terms of total AOD and of its apportionment among the five main aerosol types (soil dust, sea salt, sulfate, black and organic carbon). We use 9 different satellite-derived monthly AOD products: NOAA/AVHRR, SeaWiFS (2 products), TERRA/MISR, TERRA/MODIS, AQUA/MODIS, ENVISAT/MERIS, PARASOL/POLDER and MSG/SEVIRI, as well as 3 more historical datasets: NIMBUS7/CZCS, TOMS (onboard NIMBUS7 and Earth-Probe) and METEOSAT/MVIRI. Monthly model datasets include the aerosol climatology from Tegen et al. (1997), the climate-chemistry models LMDz-OR-INCA and RegCM-4, the multi-model mean from the ACCMIP exercise, and the reanalyses GEMS and MACC. Ground-based Level-2 AERONET AOD observations from 47 stations around the basin are used to evaluate the model and satellite data. The MODIS sensor (on AQUA and TERRA) has the best average AOD scores over this region, capturing the relevant spatio-temporal variability and highlighting high dust loads over Northern Africa and the sea (spring and summer), and sulfate aerosols over continental Europe (summer). The comparison also shows the limitations of certain datasets (especially the MERIS and SeaWiFS standard products). Models reproduce the main patterns of the AOD variability over the basin. The MACC reanalysis is the closest to AERONET data, but appears to underestimate dust over Northern Africa, where RegCM-4 is found closer to MODIS thanks to its interactive scheme for dust emissions. The vertical dimension is also investigated using the CALIOP instrument. 
This study confirms the difference in vertical distribution between dust aerosols, which show a large vertical spread, and other continental and marine aerosols, which are confined to the boundary layer. From this compilation, we propose a 4-D blended product from model and satellite data, consisting of monthly time series of the 3-D aerosol distribution at a 50 km horizontal resolution over the Euro-Mediterranean marine and continental region for the 2003-2009 period. The product is based on the total AOD from AQUA/MODIS, apportioned into sulfates, black and organic carbon from the MACC reanalysis, and into dust and sea-salt aerosols from RegCM-4 simulations, which are distributed vertically based on the CALIOP climatology. We extend the 2003-2009 reconstruction back to 1979 using the 2003-2009 average and applying the decreasing trend in sulfate aerosols from LMDz-OR-INCA, whose AOD trends over Europe and the Mediterranean are median among the ACCMIP models. Finally, optical properties of the different aerosol types in this region are proposed from Mie calculations so that this reconstruction can be included in regional climate models for aerosol radiative forcing and aerosol-climate studies.
Montorsi, Piero; Galli, Stefano; Ravagnani, Paolo M; Tresoldi, Simone; Teruzzi, Giovanni; Caputi, Luigi; Trabattoni, Daniela; Fabbiocchi, Franco; Calligaris, Giuseppe; Grancini, Luca; Lualdi, Alessandro; de Martini, Stefano; Bartorelli, Antonio L
2016-08-01
To compare the feasibility and safety of proximal cerebral protection with distal filter protection during carotid artery stenting (CAS) via a transbrachial (TB) or transradial (TR) approach. Among 856 patients who underwent CAS between January 2007 and July 2015, 214 (25%) patients (mean age 72±8 years; 154 men) had the procedure via a TR (n=154) or TB (n=60) approach with either Mo.MA proximal protection (n=61) or distal filter protection (n=153). The Mo.MA group (mean age 73±7 years; 54 men) had significantly more men and more severe stenosis than the filter group (mean age 71±8 years; 100 men). Stent type and CAS technique were left to operator discretion. Heparin and a dedicated closure device or bivalirudin and manual compression were used in TR and TB accesses, respectively. Technical and procedural success, crossover to the femoral artery, 30-day major adverse cardiovascular/cerebrovascular events (MACCE; death, all strokes, and myocardial infarction), vascular complications, and radiation exposure were compared between groups. Crossover to a femoral approach was required in 1/61 (1.6%) Mo.MA patients vs 11/153 (7.1%) filter patients, mainly due to technical difficulty in engaging the target vessel. Five Mo.MA patients developed acute intolerance to proximal occlusion; 4 were successfully shifted to filter protection. A TR patient was shifted to a filter because the Mo.MA system was too short. CAS was technically successful in the remaining 55 (90%) Mo.MA patients and 142 (93%) filter patients. The MACCE rate was 0% in the Mo.MA group and 2.8% in the filter group (p=0.18). Radiation exposure was similar between groups. Major vascular complications occurred in 1/61 (1.6%) and 3/153 (1.96%) patients in the Mo.MA and filter groups, respectively (p=0.18), and were confined to the TB approach in the early part of the learning curve. 
Chronic radial artery occlusion was detected in 2/30 (6.6%) Mo.MA patients by Doppler ultrasound and in 4/124 (3.2%) filter patients by clinical assessment (p=0.25) at 8.1±7.5-month follow-up. CAS with proximal protection via a TR or TB approach is a feasible, safe, and effective technique with a low rate of vascular complications. © The Author(s) 2016.
Ultraviolet observations of the symbiotic star AS 296
NASA Technical Reports Server (NTRS)
Gutierrez-Moreno, A.; Moreno, H.; Feibelman, W. A.
1992-01-01
AS 296 is a well-known S-type symbiotic star which underwent an optical outburst during 1988. In this paper, UV data based on IUE observations obtained during both the quiescent and outburst stages are presented and discussed, and correlated with observations made in the optical region. It is concluded that the object is a symbiotic nova, in which the outburst is due to a thermonuclear runaway produced in the hydrogen-burning shell of a white dwarf of about 0.5 solar masses, accreting from the late-type giant at a rate M(acc) of about 9.7 × 10^-9 solar masses per year. It is not possible to determine from the observations whether the hydrogen flash is degenerate or nondegenerate.
NASA Astrophysics Data System (ADS)
Fuchs, Julia; Cermak, Jan; Andersen, Hendrik
2017-04-01
This study aims at untangling the impacts of external dynamics and local conditions on cloud properties in the Southeast Atlantic (SEA) by combining satellite and reanalysis data using multivariate statistics. The understanding of clouds and their determinants at different scales is important for constraining the Earth's radiative budget, and thus prominent in climate-system research. In this study, SEA stratocumulus cloud properties are observed not only as the result of local environmental conditions but also as affected by external dynamics and spatial origins of air masses entering the study area. In order to assess to what extent cloud properties are impacted by aerosol concentration, air mass history, and meteorology, a multivariate approach is conducted using satellite observations of aerosol and cloud properties (MODIS, SEVIRI), information on aerosol species composition (MACC) and meteorological context (ERA-Interim reanalysis). To account for the often-neglected but important role of air mass origin, information on air mass history based on HYSPLIT modeling is included in the statistical model. This multivariate approach is intended to lead to a better understanding of the physical processes behind observed stratocumulus cloud properties in the SEA.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding as the inner code of concatenated coding systems. A new convolutional code, the unit-memory code, was discovered; its byte-oriented structure makes it well suited for use as the inner code. Simulations of sequential decoding on the deep-space channel were carried out to compare directly the various convolutional codes proposed for use in deep-space systems.
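To make the machinery concrete, here is a minimal sketch of a rate-1/2 convolutional encoder of the kind used as the inner code above; the (7, 5) octal generator pair is a common textbook choice, not the specific Goddard standard or unit-memory code studied in the report.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k.
    The (7, 5) octal generator pair is a standard textbook choice,
    not the specific code adopted for the Goddard standards."""
    state, out = 0, []
    for b in bits:
        # shift the new bit into the k-bit register
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)  # first parity stream
        out.append(bin(state & g2).count("1") % 2)  # second parity stream
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two output bits, so the encoder doubles the stream length; a Viterbi decoder would later exploit the memory in `state` to correct channel errors.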
NASA Astrophysics Data System (ADS)
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean-François; De Smedt, Isabelle; Van Roozendael, Michel
2016-04-01
Isoprene is one of the most abundantly emitted hydrocarbons in the atmosphere, with global annual emissions estimated at about 500 Tg, but with large uncertainties (Arneth et al., 2011). Here we use the source inversion approach to derive top-down biogenic isoprene emission estimates for the period 2005-2014, constrained by observations of formaldehyde, a high-yield intermediate in the atmospheric oxidation of isoprene. Formaldehyde columns retrieved from the Ozone Monitoring Instrument (OMI) are used to constrain the IMAGESv2 global chemistry-transport model and its adjoint code (Stavrakou et al., 2009). The MEGAN-MOHYCAN isoprene emissions (Stavrakou et al., 2014) are used as the bottom-up inventory in the model. The inversions are performed separately for each year of the study period, and monthly emissions are derived for every model grid cell. The inversion results are compared to independent isoprene emissions from GUESS-ES (Arneth et al., 2007) and MEGAN-MACC (Sinderalova et al., 2014) and to top-down fluxes based on GOME-2 formaldehyde columns (Bauwens et al., 2014; Stavrakou et al., 2015). The mean global annual OMI-based isoprene flux for the period 2005-2014 is estimated at 270 Tg, with small interannual variation. This estimate is on average 20% lower than the a priori inventory, but strong emission updates are inferred on the regional scale. The OMI-based emissions are substantially lower than the MEGAN-MACC and GUESS-ES inventories, but agree well with the isoprene fluxes constrained by GOME-2 formaldehyde columns. Strong emission reductions are derived over tropical regions. The seasonal pattern of isoprene emissions is generally well preserved after inversion and relatively consistent with other inventories, lending confidence to the MEGAN parameterization of the a priori inventory. 
In boreal regions the isoprene emission trend is positive and reinforced after inversion, whereas the inversion suggests negative trends in the rainforests of Equatorial Africa and South America. The top-down isoprene fluxes are available at a resolution of 0.5°×0.5° between 2005 and 2014 at the GlobEmission website (http://www.globemission.eu). References: Arneth, A., et al.: Process-based estimates of terrestrial ecosystem isoprene emissions: incorporating the effects of a direct CO2-isoprene interaction, Atmos. Chem. Phys., 7(1), 31-53, 2007. Arneth, A., et al.: Global terrestrial isoprene emission models: sensitivity to variability in climate and vegetation, Atmos. Chem. Phys., 11(15), 8037-8052, 2011. Bauwens, M., et al.: Satellite-based isoprene emission estimates (2007-2012) from the GlobEmission project, in ACCENT-Plus Symposium 2013 Proceedings, 2014. Stavrakou, T., et al.: Isoprene emissions over Asia 1979-2012: impact of climate and land-use changes, Atmos. Chem. Phys., 14(9), 4587-4605, doi:10.5194/acp-14-4587-2014, 2014. Stavrakou, T., et al.: How consistent are top-down hydrocarbon emissions based on formaldehyde observations from GOME-2 and OMI?, Atmos. Chem. Phys., 15(20), 11861-11884, doi:10.5194/acp-15-11861-2015, 2015. Stavrakou, T., et al.: Evaluating the performance of pyrogenic and biogenic emission inventories against one decade of space-based formaldehyde columns, Atmos. Chem. Phys., 9(3), 1037-1060, doi:10.5194/acp-9-1037-2009, 2009.
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must wear their QR Code Identity bracelets at all times while on hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
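The tiered-access idea described above can be sketched as follows. The roles, field names, and tag format are hypothetical illustrations of the design, not the system's actual schema or API.

```python
import hashlib
import uuid

# Hypothetical access tiers: the fields returned on scan depend on
# the scanner's role (field names are illustrative, not the real schema).
VISIBLE_FIELDS = {
    "public":    ["name", "emergency_contact"],
    "paramedic": ["name", "emergency_contact", "blood_type", "allergies"],
    "physician": ["name", "emergency_contact", "blood_type", "allergies",
                  "medical_history"],
}

def issue_tag():
    """Return a unique opaque tag ID to encode in a member's QR code."""
    return hashlib.sha256(uuid.uuid4().bytes).hexdigest()[:16]

def fields_visible(role):
    """Fields a scanner of the given role may read; unknown roles
    fall back to the public tier."""
    return VISIBLE_FIELDS.get(role, VISIBLE_FIELDS["public"])

print(len(issue_tag()))             # → 16
print(fields_visible("paramedic"))  # → ['name', 'emergency_contact', 'blood_type', 'allergies']
```

Encoding only an opaque ID in the QR code, and resolving it server-side per role, keeps sensitive data off the printed tag itself.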
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
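As a concrete instance of the parity-check and syndrome-decoding machinery such a text covers, here is a sketch of the classic Hamming(7,4) code, which corrects any single bit error; it is a standard textbook example, not a code taken from the book itself.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a Hamming(7,4) codeword
    (parity bits at positions 1, 2, and 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct any single bit error: the syndrome gives the
    1-based position of the flipped bit (0 means no error)."""
    c = list(c)
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:
        c[s - 1] ^= 1
    return c

cw = hamming74_encode([1, 0, 1, 1])
corrupted = cw[:]
corrupted[4] ^= 1  # flip one bit in the channel
print(hamming74_correct(corrupted) == cw)  # → True
```

The syndrome computation is exactly a parity check against the code's check matrix, the same structure the book generalizes to larger group and polynomial codes.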
NASA Astrophysics Data System (ADS)
Quinn, P.; Bates, T.; Coffman, D.; Covert, D.
2007-12-01
The impact of anthropogenic aerosol on cloud properties, cloud lifetime, and precipitation processes is one of the largest uncertainties in our current understanding of climate change. Aerosols affect cloud properties by serving as cloud condensation nuclei (CCN) thereby leading to the formation of cloud droplets. The process of cloud drop activation is a function of both the size and chemistry of the aerosol particles which, in turn, depend on the source of the aerosol and transformations that occur downwind. In situ field measurements that can lead to an improved understanding of the process of cloud drop formation and simplifying parameterizations for improving the accuracy of climate models are highly desirable. During the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS), the NOAA RV Ronald H. Brown encountered a wide variety of aerosol types ranging from marine near the Florida panhandle to urban and industrial in the Houston-Galveston area. These varied sources provided an opportunity to investigate the role of aerosol sources, aging, chemistry, and size in the activation of particles to form cloud droplets. Here, we use the correlation between variability in critical diameter for activation (determined empirically from measured CCN concentrations and the number size distribution) and aerosol composition to quantify the impact of composition on particle activation. Variability in aerosol composition is parameterized by the mass fraction of Hydrocarbon-like Organic Aerosol (HOA) for particle diameters less than 200 nm (vacuum aerodynamic). The HOA mass fraction in this size range is lowest for marine aerosol and higher for aerosol impacted by anthropogenic emissions. Combining all data collected at 0.44 percent supersaturation (SS) reveals that composition (defined in this way) explains 40 percent of the variance in the critical diameter. As expected, the dependence of activation on composition is strongest at lower SS. 
At the same time, correlations between HOA mass fraction and aerosol mean diameter show that these two parameters are essentially independent of one another for this data set. We conclude that, based on the variability of the HOA mass fraction observed during GoMACCS, composition plays a dominant role in determining the fraction of particles that are activated to form cloud droplets. Using Köhler theory, we estimate the error in calculated CCN concentrations that results if the organic fraction of the aerosol is neglected (i.e., a fully soluble composition of ammonium sulfate is assumed) for the range of organic mass fractions and mean diameters observed during GoMACCS. We then relate this error to the source and age of the aerosol. At 0.22 and 0.44 percent SS, the error is considerable for anthropogenic aerosol sampled near the source region, as this aerosol has, on average, a high POM mass fraction and a smaller mean particle diameter. The error is lower for more aged aerosol, as it has a lower POM mass fraction and a larger mean particle diameter. Hence, the percent error in calculated CCN concentration is expected to be larger for younger, organic-rich aerosol and smaller for aged, sulfate-rich aerosol and for marine aerosol. We extend this analysis to continental and marine data sets recently reported by Dusek et al. [Science, 312, 1375, 2006] and Hudson [Geophys. Res. Lett., 34, L08801, 2007].
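The link between dry size, composition, and activation can be illustrated with κ-Köhler theory (Petters and Kreidenweis, 2007), in which a single hygroscopicity parameter κ stands in for composition. This is a generic sketch with textbook constants, not the authors' exact Köhler calculation.

```python
import math

def critical_supersaturation(d_dry_m, kappa, T=293.15):
    """kappa-Koehler critical supersaturation (as a fraction) for a
    particle of dry diameter d_dry_m; a sketch with textbook constants,
    not the authors' exact Koehler model."""
    sigma = 0.072   # surface tension of water, N/m
    Mw = 0.018      # molar mass of water, kg/mol
    R = 8.314       # gas constant, J/(mol K)
    rho_w = 1000.0  # density of water, kg/m^3
    A = 4.0 * sigma * Mw / (R * T * rho_w)  # Kelvin term, m
    return math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))

# A 100 nm ammonium sulfate particle (kappa ~ 0.61) activates near 0.15% SS,
# while less hygroscopic organic-rich particles need higher supersaturation.
print(round(100 * critical_supersaturation(100e-9, 0.61), 2))  # → 0.15
```

Lowering κ (more organic-rich) or shrinking the dry diameter raises the critical supersaturation, which is the composition-size interplay the abstract quantifies empirically.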
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Modeling Urban Air Quality in the Berlin-Brandenburg Region: Evaluation of a WRF-Chem Setup
NASA Astrophysics Data System (ADS)
Kuik, F.; Churkina, G.; Butler, T. M.; Lauer, A.; Mar, K. A.
2015-12-01
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenging issue, especially in urban areas. For studying air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014 (incl. black carbon, VOCs, and mobile measurements of particle size distribution and particle mass). The model setup includes 3 nested domains with horizontal resolutions of 15 km, 3 km, and 1 km, online biogenic emissions using MEGAN 2.0, and anthropogenic emissions from the TNO-MACC-II inventory. This work serves as a basis for future studies on different aspects of air pollution in the Berlin-Brandenburg region, including how heat waves affect emissions of biogenic volatile organic compounds (BVOC) from urban vegetation (summer 2006) and the impact of selected traffic measures on air quality in the Berlin-Brandenburg area (summer 2014). The model represents the meteorology as observed in the region well for both periods. An exception is the heat wave period in 2006, where the temperature simulated at 3 km and 1 km resolution is biased low by around 2°C for urban built-up stations. First results of simulations with chemistry show that, on average, WRF-Chem simulates O3 concentrations well; however, the 8-hour maxima are underestimated and the minima are overestimated. While NOx daily means are modeled reasonably well for urban stations, they are overestimated for suburban stations. PM10 concentrations are underestimated by the model. The biases and correlation coefficients of simulated O3, NOx, and PM10 against surface observations do not improve for the 1 km domain relative to the 3 km domain. 
To improve the model performance of the 1 km domain, we will include an updated emission inventory (TNO-MACC-III) as well as an interpolation of the emission data from 7 km to 1 km resolution.
Global forestry emission projections and abatement costs
NASA Astrophysics Data System (ADS)
Böttcher, H.; Gusti, M.; Mosnier, A.; Havlik, P.; Obersteiner, M.
2012-04-01
In this paper we present forestry emission projections and associated Marginal Abatement Cost Curves (MACCs) for individual countries, based on economic, social and policy drivers. The activities cover deforestation, afforestation, and forestry management. The global model tools G4M and GLOBIOM, developed at IIASA, are applied. GLOBIOM uses global scenarios of population, diet, GDP and energy demand to inform G4M about future land and commodity prices and demand for bioenergy and timber. G4M projects emissions from afforestation, deforestation and management of existing forests. Mitigation measures are simulated by introducing a carbon tax. Mitigation activities such as reducing deforestation or enhancing afforestation are not independent of each other. In contrast to existing forestry mitigation cost curves, the presented MACCs are not developed for individual activities but for total forest land management, which makes the estimated potentials more realistic. In the assumed baseline, gross deforestation drops globally from about 12 Mha in 2005 to below 10 Mha after 2015 and reaches 0.5 Mha in 2050. Afforestation rates remain fairly constant at about 7 Mha annually. Although we observe a net increase of global forest area after 2015, net emissions from deforestation and afforestation remain positive until 2045, as the newly afforested areas accumulate carbon rather slowly. About 200 Mt CO2 per year in 2030 could be mitigated in Annex I countries at a carbon price of 50 USD. The potential for forest management improvement is very similar. Above 200 USD the potential is clearly constrained for both options. In non-Annex I countries, avoided deforestation can achieve about 1200 Mt CO2 per year at a price of 50 USD. The potential is less constrained than in Annex I countries, reaching 1800 Mt CO2 annually in 2030 at a price of 1000 USD. The potential from additional afforestation is rather limited due to the high baseline afforestation rates assumed. 
In addition, we present results of several sensitivity analyses that were run to better understand model uncertainties and the mechanisms of drivers such as agricultural productivity, GDP, wood demand and national corruption rates.
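Constructing a MACC from individual mitigation options reduces to sorting them by unit cost and accumulating their potential. The sketch below uses invented illustrative numbers, not G4M/GLOBIOM output.

```python
# Hypothetical measures (cost in USD/tCO2, potential in MtCO2/yr);
# the numbers are illustrative only, not the G4M/GLOBIOM results.
measures = [
    ("avoided deforestation", 30, 900),
    ("forest management",     50, 200),
    ("afforestation",        120, 150),
]

def macc(measures):
    """Sort by unit cost and accumulate potential: the step curve
    underlying a marginal abatement cost curve."""
    curve, total = [], 0.0
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        total += potential
        curve.append((name, cost, total))
    return curve

def potential_at_price(measures, price):
    """Total abatement available at or below a given carbon price."""
    return sum(p for _, c, p in measures if c <= price)

print(macc(measures)[-1])                # → ('afforestation', 120, 1250.0)
print(potential_at_price(measures, 50))  # → 1100
```

A MACC built over total forest land management, as in the paper, would additionally have to account for interactions between measures, which this independent-step sketch deliberately ignores.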
Möckel, Martin; Searle, Julia; Baberg, Henning Thomas; Dirschedl, Peter; Levenson, Benny; Malzahn, Jürgen; Mansky, Thomas; Günster, Christian; Jeschke, Elke
2016-01-01
We aimed to analyse the short-term and long-term outcomes of patients with end-stage renal disease (ESRD) undergoing percutaneous coronary intervention (PCI) as compared to coronary artery bypass surgery (CABG), to evaluate the optimal coronary revascularisation strategy. Retrospective analysis of routine statutory health insurance data between 2010 and 2012. The primary outcome was adjusted all-cause mortality after 30 days and major adverse cardiac and cerebrovascular events (MACCE) at 1 year. Secondary outcomes were repeat revascularisation at 30 days and 1 year and bleeding events within 7 days. The total number of cases was n=4123 (PCI: n=3417); median age was 71 (IQR 62-77), and 30.4% were women. The adjusted OR for death within 30 days was 0.59 (95% CI 0.43 to 0.81) for patients undergoing PCI versus CABG. At 1 year, the adjusted OR for MACCE was 1.58 (1.32 to 1.89) for PCI versus CABG and 1.47 (1.23 to 1.75) for all-cause death. In the subgroup of patients with acute myocardial infarction (AMI), adjusted all-cause mortality at 30 days did not differ significantly between the two groups (OR 0.75 (0.47 to 1.20)), whereas in patients without AMI the OR for 30-day mortality was 0.44 (0.28 to 0.68) for PCI versus CABG. At 1 year, the adjusted OR for MACCE in patients with AMI was 1.40 (1.06 to 1.85) for PCI versus CABG and 1.47 (1.08 to 1.99) for mortality. In this cohort of unselected patients with ESRD undergoing revascularisation, the 1-year outcome was better for CABG in patients with and without AMI. The 30-day mortality was higher in non-AMI patients with CABG, reflecting an early hazard with surgery. In cases where the patient's characteristics and risk profile make it difficult to decide on a revascularisation strategy, CABG could be the preferred option.
Channel coding in the space station data system network
NASA Technical Reports Server (NTRS)
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement to utilize coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.
Comparison of DMSP and SECS region-1 and region-2 ionospheric current boundary
NASA Astrophysics Data System (ADS)
Weygand, J. M.; Wing, S.
2016-06-01
The region-1 and region-2 boundary has traditionally been identified using data from a single spacecraft crossing the auroral region and measuring the large-scale changes in the cross-track magnetic field. With data from the AUTUMN, CANMOS, CARISMA, GIMA, DTU MGS, MACCS, McMAC, STEP, THEMIS, and USGS ground magnetometer arrays, we applied a state-of-the-art technique based on the spherical elementary current system (SECS) method developed by Amm and Viljanen (1999) in order to calculate maps of the region-1 and region-2 current systems over the North American and Greenland auroral region. Spherical elementary current (SEC) amplitude maps (a proxy for vertical currents) can be inferred at 10 s temporal resolution and a spatial resolution of ~1.5° geographic latitude (Glat) and 3.5° geographic longitude (Glon). We compare the location of the region-1 and region-2 boundary obtained by the DMSP spacecraft with the boundary observed in the SEC current amplitudes. We find that the boundaries typically agree within 0.2°±1.3°. These results indicate that the location of the region-1 and region-2 boundary can reasonably be determined from ground magnetometer data. The SECS maps represent a value-added product from the magnetometer database and can be used for contextual interpretation in conjunction with other missions, as well as helping our understanding of magnetosphere-ionosphere coupling mechanisms using the ground arrays and the magnetospheric spacecraft data.
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
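Off-design pump behavior of the kind PUMPA maps out is commonly approximated, to first order, by the classical affinity laws: flow scales with shaft speed and head with its square. The sketch below shows only these generic scaling relations, not the PUMPA internals; the design-point numbers are hypothetical.

```python
# Classical pump affinity laws (generic scaling, not the PUMPA method):
# flow Q scales linearly with shaft speed, head H with speed squared.
def scale_operating_point(q, head, n_ratio):
    # n_ratio = new shaft speed / design shaft speed
    return q * n_ratio, head * n_ratio**2

q_design, h_design = 100.0, 500.0      # hypothetical design flow and head
print(scale_operating_point(q_design, h_design, 0.5))  # -> (50.0, 125.0)
```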
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
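The blackboard pattern that the generator targets can be sketched in miniature as follows. This is plain sequential Python rather than the generated PVM/C++ code, and the two knowledge sources are invented stand-ins for the user-supplied ones.

```python
# Minimal blackboard-pattern sketch (illustrative only): knowledge
# sources watch a shared blackboard and post partial results until no
# source can contribute anything new.

class Blackboard:
    def __init__(self):
        self.data = {}

def doubler(bb):
    # Hypothetical knowledge source: derives "doubled" from "value".
    if "value" in bb.data and "doubled" not in bb.data:
        bb.data["doubled"] = bb.data["value"] * 2
        return True
    return False

def labeler(bb):
    # Hypothetical knowledge source: can only fire once "doubled" exists.
    if "doubled" in bb.data and "label" not in bb.data:
        bb.data["label"] = f"result={bb.data['doubled']}"
        return True
    return False

def run(bb, sources):
    # Control loop: keep offering the blackboard to each source until a
    # full pass produces no new contribution.
    progress = True
    while progress:
        progress = any(src(bb) for src in sources)

bb = Blackboard()
bb.data["value"] = 21
run(bb, [labeler, doubler])   # source order does not matter
print(bb.data["label"])       # -> result=42
```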
The Exchange Data Communication System based on Centralized Database for the Meat Industry
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Taniguchi, Yoji; Terada, Shuji; Komoda, Norihisa
We propose applying an EDI system, based on a centralized database and supporting conversion of code data, to the meat industry. This system makes it possible to share beef exchange data among enterprises from producers to retailers by using Web EDI technology. To convert codes efficiently, direct conversion of a sender's code to a receiver's code using a code map is employed. A system implementing this function went into operation in September 2004; twelve enterprises, including retailers, processing traders, and wholesalers, were using it as of June 2005. In this system, the number of code maps, which determines the introductory cost of the code conversion function, was lower than the theoretical value and close to the case in which a standard code is mediated.
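The direct code-map conversion described above amounts to a keyed lookup from a sender's item code to the receiver's. A minimal sketch, with enterprise names and item codes invented for illustration:

```python
# Sketch of direct code conversion via a code map. All identifiers
# below are made up; a real EDI system would hold the map in the
# centralized database.
code_map = {
    # (sender, sender's item code) -> receiver's item code
    ("wholesaler_A", "BEEF-001"): "W45-BF-01",
    ("wholesaler_A", "BEEF-002"): "W45-BF-02",
}

def convert(sender, code):
    # Direct lookup; a production system would fall back to a standard
    # code or report an error for unmapped items.
    return code_map[(sender, code)]

print(convert("wholesaler_A", "BEEF-001"))  # -> W45-BF-01
```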
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method that shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system achieves an increasingly significant coding gain as throughput rises.
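Rate-compatible puncturing, as used above, raises the rate of a single mother code by simply not transmitting selected bits. A small sketch of the arithmetic (the code sizes are illustrative, chosen only to reproduce the 2/3-to-5/6 range quoted in the abstract):

```python
from fractions import Fraction

# Puncturing removes transmitted bits but leaves the k information
# bits intact, so the rate rises from k/n to k/(n - punctured).
def punctured_rate(k, n, punctured):
    return Fraction(k, n - punctured)

k, n = 10, 15                      # illustrative mother code, rate 2/3
print(punctured_rate(k, n, 0))     # -> 2/3
print(punctured_rate(k, n, 3))     # -> 5/6 (3 parity bits punctured)
```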
Encrypted holographic data storage based on orthogonal-phase-code multiplexing.
Heanue, J F; Bashaw, M C; Hesselink, L
1995-09-10
We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.
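The low cross-talk claimed above rests on the orthogonality of the phase codes: distinct code words correlate to zero, so a page addressed with one code contributes nothing to readout with another. As a stand-in for the phase codes, the sketch below uses ±1 Walsh-Hadamard rows, which have the same pairwise-orthogonality property.

```python
# Orthogonal code words correlate to zero (no cross-talk term) while a
# matched code word correlates to full strength. Hadamard rows are a
# generic illustration, not the paper's phase codes.
def hadamard(n):
    # Sylvester construction: H(2n) = [[H, H], [H, -H]].
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

H = hadamard(4)
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(dot(H[0], H[1]))  # -> 0 (orthogonal: no cross-talk)
print(dot(H[2], H[2]))  # -> 4 (matched code: full signal)
```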
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
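At the heart of every VQ variant above is the same primitive: replace an input vector by the index of its nearest codebook entry. A minimal sketch, with a made-up two-dimensional codebook:

```python
# Basic vector-quantization step (illustrative; the codebook values
# are invented, and real VWC/FSVQ codebooks are trained or lattice-based).
codebook = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]

def quantize(v):
    # Index of the nearest codebook entry in squared Euclidean distance.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

print(quantize((0.9, 1.2)))  # -> 1: closest to (1.0, 1.0)
```

Only the index is transmitted, which is where the compression comes from; FSVQ additionally switches codebooks based on the previously sent index.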
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
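The benefit of erasing unreliable bits before RS decoding follows from the standard errors-and-erasures bound: a code with n - k parity symbols corrects e errors plus f erasures whenever 2e + f ≤ n - k, so a flagged (erased) symbol costs half as much correction power as an undetected error. A small sketch of that arithmetic, using the NASA-standard RS(255,223) as the example:

```python
# Errors-and-erasures decoding condition for an (n, k) Reed-Solomon
# code: correctable iff 2*errors + erasures <= n - k.
def correctable(n, k, errors, erasures):
    return 2 * errors + erasures <= n - k

n, k = 255, 223
print(correctable(n, k, 16, 0))   # -> True  (errors only, t = 16)
print(correctable(n, k, 17, 0))   # -> False (one error too many)
print(correctable(n, k, 10, 12))  # -> True  (mixed: 2*10 + 12 = 32)
```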
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code, with a code rate of 95.1%, is constructed by increasing the length of the SCG-LDPC (3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB higher than that of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
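The length ceiling motivating TSRS codes is simply the symbol alphabet size: an RS code over m-bit characters works in GF(2^m) and so cannot exceed 2^m symbols per code word. A one-line sketch of that bound:

```python
# Length ceiling for character-oriented RS codes: with m-bit symbols
# the alphabet has 2^m elements, bounding the code length at 2^m
# (2^m - 1 for the classical narrow-sense construction).
def max_rs_length(m):
    return 2 ** m

for m in (4, 8):
    print(m, max_rs_length(m))  # 8-bit symbols cap RS length at 256
```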
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; the suite performs many functions.
Transient dynamics capability at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.
1993-01-01
A brief overview of the transient dynamics capabilities at Sandia National Laboratories, with an emphasis on recent developments and current research, is presented. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS is running on several different systems at SNL, including Cray Unicos, Hewlett Packard HP-UX, Digital Equipment Ultrix, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, is presented. Abstracts and references for the codes are listed at the end of the report.
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. The application of coded excitation technology in a medical ultrasonic Doppler imaging system offers the potential of higher SNR and deeper penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals. Furthermore, proper coded excitation benefits the received spectrum of the Doppler signal. Firstly, this paper analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging systems, showing the advantages and promise of coded excitation, and then introduces its principle and theory. Secondly, we compare several code series (including chirp and pseudo-chirp signals, Barker codes, Golay complementary series, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio, and the sensitivity of the Doppler signal, we choose Barker codes as the code series. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantage of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
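The range-sidelobe criterion that favors Barker codes can be checked directly: the length-13 Barker code has an aperiodic autocorrelation with a mainlobe of 13 and sidelobes no larger than 1 in magnitude, which is what makes pulse compression clean. A short sketch:

```python
# Aperiodic autocorrelation of the length-13 Barker code: tall
# mainlobe, peak sidelobe magnitude of 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(code, lag):
    return sum(code[i] * code[i + lag] for i in range(len(code) - lag))

print(autocorr(barker13, 0))                  # -> 13 (mainlobe)
print(max(abs(autocorr(barker13, lag))
          for lag in range(1, len(barker13))))  # -> 1 (peak sidelobe)
```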
Accuracy and time requirements of a bar-code inventory system for medical supplies.
Hanson, L B; Weinswig, M H; De Muth, J E
1988-02-01
The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Code of Federal Regulations, 2011 CFR
2011-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo, and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo, and LDPC coding perform well: the communication BER is below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
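Of the schemes compared, the convolutional encoder is the simplest to sketch. Below is the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5), the kind usually decoded with the Viterbi algorithm; this is a generic example, not the specific code from the paper.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators
# (7, 5) octal: each input bit produces two parity bits computed over
# a 3-bit shift register.
def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # parity under g1
        out.append(bin(state & g2).count("1") % 2)  # parity under g2
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```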
Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N
2012-01-01
Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information about the internal representation of this sequence in motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning right-hand movements improves the system of vector coding (coding of movements). Learning right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.
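The two sequence codes contrasted above can be stated concretely: positional coding stores the visited positions themselves, while vector coding stores the displacement from each position to the next. A small sketch with made-up 2-D hand positions:

```python
# Positional code: the positions themselves. Vector code: successive
# displacements. Either representation determines the other given the
# start point. Coordinates are invented for illustration.
positions = [(0, 0), (1, 0), (1, 2), (0, 2)]

vectors = [(x2 - x1, y2 - y1)
           for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
print(vectors)  # -> [(1, 0), (0, 2), (-1, 0)]

# Recover the positional code by integrating the vector code.
x, y = positions[0]
rebuilt = [(x, y)]
for dx, dy in vectors:
    x, y = x + dx, y + dy
    rebuilt.append((x, y))
print(rebuilt == positions)  # -> True
```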
ERIC Educational Resources Information Center
VanBiervliet, Alan
A project to develop and evaluate a bar-code reader system as a self-directed information and instructional aid for handicapped nonreaders is described. The bar-code technology involves passing a light-sensitive pen or laser over a printed code with bars which correspond to coded numbers. A system would consist of a portable device which could…
Technology Infusion of CodeSonar into the Space Network Ground Segment
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2009-01-01
This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off the shelf system that analyzes programs written in C, C++ or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large scale software developed using formal processes. The systems studied are mission critical in nature but some use commodity computer systems.
Trellis coded multilevel DPSK system with doppler correction for mobile satellite channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor)
1991-01-01
A trellis coded multilevel differential phase shift keyed mobile communication system. The system of the present invention includes a trellis encoder for translating input signals into trellis codes; a differential encoder for differentially encoding the trellis coded signals; a transmitter for transmitting the differentially encoded trellis coded signals; a receiver for receiving the transmitted signals; a differential demodulator for demodulating the received differentially encoded trellis coded signals; and a trellis decoder for decoding the differentially demodulated signals.
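The differential encoding step in the patent above means that phase *differences*, not absolute phases, carry the symbols, so the receiver needs no absolute phase reference. A minimal sketch for 4-level DPSK (phases as multiples of 90°; the trellis coding stage is omitted):

```python
# Differential encode/decode for M-level DPSK, phases represented as
# integers modulo `levels`. Sketch of the differential stage only.
def dpsk_encode(symbols, levels=4):
    phase, out = 0, []
    for s in symbols:
        phase = (phase + s) % levels     # accumulate phase steps
        out.append(phase)
    return out

def dpsk_decode(phases, levels=4):
    prev, out = 0, []
    for p in phases:
        out.append((p - prev) % levels)  # difference of adjacent phases
        prev = p
    return out

msg = [1, 3, 0, 2]
tx = dpsk_encode(msg)
print(tx)                      # -> [1, 0, 0, 2]
print(dpsk_decode(tx) == msg)  # -> True
```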
SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, X; Folkerts, M; Shi, F
Purpose: The graphics processing unit (GPU) has become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, repeated effort is often spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After a run completes, the system automatically summarizes and displays the computing results. We also released an SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. Submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. Developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functional. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.
NASA Astrophysics Data System (ADS)
Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria
2017-07-01
In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise, and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW), and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine whether these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
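Of the candidates above, Huffman coding is the easiest to sketch: frequent symbols receive short code words and rare symbols long ones, and the resulting code is prefix-free. A compact, generic implementation (not the study's FVLF-specific codes):

```python
import heapq
from collections import Counter

# Build a Huffman code for the symbols of `text`. Each heap entry is
# (frequency, tiebreak, partial code table); merging two entries
# prefixes one subtree's codes with 0 and the other's with 1.
def huffman_codes(text):
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[s] for s in "aaaabbc")
print(len(encoded))  # 'a', the most frequent symbol, gets a 1-bit code
```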
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
On the potential of GHG emissions estimation by multi-species inverse modeling
NASA Astrophysics Data System (ADS)
Gerbig, Christoph; Boschetti, Fabio; Filges, Annette; Marshall, Julia; Koch, Frank-Thomas; Janssens-Maenhout, Greet; Nedelec, Philippe; Thouret, Valerie; Karstens, Ute
2016-04-01
Reducing anthropogenic emissions of greenhouse gases is one of the most important elements in mitigating climate change. However, as emission reporting is often incomplete or incorrect, there is a need to independently monitor the emissions. Despite this, in the case of CO2 one typically assumes that emissions from fossil fuel burning are well known, and only natural fluxes are constrained by atmospheric measurements via inverse modelling. On the other hand, species such as CO2, CH4, and CO often have common emission patterns, and thus share part of the uncertainties, both related to the prior knowledge of emissions, and to model-data mismatch error. We implemented the Lagrangian transport model STILT driven by ECMWF analysis and short-term forecast meteorological fields together with emission sector and fuel-type specific emissions of CO2, CH4 and CO from EDGARv4.3 at a spatial resolution of 0.1 x 0.1 deg., providing an atmospheric fingerprint of anthropogenic emissions for multiple trace gases. We combine the regional STILT simulations with lateral boundary conditions for CO2 and CO from MACC forecasts and CH4 from TM3 simulations. Here we apply this framework to airborne in-situ measurements made in the context of IAGOS (In-service Aircraft for a Global Observing System) and in the context of a HALO mission conducted for testing the active remote sensing system CHARM-F during April/May 2015 over central Europe. Simulated tracer distributions are compared to observed profiles of CO2, CH4, and CO, and the potential for a multi-species inversion using synergies between different tracers is assessed with respect to the uncertainty reduction in retrieved emission fluxes. Implications for inversions solving for anthropogenic emissions using atmospheric observations from ICOS (Integrated Carbon Observing System) are discussed.
The application of LDPC code in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication, as it can overcome the frequency-selective fading of the wireless channel, increase system capacity and improve spectral efficiency. Introducing error-correcting coding into the system can further improve its performance. LDPC (low-density parity-check) codes are a class of error-correcting codes that improve system reliability and anti-interference ability, and their decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in MIMO-OFDM systems.
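As context for the abstract above, the defining property of an LDPC code is a sparse parity-check matrix H for which every codeword c satisfies H·c = 0 (mod 2). The sketch below uses a toy matrix for illustration only; real LDPC matrices are far larger and sparser, and this is not a code from the paper.

```python
import numpy as np

# Toy sparse parity-check matrix H (a real LDPC H is much larger and sparser).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
])

def syndrome(word, H):
    """Parity checks: the zero vector iff `word` is a codeword."""
    return H.dot(word) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
assert not syndrome(codeword, H).any()  # valid codeword passes all checks

corrupted = codeword.copy()
corrupted[2] ^= 1                       # single bit error
assert syndrome(corrupted, H).any()     # error detected by parity checks
```

Iterative belief-propagation decoding, which makes LDPC decoding "simple and easy to operate", works by passing messages along the few nonzero entries of H.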
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coryell, E.W.; Siefken, L.J.; Harvego, E.A.
1997-07-01
The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building-block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models, and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.
Myint, Kyaw Z.; Xie, Xiang-Qun
2015-01-01
This chapter focuses on the fingerprint-based artificial neural networks QSAR (FANN-QSAR) approach to predict biological activities of structurally diverse compounds. Three types of fingerprints, namely ECFP6, FP2, and MACCS, were used as inputs to train the FANN-QSAR models. The results were benchmarked against known 2D and 3D QSAR methods, and the derived models were used to predict cannabinoid (CB) ligand binding activities as a case study. In addition, the FANN-QSAR model was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds. We discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research. PMID:25502380
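For context on the fingerprint inputs named above (ECFP6, FP2, MACCS): molecular fingerprints are fixed-length bit vectors, and a standard way to compare two of them is the Tanimoto coefficient. The sketch below represents fingerprints as lists of on-bit indices; the bit positions are invented for illustration and are not real MACCS key assignments.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints (sets of on-bits)."""
    a, b = set(fp_a), set(fp_b)
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

# Hypothetical on-bit indices for two 166-bit MACCS-style fingerprints.
fp1 = [3, 17, 42, 88, 120]
fp2 = [3, 17, 42, 99]
sim = tanimoto(fp1, fp2)
assert abs(sim - 3 / 6) < 1e-12   # 3 shared on-bits, 6 distinct on-bits
```

In the FANN-QSAR setting the raw bit vector, rather than a similarity score, is fed to the neural network as the input layer.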
Two dimension MDW OCDMA code cross-correlation for reduction of phase induced intensity noise
NASA Astrophysics Data System (ADS)
Ahmed, Israa Sh.; Aljunid, Syed A.; Nordin, Junita M.; Dulaimi, Layth A. Khalil Al; Matem, Rima
2017-11-01
In this paper, we first review the 2-D MDW code cross-correlation equations and tables, which are improved significantly by exploiting the code's correlation properties. These codes can be used in synchronous optical CDMA systems to cancel multiple-access interference and maximally suppress phase-induced intensity noise (PIIN). A low Psr follows from the reduction of the interference noise achieved by the 2-D MDW code's PIIN suppression. A high data rate increases the BER, requires high effective power and severely degrades system performance. The 2-D W/T MDW code offers excellent system performance, with PIIN suppressed as far as possible at the optimum Psr even at a high data bit rate. The 2-D MDW code shows better tolerance to PIIN than other codes, with enhanced system performance. We show by numerical analysis that PIIN is maximally suppressed by the MDW code, through its minimized cross-correlation, in comparison with 2-D PDC and 2-D MQC OCDMA code schemes.
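The cross-correlation property this abstract relies on can be sketched numerically: for spectral-amplitude codes, the in-phase cross-correlation between two users' code vectors determines how much multiple-access interference one user contributes to another. The weight-3 codes below are hypothetical examples, not actual MDW code words.

```python
import numpy as np

def cross_correlation(code_a, code_b):
    """In-phase cross-correlation of two binary spectral-amplitude codes."""
    return int(np.sum(np.array(code_a) & np.array(code_b)))

# Hypothetical weight-3 codes for two subscribers (not actual MDW codes).
user1 = [1, 1, 0, 1, 0, 0, 0, 0]
user2 = [0, 0, 1, 1, 0, 1, 0, 0]
assert cross_correlation(user1, user1) == 3  # autocorrelation = code weight
assert cross_correlation(user1, user2) == 1  # low cross-correlation limits MAI
```

Code families such as MDW are designed so that this pairwise overlap stays at most one chip, which is what bounds both MAI and PIIN.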
NASA Astrophysics Data System (ADS)
Sikder, Somali; Ghosh, Shila
2018-02-01
This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
10 CFR 434.99 - Explanation of numbering system for codes.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...
10 CFR 434.99 - Explanation of numbering system for codes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...
48 CFR 452.219-70 - Size Standard and NAICS Code Information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Size Standard and NAICS Code Information. 452.219-70 Section 452.219-70 Federal Acquisition Regulations System DEPARTMENT OF... System Code(s) and business size standard(s) describing the products and/or services to be acquired under...
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple-access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of a SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, with reference to the bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.
Production code control system for hydrodynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slone, D.M.
1997-08-18
We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
BCH codes for large IC random-access memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1983-01-01
In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
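As a concrete sketch of "codes given by their parity-check matrices", the (7,4) Hamming code, the simplest binary BCH code, can be encoded and checked as follows. This is a generic textbook example, not one of the shortened codes from the report.

```python
import numpy as np

# Generator matrix of the (7,4) Hamming code, a simple binary BCH code.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
# Corresponding parity-check matrix H (G @ H.T = 0 mod 2).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

assert not (G.dot(H.T) % 2).any()       # H checks every row of G

message = np.array([1, 0, 1, 1])
codeword = message.dot(G) % 2           # systematic encoding
assert not (H.dot(codeword) % 2).any()  # valid codeword
```

For a memory system, a single-bit error flips one position of the stored codeword; the resulting nonzero syndrome H·c identifies the failed bit, which is the basis of the decoding the report discusses.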
High performance and cost effective CO-OFDM system aided by polar code.
Liu, Ling; Xiao, Shilin; Fang, Jiafei; Zhang, Lu; Zhang, Yunhao; Bi, Meihua; Hu, Weisheng
2017-02-06
A novel polar coded coherent optical orthogonal frequency division multiplexing (CO-OFDM) system is proposed and demonstrated through experiment for the first time. The principle of a polar coded CO-OFDM signal is illustrated theoretically and a suitable polar decoding method is discussed. Results show that the polar coded CO-OFDM signal achieves a net coding gain (NCG) of more than 10 dB at a bit error rate (BER) of 10^-3 over 25-Gb/s 480-km transmission in comparison with conventional CO-OFDM. Also, compared to the 25-Gb/s low-density parity-check (LDPC) coded CO-OFDM 160-km system, the polar code provides an NCG of 0.88 dB at BER = 10^-3. Moreover, the polar code can greatly relax the laser linewidth requirement, yielding a more cost-effective CO-OFDM system.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. The number of users and the type of codes used in the optical system directly determine system performance. MAI can be restricted by efficient design of optical codes and by implementing them with an architecture that accommodates a larger number of users. Hence, there is a need for a technique such as spectral direct detection (SDD) with a modified double weight code, which can provide better cardinality and good correlation properties.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
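The performance reference quoted above is the one-dimensional difference entropy, i.e. the entropy of sample-to-sample differences along a scan line. A minimal sketch of computing it, with hypothetical pixel data:

```python
import math
from collections import Counter

def difference_entropy(pixels):
    """Entropy (bits/pixel) of sample-to-sample differences along a scan line."""
    diffs = [b - a for a, b in zip(pixels, pixels[1:])]
    counts = Counter(diffs)
    n = len(diffs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical 8-bit scan line: smooth data gives a low difference entropy.
line = [10, 11, 11, 12, 13, 13, 14, 15]
h = difference_entropy(line)
assert 0.0 <= h <= 8.0
```

The coder described above achieves output rates within 0.25 bit/pixel of this quantity without storing any code words.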
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that, for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-locked loop (PLL) tracking error on coding system performance was studied using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
Zelingher, Julian; Ash, Nachman
2013-05-01
The Israeli healthcare system has undergone major processes for the adoption of health information technologies (HIT), and enjoys high levels of utilization in hospital and ambulatory care. Coding is an essential infrastructure component of HIT, and its purpose is to represent data in a simplified and common format, enhancing its manipulation by digital systems. Proper coding of data enables efficient identification, storage, retrieval and communication of data. Utilization of uniform coding systems by different organizations enables data interoperability between them, facilitating communication and integrating data elements originating in different information systems from various organizations. Current needs in Israel for health data coding include recording and reporting of diagnoses for hospitalized patients, outpatients and visitors of the emergency department, coding of procedures and operations, coding of pathology findings, reporting of discharge diagnoses and causes of death, billing codes, organizational data warehouses and national registries. New national projects for clinical data integration, obligatory reporting of quality indicators and new Ministry of Health (MOH) requirements for HIT necessitate a high level of interoperability that can be achieved only through the adoption of uniform coding. Additional pressures were introduced by the USA decision to stop the maintenance of the ICD-9-CM codes that are also used by Israeli healthcare, and the adoption of ICD-10-CM and ICD-10-PCS as the main coding systems for billing purposes. The USA has also mandated utilization of SNOMED-CT as the coding terminology for the Electronic Health Record (EHR) problem list, and for reporting quality indicators to the CMS. Hence, the Israeli MOH has recently decided that discharge diagnoses will be reported using ICD-10-CM codes, and SNOMED-CT will be used to code the clinical information in the EHR.
We reviewed the characteristics, strengths and weaknesses of these two coding systems. In summary, the adoption of ICD-10-CM is in line with the USA decision to abandon ICD-9-CM, and the Israeli healthcare system could benefit from USA healthcare efforts in this direction. The large content of SNOMED-CT and its sophisticated hierarchical data structure will enable advanced clinical decision support and quality improvement applications.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
2012-03-01
advanced antenna systems; AMC: adaptive modulation and coding; AWGN: additive white Gaussian noise; BPSK: binary phase shift keying; BS: base station; BTC: block turbo coding. ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating ...
Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns
NASA Technical Reports Server (NTRS)
Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.
2006-01-01
Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.
3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Velkov, K.; Lizorkin, M.
1997-07-01
This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from the Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes for different transient and accident scenarios are presented. The need for further investigation is discussed.
In silico prediction of ROCK II inhibitors by different classification approaches.
Cai, Chuipu; Wu, Qihui; Luo, Yunxia; Ma, Huili; Shen, Jiangang; Zhang, Yongbin; Yang, Lei; Chen, Yunbo; Wen, Zehuai; Wang, Qi
2017-11-01
ROCK II is an important pharmacological target linked to central nervous system disorders such as Alzheimer's disease. The purpose of this research is to generate ROCK II inhibitor prediction models by machine learning approaches. First, four sets of descriptors were calculated with MOE 2010 and PaDEL-Descriptor, and optimized by F-score and linear forward selection methods. In addition, four classification algorithms were used to initially build 16 classifiers with k-nearest neighbors [Formula: see text], naïve Bayes, random forest, and support vector machine. Furthermore, three sets of structural fingerprint descriptors were introduced to enhance the predictive capacity of the classifiers, which were assessed with fivefold cross-validation, test set validation and external test set validation. The two best models, MFK + MACCS and MLR + SubFP, both achieved MCC values of 0.925 on the external test set. After that, a privileged substructure analysis was performed to reveal common chemical features of ROCK II inhibitors. Finally, binding modes were analyzed to identify relationships between molecular descriptors and activity, while the main interactions were revealed by comparing the docking interactions of the most potent and the weakest ROCK II inhibitors. To the best of our knowledge, this is the first report on ROCK II inhibitors utilizing machine learning approaches, and it provides a new method for discovering novel ROCK II inhibitors.
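The model-quality figure quoted above is the Matthews correlation coefficient (MCC). A minimal sketch of its computation from confusion-matrix counts; the counts below are hypothetical, not values from the study:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary classifier."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical confusion-matrix counts for an external test set.
assert mcc(50, 50, 0, 0) == 1.0    # perfect classifier
assert mcc(0, 0, 50, 50) == -1.0   # perfectly inverted classifier
```

MCC ranges from -1 to +1 and, unlike plain accuracy, stays informative on imbalanced active/inactive sets, which is why it is a common QSAR benchmark.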
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
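The multi-exposure fusion step mentioned above can be sketched as follows, assuming a linear camera response and a simple hat-shaped pixel weighting. This is a generic Debevec-style merge for illustration, not the authors' reconstruction algorithm.

```python
import numpy as np

def estimate_radiance(images, exposure_times):
    """Merge LDR exposures into a radiance map (simplified Debevec-style,
    assuming a linear camera response)."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(float) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weighting: trust mid-range pixels
        acc += w * z / t                  # divide out exposure time
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Hypothetical pixels seen under two exposure times (seconds).
short = np.array([[40, 200]], dtype=np.uint8)
long_ = np.array([[80, 255]], dtype=np.uint8)
radiance = estimate_radiance([short, long_], [0.01, 0.02])
assert radiance.shape == (1, 2) and (radiance >= 0).all()
```

The hat weighting discounts saturated and underexposed pixels, so each scene point is reconstructed mainly from the exposure that captured it best.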
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which has broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Because they are totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
HERCULES: A Pattern Driven Code Transformation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing
2012-01-01
New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies (code patterns, transformation scripts, and compiler plugins) to provide the scientist with an environment in which to quickly implement code transformations that suit their needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.
Performance analysis of optical wireless communication system based on two-fold turbo code
NASA Astrophysics Data System (ADS)
Chen, Jun; Huang, Dexiu; Yuan, Xiuhua
2005-11-01
Optical wireless communication (OWC) is beginning to emerge in the telecommunications market as a strategy to meet last-mile demand, owing to its unique combination of features. Turbo codes have an impressive near-Shannon-limit error-correcting performance. Twofold turbo codes have recently been introduced as the least complex member of the multifold turbo code family. In this paper, we first present the mathematical model of the signal and of the optical wireless channel with fading, together with a bit error rate model including scintillation; we then propose a new turbo code method for use in OWC systems. We obtain a better BER curve for the OWC system with the twofold turbo code than with a common turbo code.
Avidan, Alexander; Weissman, Charles; Levin, Phillip D
2015-04-01
Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
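A sketch of how a case record might be serialized for embedding in such a QR code. The field names and schema here are hypothetical, not those of the study's anesthesia information management system; the resulting string would then be handed to a QR encoder (e.g. the third-party `qrcode` package).

```python
import json

def case_log_payload(case_id, date, procedure, asa_class):
    """Serialize a hypothetical anesthesia case record for embedding in a
    QR code. Field names are illustrative, not the study's actual schema."""
    record = {
        "case_id": case_id,
        "date": date,
        "procedure": procedure,
        "asa": asa_class,
    }
    # Compact separators keep the payload short, reducing QR code density.
    return json.dumps(record, separators=(",", ":"))

payload = case_log_payload("C-1042", "2015-04-01", "lap. cholecystectomy", 2)
assert json.loads(payload)["case_id"] == "C-1042"
```

On scanning, the resident's smartphone app would parse the same JSON back into a case-log entry.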
Deppe, Antje-Christin; Arbash, Wasim; Kuhn, Elmar W; Slottosch, Ingo; Scherner, Maximilian; Liakopoulos, Oliver J; Choi, Yeong-Hoon; Wahlers, Thorsten
2016-04-01
In the present systematic review with meta-analysis, we sought to determine the current strength of evidence for or against off-pump and on-pump coronary artery bypass grafting (CABG) with regard to hard clinical end-points, graft patency and cost-effectiveness. We performed a meta-analysis of randomized controlled trials (RCTs) only, each reporting at least one of the desired end-points: (i) major adverse cardiac and cerebrovascular events (MACCE), (ii) all-cause mortality, (iii) myocardial infarction, (iv) cerebrovascular accident, (v) repeat revascularization, (vi) graft patency and (vii) cost-effectiveness. The pooled treatment effects [odds ratio (OR) or weighted mean difference, 95% confidence intervals (95% CIs)] were assessed using a fixed or random effects model. A total of 16 904 patients from 51 studies were identified after a literature search of the major databases using a predefined keyword list. The incidence of MACCE did not differ between the groups, either during the first 30 days (OR: 0.93; 95% CI: 0.82-1.04) or at the longest available follow-up (OR: 1.01; 95% CI: 0.92-1.12). While the incidence of mid-term graft failure (OR: 1.37; 95% CI: 1.09-1.72) and the need for repeat revascularization (OR: 1.55; 95% CI: 1.33-1.80) were increased after off-pump surgery, on-pump surgery was associated with an increased occurrence of stroke (OR: 0.74; 95% CI: 0.58-0.95), renal impairment (OR: 0.79; 95% CI: 0.71-0.89) and mediastinitis (OR: 0.44; 95% CI: 0.31-0.62). There was no difference between on- and off-pump surgery with regard to hard clinical end-points, including myocardial infarction and mortality. The present systematic review emphasizes that both off- and on-pump surgery provide excellent and comparable results in patients requiring surgical revascularization. The choice of either strategy should take into account the individual patient profile (comorbidities, life expectancy, etc.)
and, importantly, the surgeon's experience in performing on- or off-pump CABG in routine practice. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
Frankenberg, Christian; Kulawik, Susan S.; Wofsy, Steven C.; Chevallier, Frédéric; Daube, Bruce; Kort, Eric A.; O'Dell, Christopher; Olsen, Edward T.; Osterman, Gregory
2016-06-01
In recent years, space-borne observations of atmospheric carbon dioxide (CO2) have been increasingly used in global carbon-cycle studies. To obtain added value from space-borne measurements, they have to meet stringent accuracy and precision requirements, the latter being less critical because random error can be reduced simply by increasing the sample size. Validation of CO2 column-averaged dry air mole fractions (XCO2) relies heavily on measurements of the Total Carbon Column Observing Network (TCCON). Owing to the sparseness of the network and the requirements imposed on space-based measurements, independent additional validation is highly valuable. Here, we use observations from the High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER) Pole-to-Pole Observations (HIPPO) flights from 01/2009 through 09/2011 to validate CO2 measurements from satellites (Greenhouse Gases Observing Satellite - GOSAT, Tropospheric Emission Spectrometer - TES, Atmospheric Infrared Sounder - AIRS) and atmospheric inversion models (CarbonTracker CT2013B, Monitoring Atmospheric Composition and Climate (MACC) v13r1). We find that the atmospheric models capture the XCO2 variability observed in HIPPO flights very well, with correlation coefficients (r2) of 0.93 and 0.95 for CT2013B and MACC, respectively. Some larger discrepancies can be observed in profile comparisons at higher latitudes, in particular at 300 hPa during the peaks of either carbon uptake or release. These deviations can be up to 4 ppm and hint at a misrepresentation of vertical transport. Comparisons with the GOSAT satellite are of comparable quality, with an r2 of 0.85, a mean bias μ of -0.06 ppm, and a standard deviation σ of 0.45 ppm. TES exhibits an r2 of 0.75, μ of 0.34 ppm, and σ of 1.13 ppm. For AIRS, we find an r2 of 0.37, μ of 1.11 ppm, and σ of 1.46 ppm, with latitude-dependent biases.
For these comparisons, at least 6, 20, and 50 atmospheric soundings were averaged for GOSAT, TES, and AIRS, respectively. Overall, we find that GOSAT soundings over the remote Pacific Ocean mostly meet the stringent accuracy requirements of about 0.5 ppm for space-based CO2 observations.
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.; Woodhouse, Matthew T.; Galbally, Ian E.
2018-03-01
Dry deposition at the Earth's surface is an important sink of atmospheric ozone. Currently, dry deposition of ozone to the ocean surface in atmospheric chemistry models has the largest uncertainty compared to deposition to other surface types, with implications for global tropospheric ozone budget and associated radiative forcing. Most global models assume that the dominant term of surface resistance in the parameterisation of ozone dry deposition velocity at the oceanic surface is constant. There have been recent mechanistic parameterisations for air-sea exchange that account for the simultaneous waterside processes of ozone solubility, molecular diffusion, turbulent transfer, and first-order chemical reaction of ozone with dissolved iodide and other compounds, but there are questions about their performance and consistency. We present a new two-layer parameterisation scheme for the oceanic surface resistance by making the following realistic assumptions: (a) the thickness of the top water layer is of the order of a reaction-diffusion length scale (a few micrometres) within which ozone loss is dominated by chemical reaction and the influence of waterside turbulent transfer is negligible; (b) in the water layer below, both chemical reaction and waterside turbulent transfer act together and are accounted for; and (c) chemical reactivity is present through the depth of the oceanic mixing layer. The new parameterisation has been evaluated against dry deposition velocities from recent open-ocean measurements. It is found that the inclusion of only the aqueous iodide-ozone reaction satisfactorily describes the measurements. In order to better quantify the global dry deposition loss and its interannual variability, modelled 3-hourly ozone deposition velocities are combined with the 3-hourly MACC (Monitoring Atmospheric Composition and Climate) reanalysis ozone for the years 2003-2012. 
The resulting ozone dry deposition is found to be 98.4 ± 30.0 Tg O3 yr-1 for the ocean and 722.8 ± 87.3 Tg O3 yr-1 globally. The new estimate of the ocean component is approximately a third of the current model estimates. This reduction corresponds to an approximately 20 % decrease in the total global ozone dry deposition, which (with all other components being unchanged) is equivalent to an increase of approximately 5 % in the modelled tropospheric ozone burden and a similar increase in tropospheric ozone lifetime.
Furukawa, Nobuyuki; Kuss, Oliver; Preindl, Konstantin; Renner, André; Aboud, Anas; Hakim-Meibodi, Kavous; Benzinger, Michael; Pühler, Thomas; Ensminger, Stephan; Fujita, Buntaro; Becker, Tobias; Gummert, Jan F; Börgermann, Jochen
2017-10-01
Meta-analyses of observational and randomized studies have demonstrated benefits of off-pump surgery for hard and surrogate endpoints. In some of them, increased re-revascularization was noted in the off-pump groups, which could impact long-term survival. Therefore, we analyzed the course of all patients undergoing isolated coronary surgery with regard to the major adverse cardiac and cerebrovascular event (MACCE) criteria. A prospective register from a high-volume off-pump center recorded all anaortic off-pump (ANA), clampless off-pump (PAS-Port) and conventional (CONV) coronary artery bypass operations between July 2009 and June 2015. Propensity score matching was performed based on 28 preoperative risk variables. We identified 935 triplets (N = 2805). Compared with CONV, in-hospital mortality was lower in the ANA group (OR for ANA [95% CI] 0.25 [0.06; 0.83], P = 0.021) and in the PAS-Port group (OR for PAS-Port [95% CI] 0.50 [0.17; 1.32], P = 0.17). In mid-term follow-up there were no significant differences between the groups regarding mortality (HR for ANA [95% CI] 0.83 [0.55-1.26], P = 0.38; HR for PAS-Port [95% CI] 1.06 [0.70-1.59], P = 0.79), incidence of stroke (HR for ANA 0.81 [0.43-1.53], P = 0.52; HR for PAS-Port 0.78 [0.41-1.50], P = 0.46), myocardial infarction (HR for ANA 0.53 [0.22-1.31], P = 0.17; HR for PAS-Port 0.78 [0.37-1.66], P = 0.52) or re-revascularization rate (HR for ANA 0.99 [0.67-1.44], P = 0.94; HR for PAS-Port 0.95 [0.65-1.38], P = 0.77). Both off-pump clampless techniques were associated with lower in-hospital mortality compared with conventional CABG. The mid-term course showed no difference with regard to the MACCE criteria between anaortic off-pump, clampless off-pump using PAS-Port and conventional CABG. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
2011-01-01
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems. Modern wireless systems improve reliability with error-correcting codes, e.g., Turbo codes [2] and Low-Density Parity-Check (LDPC) codes [3]; the challenge of applying both MIMO and ECC in wireless systems is addressed, and the performance of coded lattice-reduction (LR)-aided detectors is illustrated.
Expert system for maintenance management of a boiling water reactor power plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Shen; Liou, L.W.; Levine, S.
1992-01-01
An expert system code has been developed for the maintenance of two boiling water reactor units in Berwick, Pennsylvania, that are operated by the Pennsylvania Power and Light Company (PP and L). The objective of this expert system code, in which the knowledge of experienced operators and engineers is captured and implemented, is to support decisions regarding which components can be safely and reliably removed from service for maintenance. It can also serve as a query-answering facility for checking the plant system status and for training purposes. The operating and maintenance information of a large number of support systems, which must be available for emergencies and/or in the event of an accident, is stored in the database of the code. It identifies the relevant technical specifications and management rules for shutting down any one of the systems or removing a component from service to support maintenance. Because of the complexity and time needed to incorporate a large number of systems and their components, the first phase of the expert system develops a prototype code, which includes only the reactor core isolation coolant system, the high-pressure core injection system, the instrument air system, the service water system, and the plant electrical system. The next phase is scheduled to expand the code to include all other systems. This paper summarizes the prototype code and the design concept of the complete expert system code for maintenance management of all plant systems and components.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
NASA Technical Reports Server (NTRS)
Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.
1996-01-01
The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code for high-speed satellite communications is proposed. The RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC, particularly focusing on the design methodology, is addressed.
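The Reed-Muller construction can be illustrated on a much smaller member of the family. The sketch below (an illustration only, not the (64, 40, 8) subcode itself) builds the first-order RM(1,3) code, an (8, 4, 4) code, from the constant function and the three coordinate functions, then verifies its minimum distance by exhaustive enumeration:

```python
from itertools import product

def rm_1_3_generator():
    """Generator matrix of the first-order Reed-Muller code RM(1,3),
    an (8, 4, 4) code: the all-ones row plus the three coordinate
    functions evaluated over all 8 points of GF(2)^3."""
    points = list(product([0, 1], repeat=3))
    rows = [[1] * 8]                      # constant function
    for i in range(3):                    # coordinate functions x1, x2, x3
        rows.append([p[i] for p in points])
    return rows

def encode(msg, G):
    """Encode a message as a GF(2) linear combination of rows of G."""
    n = len(G[0])
    return [sum(m * g[j] for m, g in zip(msg, G)) % 2 for j in range(n)]

G = rm_1_3_generator()
weights = [sum(encode(list(m), G)) for m in product([0, 1], repeat=4)]
min_dist = min(w for w in weights if w > 0)
print(min_dist)  # minimum distance of RM(1,3)
```

The same enumeration idea scales (in principle) to checking the claimed distance 8 of the (64, 40, 8) subcode, though in practice one relies on the algebraic structure rather than brute force.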
Nurses' attitudes toward the use of the bar-coding medication administration system.
Marini, Sana Daya; Hasman, Arie; Huijer, Huda Abu-Saad; Dimassi, Hani
2010-01-01
This study determines nurses' attitudes toward bar-coding medication administration system use. Some of the factors underlying the successful use of bar-coding medication administration systems, viewed as a connotative indicator of users' attitudes, were used to gather data describing the attitudinal basis for system adoption and use decisions in terms of subjective satisfaction. Only 67 nurses in the United States responded to the e-questionnaire posted on the CARING list server during June and July 2007. Participants rated their satisfaction with bar-coding medication administration system use based on system functionality, usability, and its positive/negative impact on nursing practice. Results showed, to some extent, a positive attitude, but the image profile draws attention to nurses' concerns about improving certain system characteristics. Nurses with high bar-coding medication administration system skills perceived the system more negatively. The reasons underlying skillful users' dissatisfaction with bar-coding medication administration use are an important source of knowledge that can inform both system development and system deployment. As a result, a congenial setting for positive attitudes toward system use, and in turn successful bar-coding medication administration system use, can be created by strengthening system usability (magnifying its ability to eliminate medication errors and their contributing factors), maximizing system functionality (ascertaining its power as an extra eye in the medication administration process), and impacting clinical nursing practice positively (being helpful to nurses, speeding up the medication administration process, and being user-friendly).
Variation in clinical coding lists in UK general practice: a barrier to consistent data entry?
Tai, Tracy Waize; Anandarajah, Sobanna; Dhoul, Neil; de Lusignan, Simon
2007-01-01
Routinely collected general practice computer data are used for quality improvement; poor data quality including inconsistent coding can reduce their usefulness. To document the diversity of data entry systems currently in use in UK general practice and highlight possible implications for data quality. General practice volunteers provided screen shots of the clinical coding screen they would use to code a diagnosis or problem title in the clinical consultation. The six clinical conditions examined were: depression, cystitis, type 2 diabetes mellitus, sore throat, tired all the time, and myocardial infarction. We looked at the picking lists generated for these problem titles in EMIS, IPS, GPASS and iSOFT general practice clinical computer systems, using the Triset browser as a gold standard for comparison. A mean of 19.3 codes is offered in the picking list after entering a diagnosis or problem title. EMIS produced the longest picking lists and GPASS the shortest, with a mean number of choices of 35.2 and 12.7, respectively. Approximately three-quarters (73.5%) of codes are diagnoses, one-eighth (12.5%) symptom codes, and the remainder come from a range of Read chapters. There was no readily detectable consistent order in which codes were displayed. Velocity coding, whereby commonly-used codes are placed higher in the picking list, results in variation between practices even where they have the same brand of computer system. Current systems for clinical coding promote diversity rather than consistency of clinical coding. As the UK moves towards an integrated health IT system consistency of coding will become more important. A standardised, limited list of codes for primary care might help address this need.
Code of Federal Regulations, 2011 CFR
2011-10-01
Determining North American Industry Classification System (NAICS) codes and size standards. 48 CFR 219.303, Federal Acquisition Regulations System (2011-10-01 edition).
Code of Federal Regulations, 2012 CFR
2012-10-01
Determining North American Industry Classification System (NAICS) codes and size standards. 48 CFR 219.303, Federal Acquisition Regulations System (2012-10-01 edition).
Code of Federal Regulations, 2014 CFR
2014-10-01
Determining North American Industry Classification System (NAICS) codes and size standards. 48 CFR 219.303, Federal Acquisition Regulations System (2014-10-01 edition).
Code of Federal Regulations, 2013 CFR
2013-10-01
Determining North American Industry Classification System (NAICS) codes and size standards. 48 CFR 219.303, Federal Acquisition Regulations System (2013-10-01 edition).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
Hospital payment systems; hospital medical care delivery systems; provider billing and accounting systems; APC groups; Current Procedural Terminology codes; Healthcare Common Procedure Coding System (HCPCS) codes; and the use of, and payment for, drugs, medical devices, and other services in the outpatient setting.
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
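Of the adaptive predictive coders listed above, adaptive delta modulation is the simplest to sketch. The toy encoder/decoder pair below uses a hypothetical step-adaptation rule (double the step on repeated bits to follow steep slopes, halve it when bits alternate to reduce granular noise); real ADM designs vary in their adaptation logic:

```python
def adm_encode(samples, step0=1.0):
    """Adaptive delta modulation: one bit per sample, with the step
    size grown on consecutive equal bits (slope overload) and shrunk
    when bits alternate (granular noise near a flat signal)."""
    est, step, bits = 0.0, step0, []
    for s in samples:
        bit = 1 if s >= est else 0
        if bits and bit == bits[-1]:
            step *= 2.0                            # track a steep slope
        else:
            step = max(step / 2.0, step0 / 8)      # settle near the signal
        est += step if bit else -step
        bits.append(bit)
    return bits

def adm_decode(bits, step0=1.0):
    """Mirror the encoder's state updates to rebuild its estimate track."""
    est, step, out = 0.0, step0, []
    for i, bit in enumerate(bits):
        if i > 0 and bit == bits[i - 1]:
            step *= 2.0
        else:
            step = max(step / 2.0, step0 / 8)
        est += step if bit else -step
        out.append(est)
    return out

signal = [0.0, 1.0, 2.5, 4.0, 4.2, 4.0, 3.0, 1.0]
bits = adm_encode(signal)
recon = adm_decode(bits)
```

Because the decoder replays exactly the encoder's adaptation rule, the reconstruction is fully determined by the bit stream, which is what makes the scheme a one-bit-per-sample code.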
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
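For concreteness, the restricted rate-1/n class mentioned above can be sketched with the classic constraint-length-3, rate-1/2 code with generators (7, 5) in octal; this is an illustrative textbook choice, not a code taken from the article:

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/n feedforward convolutional encoder. Each generator is a
    binary mask over the shift register; (7,5) octal gives the classic
    constraint-length-3, rate-1/2 code."""
    K = max(g.bit_length() for g in gens)   # constraint length
    state = 0                               # shift register, current bit in LSB
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            # output bit = parity of the masked register taps
            out.append(bin(state & g).count("1") % 2)
    return out

# Impulse response of the (7,5) code: 11 10 11
print(conv_encode([1, 0, 0]))
```

The (n, k) generalization replaces the single shift register with k parallel registers and an n-row polynomial generator matrix, which is exactly where the Forney/Smith machinery described in the article becomes necessary.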
System for loading executable code into volatile memory in a downhole tool
Hall, David R.; Bartholomew, David B.; Johnson, Monte L.
2007-09-25
A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.
Ultra Safe And Secure Blasting System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M
2009-07-27
The Ultra is a blasting system that is designed for special applications where the risk and consequences of unauthorized demolition or blasting are so great that the use of an extraordinarily safe and secure blasting system is justified. Such a blasting system would be connected and logically welded together through digital code-linking as part of the blasting system set-up and initialization process. The Ultra's security is so robust that it will defeat the people who designed and built the components in any attempt at unauthorized detonation. Anyone attempting to gain unauthorized control of the system by substituting components or tapping into communications lines will be thwarted by their inability to provide encrypted authentication. Authentication occurs through the use of codes that are generated by the system during initialization code-linking, and the codes remain unknown to anyone, including the authorized operator. Once code-linked, a closed system has been created. The system requires all components to be connected as they were during initialization, as well as a unique code entered by the operator, for function and blasting.
The design of the CMOS wireless bar code scanner applying optical system based on ZigBee
NASA Astrophysics Data System (ADS)
Chen, Yuelin; Peng, Jian
2008-03-01
The traditional bar code scanner is constrained by the length of its data cable, while the maximum range of the wireless bar code scanners generally available on the market is between 30 m and 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee was designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or wrong results can arise from image contamination, interference, poor imaging conditions, signal noise, and unstable system voltage; we therefore apply matrix evaluation and the Reed-Solomon algorithm to correct these errors. To construct the whole wireless optical bar code system and ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the circuit diagram of the wireless transmitting/receiving CC2430 module is established. By porting the embedded Linux operating system to the MCU, a practical multi-task wireless CMOS optical bar code scanner is constructed. Finally, communication performance is tested with the SmartRF evaluation software. In open space, every ZigBee node achieves reliable transmission over 50 m; by adding more ZigBee nodes, the transmission distance can be extended to several thousand meters.
A survey to identify the clinical coding and classification systems currently in use across Europe.
de Lusignan, S; Minmagh, C; Kennedy, J; Zeimet, M; Bommezijn, H; Bryant, J
2001-01-01
This is a survey to identify the clinical coding systems currently in use across the European Union and the states seeking membership of it. We sought to identify which systems are currently used and to what extent they are subject to local adaptation. Clinical coding should facilitate identifying key medical events in a computerised medical record and aggregating information across groups of records. The emerging new driver is its role as the enabler of the life-long computerised medical record. A prerequisite for this level of functionality is the transfer of information between different computer systems. This transfer can be facilitated either by working on the interoperability problems between disparate systems or by harmonising the underlying data. This paper examines the extent to which the latter has occurred across Europe. Information was gathered by literature and Internet search and by requests for information via electronic mail to pan-European mailing lists of health informatics professionals. Coding systems are now a de facto part of health information systems across Europe, and relatively few coding systems are in use: ICD-9 and ICD-10, ICPC and Read were the most established. However, the local adaptation of these classification systems, whether by country or by computer software manufacturer, significantly reduces the ability for the meaning coded within patients' computer records to be easily transferred from one medical record system to another. There is no longer any debate as to whether a coding or classification system should be used. Convergence of different classification systems should be encouraged. Countries and computer manufacturers within the EU should be encouraged to stop making local modifications to coding and classification systems, as this practice risks significantly slowing progress towards easy transfer of records between computer systems.
Practical guide to bar coding for patient medication safety.
Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara
2003-04-15
Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for patient identification bands, caregiver badges, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but also to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
System Design for FEC in Aeronautical Telemetry
2012-03-12
Rate-punctured convolutional codes with soft-decision Viterbi decoding are employed; the notation follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code, beginning with the buffer c1. The system uses a serially concatenated convolutional code (SCCC). The contributions of this paper are on the system-design level; one major contribution is the design of an SCCC code.
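One common way to realize a rate-2/3 code by puncturing a rate-1/2 mother code is a 2x2 puncturing matrix applied to the coded stream; the pattern ((1,1),(1,0)) below is an assumed illustrative choice, not necessarily the one used in the report:

```python
def puncture(coded, pattern=((1, 1), (1, 0))):
    """Delete coded bits of a rate-1/2 stream according to a puncturing
    matrix. Row = encoder output branch, column = input step modulo the
    pattern period. The pattern ((1,1),(1,0)) keeps 3 of every 4 coded
    bits, turning rate 1/2 into rate 2/3."""
    period = len(pattern[0])
    kept = []
    for t in range(len(coded) // 2):        # one coded pair per input bit
        for branch in range(2):
            if pattern[branch][t % period]:
                kept.append(coded[2 * t + branch])
    return kept

# 4 input bits -> 8 rate-1/2 coded bits -> 6 transmitted bits (rate 2/3)
coded = [1, 1, 0, 1, 1, 0, 0, 0]
print(len(puncture(coded)))  # 6
```

At the receiver, the deleted positions are re-inserted as erasures (zero-confidence soft values) before Viterbi decoding, so the decoder structure of the mother code is reused unchanged.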
1985-11-01
Applicable codes: ASME Boiler and Pressure Vessel Code; HEI (Heat Exchanger Institute) standards; ANSI B31.1 Power Piping Code. System descriptions cover the condenser (with heat and material balance) and the deaerator, a direct-contact feedwater heater, including vent and drain piping.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
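The detection step underlying such burst-error coding can be sketched with a much smaller Fire-style code (an illustration only, not the UNICON (80,64) or long outer code): a systematic cyclic codeword built from g(x) = (x^5 + 1)(x^3 + x + 1) has zero syndrome, while a short error burst leaves a nonzero one that a full decoder would use to locate and correct the burst:

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication, ints as coefficient masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division of a by b."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

# Fire-style generator g(x) = (x^5 + 1)(x^3 + x + 1), degree 8
g = gf2_mul(0b100001, 0b1011)

def encode(msg):
    """Systematic cyclic encoding: shift the message up and append
    the 8 parity bits so the codeword is divisible by g(x)."""
    shifted = msg << 8
    return shifted ^ gf2_mod(shifted, g)

c = encode(0b1011)
burst = 0b111 << 4            # an error burst of length 3
print(gf2_mod(c, g), gf2_mod(c ^ burst, g))
```

Because the code is linear, the syndrome of the corrupted word depends only on the burst, which is what lets a Fire decoder trap and correct it.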
Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer codes
NASA Technical Reports Server (NTRS)
Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.
1989-01-01
The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions: baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
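A heavily simplified sketch of the idea (not the paper's mirrored or single-input baker's codes) maps a source value to a block of analog samples by iterating a baker-type stretch-and-fold map, then decodes by brute-force ML search on a noiseless channel; the grid size and block length are illustrative choices:

```python
def encode(x, n=8):
    """Map a source value x in [0,1) to n analog samples by iterating
    a baker-type stretch-and-fold map s -> 2s mod 1. The chaotic
    expansion spreads information about x across the whole block."""
    samples = []
    for _ in range(n):
        samples.append(x)
        x = (2.0 * x) % 1.0
    return samples

def ml_decode(received, grid=4096, n=8):
    """Brute-force maximum-likelihood decoding over a candidate grid:
    pick the source value whose codeword is closest in squared error."""
    best, best_d = 0.0, float("inf")
    for k in range(grid):
        cand = k / grid
        d = sum((r - s) ** 2 for r, s in zip(received, encode(cand, n)))
        if d < best_d:
            best, best_d = cand, d
    return best

x = 1000 / 4096               # a grid point, so noiseless decoding is exact
print(ml_decode(encode(x)))
```

The chaotic sensitivity is what gives the "graceful" behavior: small channel noise perturbs the decoded value slightly (weak distortion), while only large noise can push the decoder onto the wrong fold of the map (fold error).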
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. If the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
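The stop/continue interaction between the decoders can be sketched as a control loop. The inner and outer decoders below are toy stand-ins (hypothetical names and behavior), with a simple equality check standing in for a successful Reed-Solomon decode:

```python
def decode_concatenated(inner_iterate, outer_decode, max_iters=8):
    """Iteration-control skeleton for a serial turbo/RS concatenation:
    run one inner (turbo) iteration at a time, then attempt the outer
    (Reed-Solomon) decode; stop as soon as the outer decode succeeds,
    or give up after max_iters inner iterations."""
    estimate = None
    for it in range(1, max_iters + 1):
        estimate = inner_iterate(it)          # one soft-output inner pass
        ok, decoded = outer_decode(estimate)  # outer decode attempt
        if ok:
            return decoded, it                # early termination
    return None, max_iters

# Toy stand-ins: the inner decoder "converges" at iteration 3.
truth = [1, 0, 1, 1]
inner = lambda it: truth if it >= 3 else [0, 0, 0, 0]
outer = lambda est: (est == truth, est)

decoded, iters = decode_concatenated(inner, outer)
print(decoded, iters)  # [1, 0, 1, 1] 3
```

The early exit is the source of the reduced decoding delay the abstract describes: on most frames the outer decode succeeds after only a few inner iterations, so the preset maximum is rarely reached.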
NASA Astrophysics Data System (ADS)
Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong
2013-05-01
Instrumental Neutron Activation Analysis (INAA) is often used to determine the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 that helps INAA users choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration used the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing the sample concentrations between the code-system and the experiment. The experimental concentration values obtained with the ECC-UKM database code-system showed good accuracy.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is 2.08 dB, 1.25 dB and 0.29 dB higher, respectively, than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code also has lower encoding/decoding complexity than the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code, making it more suitable for the increasing development requirements of high-speed optical transmission systems.
NASA Astrophysics Data System (ADS)
Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.
2010-04-01
An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper and its sequel, new theoretical work is contributed that substantially enhances existing performance analysis formulations. Major contributions include substantial computational complexity reduction, including a priori BER accuracy bounding, and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate BER performance in a variety of scenarios. In this paper, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper and its sequel support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding with minimal computational complexity.
Surface acoustic wave coding for orthogonal frequency coded devices
NASA Technical Reports Server (NTRS)
Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)
2011-01-01
Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay; an algorithm for assigning OFCs to each device; and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
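The matrix-based assignment can be sketched in a toy form (our illustration, not the patented method): each tag receives a distinct permutation, here a cyclic shift, of the chip frequency indices, read out in time order. Real OFC design additionally sets the chip offset delays and analyzes collisions, which this sketch omits.

```python
def assign_ofc_codes(n_tags, n_chips):
    """Toy OFC assignment: row t of the code matrix is a cyclic shift of the
    chip frequency indices 0..n_chips-1, so every tag uses all frequencies
    but in a different time order (one OFC per tag)."""
    base = list(range(n_chips))
    codes = []
    for tag in range(n_tags):
        shift = tag % n_chips
        codes.append(base[shift:] + base[:shift])
    return codes

# Four tags, seven frequency chips per code.
codes = assign_ofc_codes(4, 7)
```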
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... Classification System (NAICS) codes and size standards. (a) The contracting officer shall determine the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-20
... Panel. This expertise encompasses hospital payment systems; hospital medical-care delivery systems; provider billing systems; APC groups, Current Procedural Terminology codes, and alpha-numeric Healthcare Common Procedure Coding System codes; and the use of, and payment for, drugs and medical devices in the...
Variable Coded Modulation software simulation
NASA Astrophysics Data System (ADS)
Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise
This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be chosen dynamically, block to block, to optimize throughput.
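The block-to-block mode choice amounts to picking the highest-throughput code/modulation pair whose SNR requirement is met. A minimal sketch follows; the mode names, spectral efficiencies, and thresholds in the table are illustrative assumptions, not CCSDS values.

```python
# Hypothetical VCM mode table (illustrative numbers only):
# (name, spectral efficiency in bits/symbol x code rate, min Es/N0 in dB)
MODES = [
    ("BPSK, r=1/2",    0.5,  1.0),
    ("QPSK, r=1/2",    1.0,  4.0),
    ("QPSK, r=4/5",    1.6,  6.5),
    ("8-PSK, r=4/5",   2.4, 10.0),
    ("16-APSK, r=4/5", 3.2, 13.0),
]

def pick_mode(snr_db):
    """Block-by-block VCM selection: the highest-throughput mode whose SNR
    threshold the current link estimate satisfies."""
    feasible = [m for m in MODES if snr_db >= m[2]]
    if not feasible:
        return MODES[0][0]  # fall back to the most robust mode
    return max(feasible, key=lambda m: m[1])[0]
```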
Rocketdyne/Westinghouse nuclear thermal rocket engine modeling
NASA Technical Reports Server (NTRS)
Glass, James F.
1993-01-01
The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; Rocketdyne nuclear thermal system code; software capabilities; steady-state model; NTR engine optimizer code logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.
Properties of a certain stochastic dynamical system, channel polarization, and polar codes
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2010-06-01
A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
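The stochastic recursion underlying channel polarization has a closed form for the binary erasure channel: a channel with erasure probability z splits into a degraded synthetic channel with parameter 2z - z^2 and an upgraded one with z^2. A minimal sketch (function names ours) shows the polarization effect numerically:

```python
def polarize(z, n_levels):
    """Evolve Bhattacharyya parameters of a BEC(z) under Arikan's transform:
    each channel splits into a degraded (2z - z^2) and an upgraded (z^2) one,
    giving 2**n_levels synthetic channels after n_levels recursions."""
    zs = [z]
    for _ in range(n_levels):
        zs = [f(x) for x in zs for f in (lambda x: 2*x - x*x, lambda x: x*x)]
    return zs

# Three recursion levels over a BEC(0.5): 8 synthetic channels, some already
# nearly perfect (z ~ 0) and some nearly useless (z ~ 1).
zs = polarize(0.5, 3)
```

A polar code simply transmits information on the synthetic channels with the smallest parameters and freezes the rest, which is why the construction is capacity-achieving with low complexity.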
Ultra-narrow bandwidth voice coding
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-01-09
A system of removing excess information from a human speech signal and coding the remaining signal information, transmitting the coded signal, and reconstructing the coded signal. The system uses one or more EM wave sensors and one or more acoustic microphones to determine at least one characteristic of the human speech signal.
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
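The core idea, spending fewer bits on chrominance than on luminance after a color-space transform, can be sketched as follows. This is our illustration of the principle, not Pratt's system: the transform matrix is a full-range Y'CbCr-style matrix, and the spatial transform stage of the paper is omitted.

```python
import numpy as np

# Illustrative luminance/chrominance split: RGB -> one luminance plane (Y)
# and two chrominance planes (Cb, Cr), which tolerate coarser quantization.
RGB2YCC = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])

def quantize(plane, bits):
    levels = 2 ** bits
    return np.round(plane * (levels - 1)) / (levels - 1)

def code_color(rgb, y_bits=8, c_bits=2):
    """rgb: HxWx3 floats in [0,1]. Chrominance is shifted to [0,1] before
    quantization; the per-element rate here is y_bits + 2*c_bits."""
    ycc = rgb @ RGB2YCC.T
    y = quantize(ycc[..., 0], y_bits)
    cb = quantize(ycc[..., 1] + 0.5, c_bits) - 0.5
    cr = quantize(ycc[..., 2] + 0.5, c_bits) - 0.5
    return y, cb, cr

# A flat gray image: luminance survives 8-bit quantization almost exactly,
# while the 2-bit chrominance planes stay within coarse quantization error.
rgb = np.full((2, 2, 3), 0.5)
y, cb, cr = code_color(rgb)
```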
Bar-Code System for a Microbiological Laboratory
NASA Technical Reports Server (NTRS)
Law, Jennifer; Kirschner, Larry
2007-01-01
A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.
Geographic Information Systems using CODES linked data (Crash outcome data evaluation system)
DOT National Transportation Integrated Search
2001-04-01
This report presents information about geographic information systems (GIS) and CODES linked data. Section one provides an overview of a GIS and the benefits of linking to CODES. Section two outlines the basic issues relative to the types of map data...
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean age = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist across 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than during the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence, at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-sideband unclipped-OFDM system with 64 subcarriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature phase-shift keying (QPSK) and by 23.4 dB for binary phase-shift keying (BPSK).
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
NASA Astrophysics Data System (ADS)
Braun, Walter; Eglin, Peter; Abello, Ricard
1993-02-01
Spread spectrum code division multiplex (CDM) is an attractive scheme for the transmission of multiple signals over a satellite transponder. By using orthogonal or quasi-orthogonal spreading codes, the interference between users can be virtually eliminated. However, acquisition and tracking of the spreading code phase cannot take advantage of the code orthogonality, since sequential acquisition and delay-locked loop tracking depend on correlation with code phases other than the optimal despreading phase. Hence, synchronization is a critical issue in such a system. Demonstration hardware for verifying the orthogonal CDM synchronization and data transmission concept is being designed and implemented. The system concept, the synchronization scheme, and the implementation are described. The performance of the system is discussed based on computer simulations.
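The synchronization problem noted above can be demonstrated numerically: Walsh-Hadamard codes are perfectly orthogonal at zero code-phase offset, but a one-chip misalignment destroys that property. A short sketch (the Sylvester construction is standard; the worked example is ours):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)

# Perfect despreading: distinct rows are orthogonal at zero code-phase offset,
# so the normalized correlation matrix is the identity.
zero_lag = H @ H.T / 8.0

# But with a one-chip misalignment, orthogonality is gone: here the cyclic
# shift of code 3 coincides exactly with code 2, a worst-case collision.
shifted = np.roll(H[3], 1)
offset_corr = float(H[2] @ shifted) / 8.0
```

This is why acquisition and tracking, which deliberately correlate at offset code phases, cannot lean on orthogonality and need a dedicated synchronization scheme.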
Los Alamos radiation transport code system on desktop computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.
48 CFR 1.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Publication and code arrangement. 1.105-1 Section 1.105-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Purpose, Authority, Issuance 1.105-1 Publication and code...
48 CFR 901.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Publication and code arrangement. 901.105-1 Section 901.105-1 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Purpose, Authority, Issuance 901.105-1 Publication and code...
48 CFR 1.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Publication and code arrangement. 1.105-1 Section 1.105-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Purpose, Authority, Issuance 1.105-1 Publication and code...
48 CFR 901.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Publication and code arrangement. 901.105-1 Section 901.105-1 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Purpose, Authority, Issuance 901.105-1 Publication and code...
48 CFR 1.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Publication and code arrangement. 1.105-1 Section 1.105-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Purpose, Authority, Issuance 1.105-1 Publication and code...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the SRS to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements to interface with DMS messages and data transfers relating to the BWAS operations.
2012-01-01
Background Procedures documented by general practitioners in primary care have not been studied in relation to procedure coding systems. We aimed to describe procedures documented by Swedish general practitioners in electronic patient records and to compare them to the Swedish Classification of Health Interventions (KVÅ) and SNOMED CT. Methods Procedures in 200 record entries were identified, coded, assessed in relation to two procedure coding systems and analysed. Results 417 procedures found in the 200 electronic patient record entries were coded with 36 different Classification of Health Interventions categories and 148 different SNOMED CT concepts. 22.8% of the procedures could not be coded with any Classification of Health Interventions category and 4.3% could not be coded with any SNOMED CT concept. 206 procedure-concept/category pairs were assessed as a complete match in SNOMED CT compared to 10 in the Classification of Health Interventions. Conclusions Procedures documented by general practitioners were present in nearly all electronic patient record entries. Almost all procedures could be coded using SNOMED CT. Classification of Health Interventions covered the procedures to a lesser extent and with a much lower degree of concordance. SNOMED CT is a more flexible terminology system that can be used for different purposes for procedure coding in primary care. PMID:22230095
Child Injury Deaths: Comparing Prevention Information from Two Coding Systems
Schnitzer, Patricia G.; Ewigman, Bernard G.
2006-01-01
Objectives: The International Classification of Disease (ICD) external cause of injury E-codes do not sufficiently identify injury circumstances amenable to prevention. The researchers developed an alternative classification system (B-codes) that incorporates behavioral and environmental factors for use in childhood injury research; the two coding systems are compared in this paper. Methods: All fatal injuries among children less than age five that occurred between January 1, 1992, and December 31, 1994, were classified using both B-codes and E-codes. Results: E-codes identified the most common causes of injury death: homicide (24%), fires (21%), motor vehicle incidents (21%), drowning (10%), and suffocation (9%). The B-codes further revealed that homicides (51%) resulted from the child being shaken or struck by another person; many fire deaths (42%) resulted from children playing with matches or lighters; drownings (46%) usually occurred in natural bodies of water; and most suffocation deaths (68%) occurred in unsafe sleeping arrangements. Conclusions: B-codes identify additional information with specific relevance for prevention of childhood injuries. PMID:15944169
The Social Interactive Coding System (SICS): An On-Line, Clinically Relevant Descriptive Tool.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
The Social Interactive Coding System (SICS) assesses the continuous verbal interactions of preschool children as a function of play areas, addressees, script codes, and play levels. This paper describes the 26 subjects and the setting involved in SICS development, coding definitions and procedures, training procedures, reliability, sample…
The design of wavefront coded imaging system
NASA Astrophysics Data System (ADS)
Lan, Shun; Cen, Zhaofeng; Li, Xiaotong
2016-10-01
Wavefront coding is a new method to extend the depth of field that combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. As a result, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is nearly independent of focus, remaining almost constant with misfocus and having no regions of zeros, so all object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to the conventional optical system, the wavefront coded imaging system obtains better-quality images under different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm; these are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
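The defocus insensitivity can be checked in a toy 1-D pupil model (our sketch, not the paper's ZEMAX design): a cubic phase term exp(j*alpha*u^3) is added to a defocus term exp(j*psi*u^2), and the PSF is the squared magnitude of the pupil's Fourier transform. All parameter values are illustrative.

```python
import numpy as np

def psf(alpha, psi, n=256):
    """Normalized 1-D PSF of a pupil with cubic phase (alpha) and defocus (psi)."""
    u = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (alpha * u**3 + psi * u**2))
    p = np.abs(np.fft.fft(pupil, 4 * n)) ** 2  # zero-padded FFT; |.|^2 is shift-invariant
    return p / p.sum()

def focus_sensitivity(alpha, psis=(0.0, 3.0, 6.0)):
    """Worst-case RMS change of the PSF across defocus values: small => focus-invariant."""
    ps = [psf(alpha, psi) for psi in psis]
    return max(np.sqrt(np.mean((p - ps[0]) ** 2)) for p in ps[1:])

plain = focus_sensitivity(alpha=0.0)    # conventional system: PSF changes with defocus
coded = focus_sensitivity(alpha=30.0)   # cubic mask: PSF varies far less
```

The coded PSF is broader but nearly the same at every defocus, which is exactly what makes a single digital restoration filter work over the extended depth of field.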
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing worst-case two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code achieves more than 1 dB gain over the conventional four-ary modulation code.
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers, and to evaluate the method quantitatively. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching structured output generated by MedLEE, consisting of findings and modifiers, to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE; results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88); recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
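The recall/precision figures above are set comparisons between system-assigned codes and an expert reference standard. A minimal sketch of that computation (our illustration; the code identifiers in the toy example are hypothetical UMLS-style strings):

```python
def recall_precision(system_codes, reference_codes):
    """Micro-averaged recall and precision of system-assigned code sets
    against a manually determined reference standard (one set per sentence)."""
    tp = sum(len(s & r) for s, r in zip(system_codes, reference_codes))
    fn = sum(len(r - s) for s, r in zip(system_codes, reference_codes))
    fp = sum(len(s - r) for s, r in zip(system_codes, reference_codes))
    return tp / (tp + fn), tp / (tp + fp)

# Toy example: two sentences, hypothetical code sets.
sys_out = [{"C0011849", "C0020538"}, {"C0027051"}]
ref_std = [{"C0011849"}, {"C0027051", "C0018799"}]
r, p = recall_precision(sys_out, ref_std)
```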
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on a subgroup of the finite-field multiplicative group. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3 780, 3 540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5 334, 4 962) code constructed by the method based on the inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3 969, 3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640, 30 592) code in ITU-T G.975.1, and the classic RS(255, 239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10^-7. The constructed QC-LDPC(3 780, 3 540) code is therefore more suitable for optical transmission systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Robert Cameron; Steiner, Don
2004-06-15
The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the Quickest third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code with analytical solutions shows that the code with the Quickest method is better than 99.9% accurate. The benchmarking of the EPQ code system and QTTN against experiments showed that QTTN's erosion tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints.
QTTN and EPQ are thus verified and validated for calculating the temperature distribution, phase change, and surface erosion.
48 CFR 19.303 - Determining North American Industry Classification System codes and size standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Industry Classification System codes and size standards. 19.303 Section 19.303 Federal Acquisition... of Small Business Status for Small Business Programs 19.303 Determining North American Industry... North American Industry Classification System (NAICS) code and related small business size standard and...
48 CFR 1601.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Publication and code arrangement. 1601.104-1 Section 1601.104-1 Federal Acquisition Regulations System OFFICE OF PERSONNEL... SYSTEM Purpose, Authority, Issuance 1601.104-1 Publication and code arrangement. (a) The FEHBAR and its...
48 CFR 2101.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Publication and code arrangement. 2101.104-1 Section 2101.104-1 Federal Acquisition Regulations System OFFICE OF PERSONNEL... REGULATIONS SYSTEM Purpose, Authority, Issuance 2101.104-1 Publication and code arrangement. (a) The LIFAR and...
Decomposition of the optical transfer function: wavefront coding imaging systems
NASA Astrophysics Data System (ADS)
Muyo, Gonzalo; Harvey, Andy R.
2005-10-01
We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
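Girth-4 cycles in a QC-LDPC code can be detected directly on the exponent matrix: for circulant size L, a length-4 cycle exists iff some row pair (i1, i2) and column pair (j1, j2) satisfy E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] ≡ 0 (mod L). This standard condition (not the authors' specific joint-design procedure) can be checked as follows; the small example matrix is our own.

```python
from itertools import combinations

def has_girth4(E, L):
    """Return True if the QC-LDPC exponent matrix E (circulant size L)
    contains a length-4 cycle, per the standard alternating-sum condition."""
    m, n = len(E), len(E[0])
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % L == 0:
                return True
    return False

# A small exponent matrix E[i][j] = i*j mod 7 (rows i = 1, 2) is girth-4-free,
# whereas an all-zero exponent matrix trivially contains length-4 cycles.
E = [[(i * j) % 7 for j in range(3)] for i in range(1, 3)]
```

Joint design, as in the abstract, applies this kind of check to the combined source-relay parity-check matrix rather than to each code in isolation.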
Wiebe, Jens; Franke, Jennifer; Lubos, Edith; Boekstegers, Peter; Schillinger, Wolfgang; Ouarrak, Taoufik; May, Andreas E; Eggebrecht, Holger; Kuck, Karl-Heinz; Baldus, Stephan; Senges, Jochen; Sievert, Horst
2014-10-01
To evaluate in-hospital and short-term outcomes of percutaneous mitral valve repair according to patients' logistic EuroSCORE (logEuroSCORE) in a multicenter registry. The logEuroSCORE is an established tool for predicting the risk of mortality during cardiac surgery. In high-risk patients, percutaneous mitral valve repair with the MitraClip system represents a less invasive alternative. Data from 1002 patients who underwent percutaneous mitral valve repair with the MitraClip system were analyzed in the German Transcatheter Mitral Valve Interventions (TRAMI) Registry. A logEuroSCORE (mortality risk in %) ≥ 20 was considered high risk. Of all patients, 557 (55.6%) had a logEuroSCORE ≥ 20. Implantation of the MitraClip was successful in 95.5% (942/986) of patients. Moderate residual mitral valve regurgitation was detected more often in patients with a logEuroSCORE ≥ 20 (23.8% vs. 17.1%, P < 0.05). In patients with a logEuroSCORE ≥ 20, the procedural complication rate was 8.9% (vs. 6.4%, n.s.) and the in-hospital MACCE rate 4.9% (vs. 1.4%, P < 0.01). The in-hospital mortality rate in patients with a logEuroSCORE ≥ 20 and < 20 was 4.3% and 1.1%, respectively (P ≤ 0.01). CONCLUSION: Percutaneous mitral valve repair with the MitraClip system is feasible in patients with a logEuroSCORE ≥ 20, with procedural results similar to those of patients with lower predicted risk. Although mortality was four times higher than in patients with a logEuroSCORE < 20, mortality in high-risk patients was lower than predicted. In those with a logEuroSCORE ≥ 20, moderate residual mitral valve regurgitation was more frequent. © 2014 Wiley Periodicals, Inc.
VeryVote: A Voter Verifiable Code Voting System
NASA Astrophysics Data System (ADS)
Joaquim, Rui; Ribeiro, Carlos; Ferreira, Paulo
Code voting is a technique used to address the secure-platform problem of remote voting. A code voting system consists of secretly sending voters (e.g., by mail) code sheets that map their choices to entry codes on their ballot. While voting, the voter uses the code sheet to know what code to enter in order to vote for a particular candidate. In effect, the voter performs the vote encryption and, since no malicious software on the PC has access to the code sheet, it is not able to change the voter's intention. However, without compromising the voter's privacy, the vote codes alone are not enough to prove that the vote is recorded and counted as cast by the election server.
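The code-sheet mechanism can be sketched in a few lines (illustrative Python; the candidate names and code length are hypothetical, and a real system would print and mail the sheets rather than hold both maps in one process):

```python
import secrets

def make_code_sheet(candidates, code_len=6):
    """Generate one voter's code sheet: a random, per-voter mapping from
    candidate name to a short numeric vote code (no two codes collide)."""
    codes = set()
    sheet = {}
    for cand in candidates:
        while True:
            code = "".join(secrets.choice("0123456789") for _ in range(code_len))
            if code not in codes:
                codes.add(code)
                sheet[cand] = code
                break
    return sheet

sheet = make_code_sheet(["Alice", "Bob", "Carol"])
# The election server keeps the inverse map; the voter's PC only ever
# sees the opaque code, so it cannot redirect the vote.
inverse = {code: cand for cand, code in sheet.items()}
choice = sheet["Bob"]      # what the voter types from the paper sheet
print(inverse[choice])     # server recovers the intended candidate
```

Because each sheet is random and per-voter, an intercepted code reveals nothing about the chosen candidate without the matching sheet.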
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correspond to a payment bracket. The aim of this study was to determine whether the procedure codes allocated to major head and neck operations were correct and reflective of the work undertaken. The HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised coding generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to a £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals fairly and consistently for the work carried out within the NHS. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Numerical predictions of EML (electromagnetic launcher) system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.
1987-01-01
The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and their coupling to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are presented for the LTS prototype tests and compared with experimental data.
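The launcher performance such codes predict ultimately rests on the railgun force law F = ½ L′ I², with L′ the inductance gradient that a field code like FLD supplies. A toy integration (this is not LARGE; the drive current, L′, and timing are illustrative round numbers, not LTS design values) shows how muzzle velocity follows from L′, current, and projectile mass:

```python
def railgun_profile(mass, L_prime, current, dt=1e-6, t_end=2e-4):
    """Integrate projectile motion under the Lorentz force F = 0.5*L'*I(t)^2.
    mass in kg, L_prime (inductance gradient) in H/m, current I(t) in A.
    Simple explicit Euler stepping; returns (velocity m/s, position m)."""
    v = x = t = 0.0
    while t < t_end:
        F = 0.5 * L_prime * current(t) ** 2
        v += F / mass * dt
        x += v * dt
        t += dt
    return v, x

# Hypothetical numbers for illustration: 30-g projectile, L' = 0.5 uH/m,
# constant 2 MA drive current for 0.2 ms.
v, x = railgun_profile(0.030, 0.5e-6, lambda t: 2.0e6)
print(f"muzzle velocity {v/1000:.1f} km/s after {x:.2f} m of rail")
```

A real system code adds the structural and thermal limits the abstract mentions, which is why barrels cannot simply be driven at arbitrary current.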
Blackwell, C.D.
1988-01-01
Codes for the unique identification of public and private organizations listed in computerized data systems are presented. These codes are used by the U.S. Geological Survey's National Water Data Exchange (NAWDEX), National Water Data Storage and Retrieval System (WATSTORE), National Cartographic Information Center (NCIC), and Office of Water Data Coordination (OWDC). The format structure of the codes is discussed and instructions are given for requesting new books. (Author's abstract)
CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems
2018-04-19
AFRL-AFOSR-JP-TR-2018-0035. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems. Sandeep Shukla, Indian Institute of Technology Kanpur, India. Final report for AOARD Grant FA2386-16-1-4099.
Combined trellis coding with asymmetric MPSK modulation: An MSAT-X report
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1985-01-01
Traditionally, symmetric multiple phase-shift-keyed (MPSK) signal constellations, i.e., those with uniformly spaced signal points around the circle, have been used for both uncoded and coded systems. Although symmetric MPSK signal constellations are optimum for systems with no coding, the same is not necessarily true for coded systems. It is shown that by designing the signal constellations to be asymmetric, one can, in many instances, obtain a significant performance improvement over the traditional symmetric MPSK constellations combined with trellis coding. The joint design of rate n/(n+1) trellis codes and asymmetric 2^(n+1)-point MPSK is considered, which has unity bandwidth expansion relative to uncoded 2^n-point symmetric MPSK. The asymptotic performance gains due to coding and asymmetry are evaluated in terms of the minimum free Euclidean distance d_free of the trellis code. A comparison of the maximum value of this performance measure with the minimum distance d_min of the uncoded system indicates the maximum reduction in required E_b/N_0 that can be achieved for arbitrarily small system bit-error rates. It is to be emphasized that the introduction of asymmetry into the signal set does not affect the bandwidth or power requirements of the system; hence, the above-mentioned improvements in performance come at little or no cost. Asymmetric MPSK signal sets in coded systems appear in the work of Divsalar.
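The abstract's point that symmetric constellations are optimum only in the uncoded case is easy to verify for the uncoded half: perturbing an MPSK constellation away from uniform spacing can only shrink its minimum Euclidean distance (sketch below; the 0.15 rad perturbation is an arbitrary illustrative choice). The gains from asymmetry arise only jointly with the trellis code's set partitioning, which this sketch does not model:

```python
import numpy as np

def min_distance(angles):
    """Minimum pairwise Euclidean distance of unit-circle points
    placed at the given angles (radians)."""
    pts = np.exp(1j * np.asarray(angles))
    return min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

M = 8
sym = [2 * np.pi * k / M for k in range(M)]          # uniform 8-PSK
phi = 0.15                                           # asymmetry angle
asym = [a + (phi if k % 2 else 0.0) for k, a in enumerate(sym)]

# Uncoded: symmetric wins (d_min = 2*sin(pi/M)); asymmetry shrinks d_min.
print(min_distance(sym), min_distance(asym))
```

For the coded system, the relevant metric is instead the free distance accumulated along trellis error events, which is where asymmetry can help.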
Advanced Imaging Optics Utilizing Wavefront Coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront-coded system.
Exploring Type and Amount of Parent Talk during Individualized Family Service Plan Meetings
ERIC Educational Resources Information Center
Ridgley, Robyn; Snyder, Patricia; McWilliam, R. A.
2014-01-01
We discuss the utility of a coding system designed to evaluate the amount and type of parent talk during individualized family service plan (IFSP) meetings. The iterative processes used to develop the "Parent Communication Coding System" (PCCS) and its associated codes are described. In addition, we explored whether PCCS codes could be…
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Scullin, V. J.
1984-01-01
A general chemical kinetics code is described for complex, homogeneous ideal-gas reactions in any chemical system. The main features of the GCKP84 code are flexibility, convenience, and speed of computation for many different reaction conditions. The code, which replaces the previously published GCKP code, numerically solves the differential equations for complex reactions in a batch system or in one-dimensional inviscid flow. It also numerically solves the nonlinear algebraic equations describing the well-stirred reactor. A new state-of-the-art numerical integration method is used for greatly increased speed in handling systems of stiff differential equations. The theory and the computer program, including details of input preparation and a guide to using the code, are given.
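The stiffness issue that motivates GCKP84's integrator can be seen in a minimal example (this is a generic linear model problem, not the GCKP84 method or a real kinetics mechanism): an explicit scheme diverges at step sizes far above its stability limit 2/K, while an implicit scheme tracks the slow solution:

```python
import math

K = 1000.0                       # fast rate constant -> stiff problem
def g(t): return math.cos(t)     # slowly varying forcing, y' = -K*(y - g)

def explicit_euler(y, h, steps):
    """Forward Euler: unstable once h > 2/K."""
    t = 0.0
    for _ in range(steps):
        y += h * (-K * (y - g(t)))
        t += h
    return y

def implicit_euler(y, h, steps):
    """Backward Euler, solved in closed form for this linear problem:
    y_{n+1} = (y_n + h*K*g(t_{n+1})) / (1 + h*K). Stable for any h."""
    t = 0.0
    for _ in range(steps):
        t += h
        y = (y + h * K * g(t)) / (1.0 + h * K)
    return y

h, steps = 0.01, 100             # h is 5x the explicit stability limit 2/K
print(explicit_euler(1.0, h, steps))   # magnitude explodes
print(implicit_euler(1.0, h, steps))   # stays close to cos(1) ~ 0.540
```

Production kinetics solvers use implicit multistep (BDF-type) methods with Newton iteration for the same reason, since reaction rate constants span many orders of magnitude.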
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced, and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed that restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
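Puncturing, as discussed in the report, simply deletes coded bits according to a periodic 0/1 pattern to raise the code rate. A minimal sketch (the bit streams and pattern below are illustrative, not the report's codes): a rate-1/3 turbo output (systematic plus two parity streams) is punctured to rate 1/2 by keeping all systematic bits and alternating the two parity streams.

```python
def puncture(streams, pattern):
    """Puncture parallel coded streams with a 0/1 pattern: one pattern row
    per stream, applied periodically along the time (column) axis.
    Returns the surviving bits in column-major transmission order."""
    out = []
    for t, bits in enumerate(zip(*streams)):
        for row, b in enumerate(bits):
            if pattern[row][t % len(pattern[row])]:
                out.append(b)
    return out

# Rate-1/3 turbo encoder output: systematic + two parity streams.
sys_ = [1, 0, 1, 1, 0, 1]
par1 = [0, 1, 1, 0, 0, 1]
par2 = [1, 1, 0, 1, 0, 0]
# Keep every systematic bit, alternate the parities -> 12 sent bits
# for 6 info bits, i.e. rate 1/2.
pattern = [[1, 1], [1, 0], [0, 1]]
tx = puncture([sys_, par1, par2], pattern)
print(len(tx), "bits sent for", len(sys_), "info bits")
```

The decoder re-inserts neutral values (zero log-likelihoods) at the punctured positions before iterative decoding, which is why pattern choice interacts with the output weight distribution.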
Maclean, Donald; Younes, Hakim Ben; Forrest, Margaret; Towers, Hazel K
2012-03-01
Accurate and timely clinical data are required for clinical and organisational purposes and are especially important for patient management, audit of surgical performance and the electronic health record. The recent introduction of computerised theatre management systems has enabled real-time (point-of-care) operative procedure coding by clinical staff. However, the accuracy of these data is unknown. The aim of this Scottish study was to compare the accuracy of theatre nurses' real-time coding on the local theatre management system with the central Scottish Morbidity Record (SMR01). Paired procedural codes were recorded, qualitatively graded for precision and compared (n = 1038). In this study, real-time, point-of-care coding by theatre nurses resulted in significant coding errors compared with the central SMR01 database. Improved collaboration between full-time coders and clinical staff, supported by computerised decision support systems, is suggested.
Min, Mun Ki; Ryu, Ji Ho; Kim, Yong In; Park, Maeng Real; Park, Yong Myeon; Park, Sung Wook; Yeom, Seok Ran; Han, Sang Kyoon; Kim, Yang Weon
2014-11-01
In an attempt to shorten the time to ST-segment elevation myocardial infarction (STEMI) treatment (the door-to-balloon [DTB] time) by minimizing preventable delays in electrocardiogram (ECG) interpretation, cardiac catheterization laboratory (CCL) activation was changed from activation by the emergency physician (code heart I) to activation by a single page if the ECG is interpreted as STEMI by the ECG machine (ECG machine auto-interpretation) (code heart II). We sought to determine the impact of ECG machine auto-interpretation on CCL activation. The study period was from June 2010 to May 2012 (June to November 2011, code heart I; December 2011 to May 2012, code heart II). All patients aged 18 years or older who were diagnosed with STEMI were evaluated for enrollment; patients who experienced the code heart system were also included. DTB times before and after introduction of the code heart system were compared by retrospective chart review. In addition, to determine the appropriateness of activation, we compared the coronary angiography performance rate and the percentage of STEMI between code heart I and II. After introduction of the code heart system, the mean DTB time was significantly decreased (before, 96.51 ± 65.60 minutes; after, 65.40 ± 26.40 minutes; P = .043). The STEMI diagnosis and coronary angiography performance rates were significantly lower in the code heart II group than in the code heart I group, with no difference in DTB time. CCL activation by ECG machine auto-interpretation therefore does not reduce DTB time compared with emergency physician-initiated activation and often activates the code heart system unnecessarily, decreasing the appropriateness of CCL activation. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Nasaruddin; Tsujioka, Tetsuo
An optical CDMA (OCDMA) system is a flexible technology for future broadband multiple access networks. A secure OCDMA network in broadband optical access technologies is also becoming an issue of great importance. In this paper, we propose novel reconfigurable wavelength-time (W-T) optical codes that lead to secure transmission in OCDMA networks. The proposed W-T optical codes are constructed by using quasigroups (QGs) for wavelength hopping and one-dimensional optical orthogonal codes (OOCs) for time spreading; we call them QGs/OOCs. Both QGs and OOCs are randomly generated by a computer search to ensure that an eavesdropper could not improve its interception performance by making use of the coding structure. Then, the proposed reconfigurable QGs/OOCs can provide more codewords, and many different code set patterns, which differ in both wavelength and time positions for given code parameters. Moreover, the bit error probability of the proposed codes is analyzed numerically. To realize the proposed codes, a secure system is proposed by employing reconfigurable encoders/decoders based on array waveguide gratings (AWGs), which allow the users to change their codeword patterns to protect against eavesdropping. Finally, the probability of breaking a certain codeword in the proposed system is evaluated analytically. The results show that the proposed codes and system can provide a large codeword pattern, and decrease the probability of breaking a certain codeword, to enhance OCDMA network security.
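The security-relevant property of such wavelength-time codes is low correlation under cyclic time shifts: an eavesdropper correlating against the wrong codeword should never see a strong peak. A sketch of the check (the codewords below are tiny toy examples, not the QGs/OOCs constructed in the paper):

```python
def cross_correlation(code_a, code_b, T):
    """Maximum periodic cross-correlation of two wavelength-time codes,
    each given as a set of (wavelength, time-slot) chips, taken over
    all cyclic time shifts of length-T frames."""
    best = 0
    for s in range(T):
        shifted = {(w, (t + s) % T) for w, t in code_b}
        best = max(best, len(code_a & shifted))
    return best

T = 7
A = {(0, 0), (1, 2), (2, 3)}   # 3 pulses, one per wavelength
B = {(0, 1), (1, 4), (2, 6)}
print("max cross-correlation:", cross_correlation(A, B, T))
print("autocorrelation peak:", cross_correlation(A, A, T))
```

Reconfigurability then means periodically regenerating the chip placements (here, new random sets satisfying the same correlation bound) so that an interceptor cannot amortize a learned codeword.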
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for capacity and speed in optical communication networks, because of the high efficiency the OCDMA technique can achieve: the fibre bandwidth is fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes; it enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ achieves good BER performance of 10^-25 at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps and 2 Gbps bit rates. From the plotted graphs, a 155 Mbps bit rate is fast enough for FTTH and LAN networks. Based on the superior performance of the SeQ code, these codes offer an opportunity for better quality of service in OCDMA-based optical access networks for future generations' usage.
New GOES satellite synchronized time code generation
NASA Technical Reports Server (NTRS)
Fossler, D. E.; Olson, R. K.
1984-01-01
The TRAK Systems' GOES Satellite Synchronized Time Code Generator is described. TRAK Systems has developed this timing instrument to supply improved accuracy over most existing GOES receiver clocks. A classical time code generator is integrated with a GOES receiver.
DOT National Transportation Integrated Search
2001-02-01
Problems, solutions and recommendations for implementation have been contributed by 16 of the 27 CODES states and organized as appropriate under the administrative, linkage and application requirements for a Crash Outcome Data Evaluation System (CODE...
Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun
2013-12-01
A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and making national policy for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) developed such a DGS successfully. We developed a modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which is coded using the International Classification of Diseases, 10th Revision (ICD-10). To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS via the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched the ICD-9 codes to PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in the development of the modified DGS. After 1078 (57.4%) of the 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups by two rounds of electronic Delphi surveys, and the remaining 29 codes (4%) were assigned to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of diagnoses in the 2008 and 2009 National Emergency Department Information System data sets, respectively. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and season, and reflected the 2009 outbreak of H1N1 influenza in Korea.
We developed and validated a clinically feasible and sensible DGS for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This modified DGS, based on the PECARN DGS framework, may be effectively implemented for research, reporting, and resource planning in the pediatric emergency system of South Korea.
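The two-step GEM-based mapping described above can be sketched as a pair of dictionary lookups, with codes that fail either step routed to expert (Delphi) review. The code tables below are tiny hypothetical fragments for illustration, not the actual GEMs or DGS matrix:

```python
# Hypothetical mini-tables: real GEMs map ICD-10 <-> ICD-9 codes, and the
# PECARN matrix then maps ICD-9 codes to DGS subgroups.
GEM_ICD10_TO_ICD9 = {"J45.9": "493.90", "A09.0": "009.0", "S52.5": "813.41"}
ICD9_TO_DGS = {
    "493.90": "asthma",
    "009.0": "gastroenteritis",
    "813.41": "fracture - upper extremity",
}

def map_to_dgs(icd10_codes):
    """Two-step mapping: ICD-10 -> ICD-9 via GEM, then ICD-9 -> DGS
    subgroup. Codes failing either step go to expert review."""
    assigned, unmatched = {}, []
    for code in icd10_codes:
        icd9 = GEM_ICD10_TO_ICD9.get(code)
        group = ICD9_TO_DGS.get(icd9)
        if group is None:
            unmatched.append(code)
        else:
            assigned[code] = group
    return assigned, unmatched

assigned, unmatched = map_to_dgs(["J45.9", "S52.5", "Z99.9"])
print(assigned)      # two codes mapped automatically
print(unmatched)     # the leftover code would go to the Delphi survey
```

In the study, this automatic stage covered 57.4% of codes; the remainder went through two Delphi rounds and a consensus meeting.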
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
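The synchronization property the paper exploits is that the aperiodic autocorrelations of a Golay complementary pair sum to a delta function, so a correlator against the pair yields a single sharp timing peak. The sketch below builds a pair by the standard concatenation construction and verifies the property (illustrative only; the paper's specific TS design and lengths are not reproduced here):

```python
import numpy as np

def golay_pair(n_iter):
    """Build a Golay complementary pair of length 2**n_iter by the
    concatenation construction: (a, b) -> (a|b, a|-b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation for non-negative lags."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

a, b = golay_pair(5)            # length-32 complementary pair
s = acorr(a) + acorr(b)
print(s[0], np.abs(s[1:]).max())   # 64.0 at lag 0, exactly 0.0 elsewhere
```

A receiver correlating the incoming samples against both sequences and summing the two correlator outputs therefore sees zero sidelobes, which is what makes the start-point estimate accurate.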
Health information management: an introduction to disease classification and coding.
Mony, Prem Kumar; Nagaraj, C
2007-01-01
Morbidity and mortality data constitute an important component of a health information system, and their coding enables uniform data collation and analysis as well as meaningful comparisons between regions or countries. Strengthening the recording and reporting systems for health monitoring is a basic requirement for an efficient health information management system. Increased advocacy for and awareness of a uniform coding system, together with adequate capacity building of physicians, coders and other allied health and information technology personnel, would pave the way for a valid and reliable health information management system in India. The core requirements for the implementation of disease coding are: (i) support from national/institutional health administrators; (ii) widespread availability of the ICD-10 material for morbidity and mortality coding; (iii) enhanced human and financial resources; and (iv) optimal use of informatics. We describe the methodology of a disease classification and codification system as well as its applications for developing and maintaining an effective health information management system for India.
NASA Astrophysics Data System (ADS)
Kracher, Daniela; Manzini, Elisa; Reick, Christian H.; Schultz, Martin; Stein, Olaf
2014-05-01
Climate change is driven by an increasing release of anthropogenic greenhouse gases (GHGs) such as carbon dioxide and nitrous oxide (N2O). Besides fossil fuel burning, land use change and land management are also anthropogenic sources of GHGs. In particular, inputs of reactive nitrogen via fertilizer and deposition lead to enhanced emissions of N2O. One effect of a drastic future increase in surface temperature is a modification of atmospheric circulation, e.g. an accelerated Brewer-Dobson circulation affecting the exchange between troposphere and stratosphere. N2O is inert in the troposphere and is destroyed only in the stratosphere. Thus, changes in atmospheric circulation, especially in the exchange between troposphere and stratosphere, will affect the atmospheric transport, decay, and distribution of N2O. In our study we assess the impact of global warming on atmospheric circulation and the implied effects on the distribution and lifetime of atmospheric N2O. As terrestrial N2O emissions are largely determined by inputs of reactive nitrogen, whose location is determined by human choice, we examine in particular the importance of the latitudinal source regions of N2O for its global distribution. For this purpose we apply the Max Planck Institute Earth System Model, MPI-ESM. MPI-ESM consists of the atmospheric general circulation model ECHAM, the land surface model JSBACH, and MPIOM/HAMOCC representing ocean circulation and ocean biogeochemistry. Prognostic atmospheric N2O concentrations in MPI-ESM are determined by land N2O emissions, ocean N2O exchange and atmospheric tracer transport. As stratospheric chemistry is not explicitly represented in MPI-ESM, stratospheric decay rates of N2O are prescribed from a MACC MOZART simulation.
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-07-01
The exchanges of carbon, water and energy between the atmosphere and the Amazon basin have global implications for the current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate (MACC) service is used to study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia during the period 2002-2010. The system assimilated surface measurements of atmospheric CO2 mole fractions made at more than 100 sites over the globe into an atmospheric transport model. The present study adds measurements from four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion are compared to an independent estimate of NEE upscaled from eddy-covariance flux measurements in Amazonia. They are also qualitatively evaluated against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We attempt to assess the impact on NEE of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons) and the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE are also investigated. While the inversion supports the assumption of strong spatial heterogeneity of these variations, the results reveal critical limitations of the coarse-resolution transport model, the surface observation network in South America during recent years and the present knowledge of modelling uncertainties in South America that prevent our inversion from capturing the seasonal patterns of fluxes across Amazonia. However, some patterns from the inversion seem consistent with the anomaly of moisture conditions in 2009.
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-01-01
The exchanges of carbon, water, and energy between the atmosphere and the Amazon Basin have global implications for current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate service (MACC) was used to further study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia. The system assimilated surface measurements of atmospheric CO2 mole fractions made at more than 100 sites over the globe into an atmospheric transport model. This study added four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion were compared to independent estimates of NEE upscaled from eddy-covariance flux measurements in Amazonia, and against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We focused on the impacts of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons) and of the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE were also investigated. While the inversion supported the assumption of strong spatial heterogeneity of these variations, the results revealed critical limitations that prevent global inversion frameworks from capturing the data-driven seasonal patterns of fluxes across Amazonia. In particular, it highlighted issues due to the configuration of the observation network in South America and the lack of continuity of the measurements. However, some robust patterns from the inversion seemed consistent with the abnormal moisture conditions in 2009.
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA), where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3-D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales, despite CESM being a global general circulation model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
NASA Astrophysics Data System (ADS)
Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua
2018-01-01
The wavefront coding athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the additional advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its behavior in non-ideal thermal environments and supports realization of the system design targets. In this paper, we exploit the interoperability of data between SolidWorks and ZEMAX to simplify the traditional structural/thermal/optical integrated analysis. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index, and lens spacing. The deformed model is then imported into ZEMAX for ray tracing, and the resulting changes in the PSF and MTF of the optical system are obtained. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the restorability of the image, providing a basis and reference for the optimal design of such systems. The results show that the single-material infrared wavefront coding athermal system tolerates axial temperature gradients up to a temperature fluctuation of 60 °C, much higher than its tolerance to radial temperature gradients.
Viewing hybrid systems as products of control systems and automata
NASA Technical Reports Server (NTRS)
Grossman, R. L.; Larson, R. G.
1992-01-01
The purpose of this note is to show how hybrid systems may be modeled as products of nonlinear control systems and finite state automata. By a hybrid system, we mean a network consisting of continuous, nonlinear control systems connected to discrete, finite state automata. Our point of view is that the automaton switches between the control systems, and that this switching is a function of the discrete input symbols or letters that it receives. We show how a nonlinear control system may be viewed as a pair consisting of a bialgebra of operators coding the dynamics and an algebra of observations coding the state space. We also show that a finite automaton has a similar representation. A hybrid system is then modeled by taking suitable products of the bialgebras coding the dynamics and the observation algebras coding the state spaces.
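The switching semantics described above can be sketched concretely. The following is a minimal illustration, not the paper's algebraic construction: a finite automaton whose state selects which continuous dynamics to integrate, with transitions driven by input letters (the dynamics, letters, and step sizes are all invented for illustration).

```python
# Hypothetical hybrid system: a two-state automaton switches between two
# continuous dynamics; input letters drive the discrete transitions.
dynamics = {
    "stable":   lambda x: -1.0 * x,   # dx/dt = -x   (contracting mode)
    "unstable": lambda x: +0.5 * x,   # dx/dt = +x/2 (expanding mode)
}
transitions = {("stable", "b"): "unstable", ("unstable", "a"): "stable"}

def run(x0, letters, dt=0.01, steps_per_letter=100):
    """Integrate the active dynamics by forward Euler, switching on each letter."""
    mode, x = "stable", x0
    for letter in letters:
        mode = transitions.get((mode, letter), mode)  # discrete transition
        for _ in range(steps_per_letter):
            x += dt * dynamics[mode](x)               # continuous evolution
    return mode, x

mode, x = run(1.0, ["b", "a"])
print(mode, round(x, 4))
```

The product structure is visible in the state `(mode, x)`: the automaton component gates which vector field acts on the continuous component.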
The study on dynamic cadastral coding rules based on kinship relationship
NASA Astrophysics Data System (ADS)
Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng
2007-06-01
Cadastral coding rules are an important supplement to the existing national and local standard specifications for building a cadastral database. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to cadastral change is put forward, and a coding format composed of a street code, block code, father-parcel code, child-parcel code, and grandchild-parcel code is worked out within the county administrative area. The coding rules have been applied in the development of an urban cadastral information system called "ReGIS", which is not only able to generate the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.
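The segmented code format described above can be sketched as follows. This is an illustrative reading only: the field widths and the uniqueness check are assumptions, not taken from the paper.

```python
# Hypothetical field widths for the five code segments (assumed, not the
# paper's actual specification).
FIELDS = [("street", 3), ("block", 3), ("father", 4), ("child", 3), ("grand", 3)]

def make_code(**parts):
    """Compose a cadastral code from street/block/father/child/grand segments."""
    segments = []
    for name, width in FIELDS:
        value = parts.get(name, 0)
        if value >= 10 ** width:
            raise ValueError(f"{name} overflows {width} digits")
        segments.append(str(value).zfill(width))  # zero-padded fixed width
    return "".join(segments)

def is_unique(code, db):
    """Uniqueness check before storing a parcel, as the system performs."""
    return code not in db

db = {make_code(street=12, block=5, father=301)}          # existing father parcel
child = make_code(street=12, block=5, father=301, child=1)  # derived child parcel
print(child, is_unique(child, db))
```

The kinship relationship is carried directly in the code: a child parcel's code embeds its father parcel's segments, so lineage can be recovered by prefix comparison.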
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2011
2011-01-01
In this handbook, "Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data" (SCED), the National Center for Education Statistics (NCES) and the National Forum on Education Statistics have extended the existing secondary course classification system with codes and descriptions for courses offered at…
Proposing a Web-Based Tutorial System to Teach Malay Language Braille Code to the Sighted
ERIC Educational Resources Information Center
Wah, Lee Lay; Keong, Foo Kok
2010-01-01
The "e-KodBrailleBM Tutorial System" is a web-based tutorial system which is specially designed to teach, facilitate and support the learning of Malay Language Braille Code to individuals who are sighted. The targeted group includes special education teachers, pre-service teachers, and parents. Learning Braille code involves memorisation…
NASA Astrophysics Data System (ADS)
Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador
2012-01-01
A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth of field at the expense of a slight reduction of image quality, a trade-off that results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved that can be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory, and illustrative examples of its performance are reported in this paper.
Program summary
Program title: DynWFC (Dynamic WaveFront Coding)
Catalogue identifier: AEKC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 483
No. of bytes in distributed program, including test data, etc.: 2 437 713
Distribution format: tar.gz
Programming language: LabVIEW 8.5 with NI Vision, and MinGW C compiler
Computer: Tested on PC Intel® Pentium®
Operating system: Tested on Windows XP
Classification: 18
Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. It controls the acquisition by a camera, the display of a spatial light modulator, and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility, given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control, in real time, the coding established by the spatial light modulator.
Reinventing radiology reimbursement.
Marshall, John; Adema, Denise
2005-01-01
Lee Memorial Health System (LMHS), located in southwest Florida, consists of 5 hospitals, a home health agency, a skilled nursing facility, multiple outpatient centers, walk-in medical centers, and primary care physician offices. LMHS annually performs more than 300,000 imaging procedures, with gross imaging revenues exceeding $350 million. In fall 2002, LMHS received the results of an independent audit of its IR coding. The overall IR coding error rate was determined to be 84.5%. The projected net financial impact of these errors was an annual reimbursement loss of $182,000. To address the issues of coding errors and reimbursement loss, LMHS implemented its clinical reimbursement specialist (CRS) system in October 2003 as an extension of financial services' reimbursement division. LMHS began with CRSs in 3 service lines: emergency department, cardiac catheterization, and radiology. These 3 CRSs coordinate all facets of their respective areas' chargemaster, patient charges, coding, and reimbursement functions while serving as resident coding experts within their clinical areas. The radiology reimbursement specialist (RRS) combines an experienced radiologic technologist, interventional technologist, medical records coder, financial auditor, reimbursement specialist, and biller into a single position. The RRS's radiology experience and technologist knowledge are key assets in resolving coding conflicts and handling complex interventional coding. In addition, performing a daily charge audit and an active code review are essential if an organization is to eliminate coding errors. One of the inherent effects of eliminating coding errors is the capture of additional RVUs and units of service. During its first year, based on account-level detail, the RRS system increased radiology productivity through the additional capture of just over 3,000 RVUs and 1,000 additional units of service.
In addition, the physicians appreciate having someone who "keeps up with all the coding changes" and looks out for the charges. By assisting a few physicians' staff with coding questions, providing coding updates, and allowing them to sit in on educational sessions, at least 2 physicians have transferred some of their volume to LMHS from a competitor. The provision of a "clean account," without coding errors, allows the biller to avoid the rework and billing delays caused by coding issues. During the first quarter of the RRS system, the billers referred an average of 9 accounts per day for coding resolution. By the fourth quarter, these referrals had been reduced to less than one per day. Prior to the RRS system, resolving these issues took an average of 4 business days. Now the conflicts are resolved within 24 hours.
Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes
NASA Astrophysics Data System (ADS)
Farzan Sabahi, Mohammad; Dehghanfard, Ali
2014-12-01
The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. In fact, low probability of detection and low probability of intercept are two important parameters for increasing the performance of the system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the information data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. Using a one-dimensional Bernoulli chaotic sequence as a spreading code has previously been proposed in the literature. The main feature of this sequence is its negative auto-correlation at lag 1, which, with proper design, increases the efficiency of communication systems based on these codes. Employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of multi-user synchronous and asynchronous DS-CDMA systems is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate improved performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Like one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, the proposed method makes it possible to construct complex sequences with lower average cross-correlation.
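The chip-generation idea behind the one-dimensional baseline can be sketched as follows. This is an illustrative sketch only, with assumed parameters: it uses a generalized Bernoulli (shift) map x → (βx) mod 1 with β kept just below 2, since exact doubling collapses to zero in binary floating point; the paper's actual sequence design is not reproduced here.

```python
import numpy as np

def bernoulli_chips(x0, n, beta=1.99):
    """+/-1 spreading chips from iterates of a generalized Bernoulli map."""
    x, chips = x0, []
    for _ in range(n):
        x = (beta * x) % 1.0                    # Bernoulli shift iteration
        chips.append(1.0 if x >= 0.5 else -1.0)  # binarize to a chip
    return np.array(chips)

def autocorr(chips, lag):
    """Normalized sample auto-correlation of a +/-1 chip sequence."""
    return float(np.mean(chips[:-lag] * chips[lag:]))

chips = bernoulli_chips(0.123456789, 127)
print(autocorr(chips, 1))   # lag-1 auto-correlation, a value in [-1, 1]
```

In a DS-CDMA transmitter each data symbol would be multiplied by such a chip sequence; the auto- and cross-correlation of the chips then governs synchronization behavior and multi-user interference.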
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of a polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
Association between implementation of a code stroke system and poststroke epilepsy.
Chen, Ziyi; Churilov, Leonid; Chen, Ziyuan; Naylor, Jillian; Koome, Miriam; Yan, Bernard; Kwan, Patrick
2018-03-27
We aimed to investigate the effect of a code stroke system on the development of poststroke epilepsy. We retrospectively analyzed consecutive patients treated with IV thrombolysis under or outside the code stroke system between 2003 and 2012. Patients were followed up for at least 2 years or until death. Factors with p < 0.1 in univariate comparisons were selected for multivariable logistic and Cox regression. A total of 409 patients met the eligibility criteria. Their median age at stroke onset was 75 years (interquartile range 64-83 years); 220 (53.8%) were male. The median follow-up duration was 1,074 days (interquartile range 119-1,671 days). Thirty-two patients (7.8%) had poststroke seizures during follow-up, comprising 7 (1.7%) with acute symptomatic seizures and 25 (6.1%) with late-onset seizures. Twenty-six patients (6.4%) fulfilled the definition of poststroke epilepsy. Three hundred eighteen patients (77.8%) were treated with the code stroke system while 91 (22.2%) were not. After adjustment for age and stroke etiology, use of the code stroke system was associated with decreased odds of poststroke epilepsy (odds ratio = 0.36, 95% confidence interval 0.14-0.87, p = 0.024). Cox regression showed lower adjusted hazard rates for poststroke epilepsy within 5 years for patients managed under the code stroke system (hazard ratio = 0.60, 95% confidence interval 0.47-0.79, p < 0.001). The code stroke system was associated with reduced odds and instantaneous risk of poststroke epilepsy. Further studies are required to identify the contribution of the individual components and mechanisms against epileptogenesis after stroke. This study provides Class III evidence that for people with acute ischemic stroke, implementation of a code stroke system reduces the risk of poststroke epilepsy. © 2018 American Academy of Neurology.
An ultrasound transient elastography system with coded excitation.
Diao, Xianfen; Zhu, Jing; He, Xiaonian; Chen, Xin; Zhang, Xinyu; Chen, Siping; Liu, Weixiang
2017-06-28
Ultrasound transient elastography technology has found its place in elastography because it is safe and easy to operate. However, its application in deep tissue is limited. The aim of this study was to design an ultrasound transient elastography system with coded excitation to obtain greater detection depth. Ultrasound transient elastography requires tissue vibration to be strictly synchronous with ultrasound detection. Therefore, an ultrasound transient elastography system with coded excitation was designed. The central component of this transient elastography system was an arbitrary waveform generator with multi-channel signal output. This arbitrary waveform generator was used to produce the tissue vibration signal, the ultrasound detection signal, and the synchronous triggering signal of the radio-frequency data acquisition system. The arbitrary waveform generator can produce different vibration waveforms to induce different shear wave propagation in the tissue. Moreover, it can perform either traditional pulse-echo detection or phase-modulated or frequency-modulated coded excitation. A 7-chip Barker code and traditional pulse-echo detection were programmed on the designed ultrasound transient elastography system to detect the shear wave in the phantom excited by the mechanical vibrator. An elasticity QA phantom and sixteen in vitro rat livers were then used for performance evaluation of the two detection pulses. The elasticity QA phantom results show that our system is effective, and the rat liver results show that the detection depth can be increased by more than 1 cm. In addition, the SNR (signal-to-noise ratio) is increased by 15 dB using the 7-chip Barker coded excitation. Applying the 7-chip Barker coded excitation technique to ultrasound transient elastography can increase the detection depth and SNR. Using coded excitation to assess the human liver, especially in obese patients, may be a good choice.
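The decoding step that makes coded excitation pay off is matched filtering (pulse compression). The following sketch uses an invented one-dimensional signal model, not the authors' system: a 7-chip Barker coded echo buried in noise is compressed by correlating with the code, concentrating the code's energy into a single peak at the echo's arrival time.

```python
import numpy as np

# 7-chip Barker code: sidelobes of its auto-correlation never exceed 1,
# while the main lobe reaches 7, which is what makes it a good excitation code.
barker7 = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)

rng = np.random.default_rng(0)
delay = 20                                   # true echo arrival sample (assumed)
echo = np.zeros(64)
echo[delay:delay + 7] = barker7              # received coded echo
echo += rng.normal(0.0, 0.3, echo.size)      # additive receiver noise

compressed = np.correlate(echo, barker7, mode="valid")  # matched filter
peak = int(np.argmax(np.abs(compressed)))
print(peak)                                  # recovers the programmed delay
```

Because the compressed peak carries roughly 7 times the energy of a single-chip pulse at the same amplitude, the coded scheme gains SNR without raising peak transmit pressure, which is the mechanism behind the deeper detection reported above.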
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xin Zhi Jiao
Ultrastructural changes caused by gamma-ray (Co-60) irradiation were investigated in preclimacteric apple fruits during storage. Under the electron microscope, the cellulose in the cell walls was reduced to a line when treated with 40 krad of gamma radiation for 38 hr, and disappeared completely after treatment with 100 krad. The disintegration of plasmalemma and mitochondrial membranes was observed. Plasmalemma membranes were impaired after 10 krad for 38 hr, while in the mitochondria the destruction of the original structure and the inner membrane spine began at 40 krad for 38 hr. Moreover, the size of starch granules was reduced by the irradiation, disappearing after treatment with 100 krad. Both ethylene production and respiration rate were drastically reduced. The reduction of ethylene production in treated apple fruit was found to be due to the decrease of ACC content and the inhibition of ethylene-forming enzyme activity. MACC content was also decreased. Fruits treated with 40 krad of gamma radiation and stored at 0-2 degrees C maintained their quality for six months.
CD8+CD28+ T cells might mediate injury of cardiomyocytes in acute myocardial infarction.
Zhang, Lili; Wang, Zhiyan; Wang, Di; Zhu, Jumo; Wang, Yi
2018-06-07
CD8+ T cells accumulate in the necrotic myocardium of acute myocardial infarction (AMI). It is unclear whether CD8+CD28+ T cells, a specific subset of CD8+ T cells, contribute to myocardial injury. In this study, 92 consecutive patients with AMI and 28 healthy control subjects were enrolled. The frequency of CD8+CD28+ T cells in peripheral blood samples was assayed by flow cytometry. Plasma cardiac troponin I (TNI) and left ventricular ejection fraction (LVEF) were determined. Long-term prognosis of the patients was evaluated by major adverse cardiac and cerebrovascular events (MACCE) over a 12-month follow-up period. Our findings indicated that patients with AMI who presented with high numbers of CD8+CD28+ T cells had an increased infarction size and aggravated ventricular function. We proposed that cytotoxic CD8+CD28+ T cell-mediated myocardial necrosis may act as a novel and alternative pathway of AMI. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hartenstein, Richard G., Jr.
1985-01-01
Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computed results are compared to measured ones, with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.
ETF system code: composition and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, R.L.; Wu, K.F.
1980-01-01
A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output of the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or they may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system.
A systematic literature review of automated clinical coding and classification systems
Williams, Margaret; Fenton, Susan H; Jenders, Robert A; Hersh, William R
2010-01-01
Clinical coding and classification processes transform natural language descriptions in clinical text into data that can subsequently be used for clinical care, research, and other purposes. This systematic literature review examined studies that evaluated all types of automated coding and classification systems to determine the performance of such systems. Studies indexed in Medline or other relevant databases prior to March 2009 were considered. The 113 studies included in this review show that automated tools exist for a variety of coding and classification purposes, focus on various healthcare specialties, and handle a wide variety of clinical document types. Automated coding and classification systems themselves are not generalizable, nor are the results of the studies evaluating them. Published research shows these systems hold promise, but these data must be considered in context, with performance relative to the complexity of the task and the desired outcome. PMID:20962126
[ENT and head and neck surgery in the German DRG system 2007].
Franz, D; Roeder, N; Hörmann, K; Alberty, J
2007-07-01
The German DRG system has been further developed into version 2007. For ENT and head and neck surgery, significant changes have been made in the coding of diagnoses and medical procedures as well as in the DRG structure. New ICD codes for sleep apnoea and acquired tracheal stenosis have been implemented. Surgery on the acoustic meatus, removal of auricle hyaline cartilage for transplantation (e.g. in rhinosurgery), and tonsillotomy can be coded in the 2007 version. In addition, the DRG structure has been improved: case allocation based on more than one significant operation has been established. The G-DRG system has gained in complexity. High demands are made on the coding of complex cases, whereas standard cases mostly require only one specific diagnosis and one specific OPS code. The quality of case allocation for ENT patients within the G-DRG system has been improved. Nevertheless, further adjustments of the G-DRG system are necessary.
Computer code for analyzing the performance of aquifer thermal energy storage systems
NASA Astrophysics Data System (ADS)
Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.
1985-05-01
A code called the Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides the ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well-field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting the experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first codes to be completed, which are presently being incorporated into the KBS, are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
Interface requirements for coupling a containment code to reactor system thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratta, A.J.
1997-07-01
To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss-of-coolant accident in both pressurized water and boiling water reactors, and inadvertent operation of safety relief valves, all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response, and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code with the containment represented as a fixed-pressure boundary condition. The flows were usually from the primary system to the containment initially, and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
Sollie, Annet; Sijmons, Rolf H; Lindhout, Dick; van der Ploeg, Ans T; Rubio Gozalbo, M Estela; Smit, G Peter A; Verheijen, Frans; Waterham, Hans R; van Weely, Sonja; Wijburg, Frits A; Wijburg, Rudolph; Visser, Gepke
2013-07-01
Data sharing is essential for a better understanding of genetic disorders. Good phenotype coding plays a key role in this process. Unfortunately, the two most widely used coding systems in medicine, ICD-10 and SNOMED-CT, lack information necessary for the detailed classification and annotation of rare and genetic disorders. This prevents the optimal registration of such patients in databases and thus data-sharing efforts. To improve care and to facilitate research for patients with metabolic disorders, we developed a new coding system for metabolic diseases with a dedicated group of clinical specialists. Next, we compared the resulting codes with those in ICD and SNOMED-CT. No matches were found in 76% of cases in ICD-10 and in 54% in SNOMED-CT. We conclude that there are sizable gaps in the SNOMED-CT and ICD coding systems for metabolic disorders. There may be similar gaps for other classes of rare and genetic disorders. We have demonstrated that expert groups can help in addressing such coding issues. Our coding system has been made available to the ICD and SNOMED-CT organizations as well as to the Orphanet and HPO organizations for further public application and updates will be published online (www.ddrmd.nl and www.cineas.org). © 2013 WILEY PERIODICALS, INC.
Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed
2018-01-25
The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Group (DRG) codes, especially if the hospital is using a casemix system as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who corrected erroneous codes originally entered by the hospital coders. The pre- and post-coding results were compared, and where they disagreed, the ISC's codes were considered accurate. The cases were then re-grouped using a MY-DRG® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the most frequent, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases. From this result, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG® code was RM654,303.91. The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.
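The re-grouping comparison can be sketched as follows. The records, DRG codes, and tariffs below are entirely hypothetical, and the loss is tallied, as one possible reading of the study's method, as the shortfall in cases where the corrected tariff exceeds the one originally billed.

```python
# Illustrative sketch (hypothetical data): compare hospital-assigned DRGs
# against an independent senior coder's re-coding and tally the tariff impact.

records = [
    # (record_id, hospital_drg, hospital_tariff, recoded_drg, recoded_tariff)
    ("R001", "K-4-11-I", 3200.0, "K-4-11-II", 4100.0),
    ("R002", "J-2-03-I", 1500.0, "J-2-03-I", 1500.0),    # no coding error
    ("R003", "E-1-07-II", 2800.0, "E-1-07-III", 2100.0),
]

# cases whose DRG assignment changed after re-coding
changed = [r for r in records if r[1] != r[3]]
# potential lost income: under-billed cases (corrected tariff was higher)
potential_loss = sum(r[4] - r[2] for r in changed if r[4] > r[2])

print(len(changed), potential_loss)
```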
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1992-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
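A minimal differential (DPCM) encoder/decoder illustrates the edge-degradation problem the abstract refers to: with coarse quantization of the prediction error, a sharp step is reconstructed inexactly. This is generic DPCM, not the authors' edge-preserving scheme.

```python
# Plain DPCM sketch (not the paper's scheme): each sample is predicted by the
# previously reconstructed sample, and only the quantized prediction error is
# transmitted. Coarse quantization is what smears sharp edges.

def dpcm_encode(samples, step):
    codes, prev = [], 0
    for x in samples:
        e = x - prev
        q = round(e / step)          # quantized prediction error
        codes.append(q)
        prev += q * step             # encoder tracks the decoder's state
    return codes

def dpcm_decode(codes, step):
    out, prev = [], 0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out

signal = [0, 0, 100, 100, 100]       # a sharp "edge" at index 2
codes = dpcm_encode(signal, step=8)
print(dpcm_decode(codes, step=8))    # → [0, 0, 96, 96, 96]
```

The edge at index 2 comes back as 96 rather than 100: the residual error is what an edge-preserving design works to eliminate.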
Color Coded Cards for Student Behavior Management in Higher Education Environments
ERIC Educational Resources Information Center
Alhalabi, Wadee; Alhalabi, Mobeen
2017-01-01
The Color Coded Cards system as a possibly effective class management tool is the focus of this research. The Color Coded Cards system involves each student being given a card with a specific color based on his or her behavior. The main objective of the research is to find out whether this system effectively improves students' behavior, thus…
Organizational Effectiveness Information System (OEIS) User’s Manual
1986-09-01
SUBJECT CODES B-1; C. LISTING OF VALID RESOURCE SYSTEM CODES C-1 ...the valid codes used in the Implementation and Design System. MACOM: 01 COE, 02 DARCOM, 03 EUSA, 04 FORSCOM, 05 HSC, 06 HQDA, 07 INSCOM, 08 MDW, 09
Throughput of Coded Optical CDMA Systems with AND Detectors
NASA Astrophysics Data System (ADS)
Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.
2012-09-01
Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiency and improved error rate performance. The results show that the use of AND detectors significantly improves the performance of an optical channel.
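A chip-level AND detector can be sketched as follows; the signature sequence and received chip patterns are made up for illustration and are not proper optical orthogonal codes.

```python
# Sketch of an AND detector for on-off-keyed optical CDMA (illustrative
# sequences): a user's bit is declared 1 only if *all* chips of that user's
# signature are illuminated in the received frame.

user_code = [1, 0, 0, 1, 0, 1, 0]          # signature: chips 0, 3 and 5

def and_detect(received_chips, code):
    """received_chips: per-chip intensity counts summed over all active users."""
    return int(all(received_chips[i] >= 1 for i, c in enumerate(code) if c))

# desired user sent 1; an interferer also lights chip 2
print(and_detect([1, 0, 1, 1, 0, 1, 0], user_code))   # → 1
# desired user sent 0; interference covers only chip 3, so it is rejected
print(and_detect([0, 0, 0, 1, 0, 0, 0], user_code))   # → 0
```

Requiring every signature chip to fire is what makes the AND detector more selective than a simple correlation threshold under multiple-access interference.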
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
Teacher-Child Dyadic Interaction: A Manual for Coding Classroom Behavior. Report Series No. 27.
ERIC Educational Resources Information Center
Brophy, Jere E.; Good, Thomas L.
This manual presents the rationale and coding system for the study of dyadic interaction between teachers and children in classrooms. The introduction notes major differences between this system and others in common use: 1) it is not a universal system that attempts to code all classroom behavior, and 2) the teacher's interactions in his class are…
A coded tracking telemetry system
Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.; Amlaner, Charles J.
1989-01-01
We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a micro-computer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
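The receiver-side filtering described here reduces, in essence, to matching incoming IDs against a pre-programmed set and timestamping the hits; a toy sketch with hypothetical code values:

```python
# Sketch (hypothetical ID codes): a receiving station logs a timestamped hit
# only when an incoming transmission matches one of its pre-programmed codes.

PROGRAMMED_CODES = {0b101101, 0b110011}   # transmitters this station tracks

def receive(transmissions):
    """transmissions: iterable of (timestamp, code). Returns logged receptions."""
    return [(t, code) for t, code in transmissions if code in PROGRAMMED_CODES]

log = receive([(10.0, 0b101101), (11.5, 0b000111), (60.2, 0b110011)])
print(log)
```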
Learning of spatio-temporal codes in a coupled oscillator system.
Orosz, Gábor; Ashwin, Peter; Townley, Stuart
2009-07-01
In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
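The frequency-adaptation idea can be illustrated far more simply than the paper's cluster-state dynamics: a single learning oscillator phase-locks to a teaching oscillator and integrates its phase error into its own natural frequency. All constants below are assumed for illustration only.

```python
# Minimal sketch of learning by frequency adaptation (one oscillator pair, not
# the coupled cluster-state system of the paper). The learner's natural
# frequency converges to the teacher's via a gradient-like rule.

import math

w_teach = 1.7          # teaching oscillator's natural frequency (assumed)
w_learn = 1.0          # learner starts with the wrong frequency
phase_t = phase_l = 0.0
dt, coupling, adapt_rate = 0.01, 2.0, 0.5

for _ in range(20000):
    dphi = phase_t - phase_l
    phase_t += w_teach * dt
    phase_l += (w_learn + coupling * math.sin(dphi)) * dt
    w_learn += adapt_rate * math.sin(dphi) * dt    # frequency adaptation

print(round(w_learn, 2))   # close to 1.7
```

The sin(dphi) term plays the role of the phase-error signal; once the learner locks, the adaptation integral holds its frequency at the teacher's value.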
HEC Applications on Columbia Project
NASA Technical Reports Server (NTRS)
Taft, Jim
2004-01-01
NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high-performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission-critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster and at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.
End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems
2007-06-01
END-TO-END MODELING WITH THE HEIMDALL CODE TO SCOPE HIGH-POWER MICROWAVE SYSTEMS, John A. Swegle, Savannah River National Laboratory, 743A... We describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models, developed in... of our calculations of the mass of a Supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s
ISSYS: An integrated synergistic Synthesis System
NASA Technical Reports Server (NTRS)
Dovi, A. R.
1980-01-01
Integrated Synergistic Synthesis System (ISSYS), an integrated system of computer codes in which the sequence of program execution and data flow is controlled by the user, is discussed. The commands available to exert such control, the ISSYS major function and rules, and the computer codes currently available in the system are described. Computational sequences frequently used in the aircraft structural analysis and synthesis are defined. External computer codes utilized by the ISSYS system are documented. A bibliography on the programs is included.
MOZAIC-IAGOS program : 20 years of in-situ data in the UTLS
NASA Astrophysics Data System (ADS)
Thouret, Valérie; Sauvage, Bastien; Nédélec, Philippe; Petzold, Andreas; Volz-Thomas, Andreas
2014-05-01
The use of commercial aircraft allows the collection of highly relevant observations on a scale and in numbers impossible to achieve using research aircraft, and in regions where other measurement methods (e.g., satellites) have technical limitations. Since 1994, the MOZAIC program has been measuring ozone, water vapor and meteorological parameters (along with NOy for 2001-2005 and CO since 2001) on board 5 commercial aircraft. IAGOS (In-service Aircraft for a Global Observing System, http://www.iagos.org) was initiated in 2006 and combines the experience gained within the MOZAIC and CARIBIC programs. IAGOS is now one of the new European Research Infrastructures, with the objective of establishing and operating a distributed infrastructure for long-term observations of atmospheric composition (O3, CO, CO2, NOy, NOx, H2O), aerosol and cloud particles on a global scale from a fleet of initially 10-20 long-range in-service aircraft of internationally operating airlines. Data are available in near real time for weather services and Copernicus service centres, as demonstrated in the MACC project (http://www.iagos.fr/macc). The IAGOS database is an essential part of the program and is still being developed and improved, for example with new value-added products (source-receptor links for observed pollutants) obtained by coupling the Lagrangian dispersion model FLEXPART to CO surface emissions from different inventories. Data access through http://www.iagos.fr is handled by an open access policy based on the submission of research requests. An overview of the most recent results focusing on UTLS data will be presented, including: - Five years of MOZAIC NOy observations that are used to characterize and describe large-scale plumes, including lightning NOx emissions, in the upper troposphere between North America and Europe.
- Characteristics of ozone and CO distributions over regions of interest never or poorly sampled by other platforms: the UTLS at northern mid-latitudes thanks to 5 aircraft based in Europe flying westbound and eastbound since 1994; transects over the African continent thanks to daily Air Namibia flights between 2006 and 2013; the South Atlantic area thanks to regular flights between Europe and South America; and the Asian monsoon region thanks to regular flights between Europe and the Indian-South East Asia area sampling the UT under the influence of the Asian Monsoon Anticyclone (AMA). - Ten years of CO measurements, which show an increase in concentration on moving from the Western to the Eastern hemisphere; in the US, Atlantic and European sectors, CO concentrations have fallen by about 2% per year. - Almost 20 years of ozone measurements at northern mid-latitudes showing a leveling-off of the mixing ratios for the last 10-12 years over the Atlantic sector, while ozone is still increasing over Asia. - Almost 20 years of relative humidity measurements showing that the upper troposphere (10-12 km altitude, which corresponds to the aircraft cruise level) is much wetter than reflected in the model analyses of the ECMWF (European Centre for Medium-Range Weather Forecasts).
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
A COTS-Based Replacement Strategy for Aging Avionics Computers. [Figure labels: Communication Control Unit; COTS microprocessor; real-time operating system; new native code; native code objects; native code thread; legacy function; virtual component environment; context switch thunk; add-in replacement.]
48 CFR 1501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 Publication and code arrangement. 1501.105-1 Section 1501.105-1 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL Purpose, Authority, Issuance 1501.105-1 Publication and code arrangement. The...
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors for injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong-patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p < 0.05), although wrong-drug errors did not decrease significantly (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong-patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
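From the reported annual rates, the relative reductions can be computed directly (a trivial sketch; the rates are the ones quoted above):

```python
# Relative reduction in error rates before vs. after bar-code verification,
# using the annual rates reported in the study.

before = {"wrong_patient": 17.4, "wrong_drug": 24.2}   # errors/year, pre
after = {"wrong_patient": 4.5, "wrong_drug": 20.3}     # errors/year, post

reduction = {k: round(100 * (before[k] - after[k]) / before[k], 1) for k in before}
print(reduction)   # wrong-patient errors fell ~74%, wrong-drug errors ~16%
```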
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
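The quoted repeat period of "slightly more than 136 years" is consistent with a 32-bit seconds counter, though the actual Geo-TimeCode format is not described here; a quick arithmetic check:

```python
# Back-of-the-envelope check of the claimed repeat period (assumption: a
# 32-bit seconds counter, which is NOT confirmed by the abstract): 2**32
# seconds is slightly more than 136 years, versus the 24-hour rollover of
# conventional video time codes.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
years_to_rollover = 2**32 / SECONDS_PER_YEAR
print(round(years_to_rollover, 1))   # → 136.1
```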
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) a blackbox approach that views the system as a black box and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system; 2) a whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and may therefore miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive.
ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
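The blackbox, grammar-driven generation of *all* inputs up to a size bound can be sketched with a toy grammar. This stands in for the SCL grammar and the JPF machinery, which are not reproduced here; the grammar and symbol names are invented for illustration.

```python
# Toy grammar-based input generation: enumerate every string of a tiny command
# grammar up to a recursion-depth bound (illustrative stand-in for SCL + JPF).

GRAMMAR = {
    "script": [["cmd"], ["cmd", "script"]],   # a script is one or more commands
    "cmd": [["SET"], ["GET"]],                # two terminal commands
}

def expand(symbol, depth):
    if symbol not in GRAMMAR:                 # terminal symbol
        return [[symbol]]
    if depth == 0:                            # bound the recursion
        return []
    results = []
    for production in GRAMMAR[symbol]:
        partials = [[]]
        for sym in production:                # cross-product over the production
            partials = [p + s for p in partials for s in expand(sym, depth - 1)]
        results.extend(partials)
    return results

scripts = sorted(" ".join(s) for s in expand("script", 3))
print(scripts)
```

Every legal script up to the bound is produced exactly once, which is what makes grammar-based generation attractive for coverage: the test set is exhaustive by construction within the stated limits.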
Serial turbo trellis coded modulation using a serially concatenated coder
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)
2010-01-01
Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.
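The encoder chain can be sketched structurally. The component codes below (a parity outer code, a fixed permutation, and an accumulator as the recursive inner code, with BPSK mapping) are illustrative stand-ins, not the patented SCTCM design.

```python
# Structural sketch of serial concatenation: outer code -> interleaver ->
# recursive inner code -> symbol mapper. All component choices are illustrative.

def outer_code(bits):                      # toy outer code: append even parity
    return bits + [sum(bits) % 2]

def interleave(bits, perm):                # permute the outer-coded bits
    return [bits[i] for i in perm]

def accumulator(bits):                     # recursive inner code: running XOR
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def map_bpsk(bits):                        # map coded bits to +/-1 symbols
    return [1 - 2 * b for b in bits]

data = [1, 0, 1, 1]
coded = outer_code(data)                   # 5 coded bits
perm = [4, 2, 0, 3, 1]                     # an arbitrary fixed permutation
symbols = map_bpsk(accumulator(interleave(coded, perm)))
print(symbols)
```

The recursion in the inner code (the running XOR state) is the structural feature the abstract highlights: it is what makes iterative decoding between the inner and outer SISO decoders effective.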
Bar Coding and Tracking in Pathology.
Hanna, Matthew G; Pantanowitz, Liron
2016-03-01
Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.
Uniform emergency codes: will they improve safety?
2005-01-01
There are pros and cons to uniform code systems, according to emergency medicine experts. Uniformity can be a benefit when ED nurses and other staff work at several facilities. It's critical that your staff understand not only what the codes stand for, but also what they must do when codes are called. If your state institutes a new system, be sure to hold regular drills to familiarize your ED staff.
Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M
2004-10-01
The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
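The interleaver's burst-spreading role can be shown with a toy depth-4 row/column interleaver over short codewords (the real system interleaves 255-symbol RS codewords, not 8-symbol ones):

```python
# Depth-4 block interleaver sketch: symbols of 4 codewords are written
# row-by-row and read out column-by-column, so a burst of up to 4 channel
# symbols lands in 4 different codewords. Toy 8-symbol codewords.

DEPTH, N = 4, 8

def interleave(codewords):                 # codewords: DEPTH lists of N symbols
    return [codewords[r][c] for c in range(N) for r in range(DEPTH)]

def deinterleave(stream):
    rows = [[] for _ in range(DEPTH)]
    for i, sym in enumerate(stream):
        rows[i % DEPTH].append(sym)
    return rows

# label each symbol "rc" by its codeword (row) and position (column)
cws = [[f"{r}{c}" for c in range(N)] for r in range(DEPTH)]
stream = interleave(cws)
burst = stream[0:4]                        # a 4-symbol channel burst
print(burst)                               # one symbol from each codeword
```

Because the burst is split one symbol per codeword, each RS decoder sees only a single extra symbol error instead of four, which is exactly how interleaving stretches the outer code's t = 16 correction capability across bursts.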
48 CFR 2001.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Publication and code arrangement. 2001.104-1 Section 2001.104-1 Federal Acquisition Regulations System NUCLEAR REGULATORY... 2001.104-1 Publication and code arrangement. (a) The NRCAR and its subsequent changes are: (1...
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O
2014-07-01
Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
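The mapping step described above, aligning codes from different national administrative systems onto shared concepts, can be illustrated with a minimal table-driven sketch. The concept name and the specific codes below are illustrative placeholders, not the actual GC Project mapping tables:

```python
# Hypothetical sketch of cross-system code mapping: codes from different
# national administrative coding systems are aligned onto one internal
# concept so that outcomes can be compared across countries.

CONCEPT_MAP = {
    "colorectal_resection": {
        "ICD-9-CM": {"45.7", "48.5"},   # illustrative US procedure codes
        "OPCS-4": {"H07", "H33"},       # illustrative UK procedure codes
    },
}

def to_concept(system, code):
    """Return the shared concept for a (coding system, code) pair, or None."""
    for concept, systems in CONCEPT_MAP.items():
        if code in systems.get(system, set()):
            return concept
    return None

assert to_concept("OPCS-4", "H33") == "colorectal_resection"
assert to_concept("ICD-9-CM", "99.99") is None   # unmapped code
```

Once every source code resolves to a shared concept, risk-adjusted models can be fitted on the pooled records regardless of which national system produced them.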
Diagnosis - Using automatic test equipment and artificial intelligence expert systems
NASA Astrophysics Data System (ADS)
Ramsey, J. E., Jr.
Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any different unit under test. Using converted ATLAS to LISP code allows the expert system to direct any ATE using ATLAS. The constraint propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system into Ada code and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output; this model was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
The additional impact of liaison psychiatry on the future funding of general hospital services.
Udoh, G; Afif, M; MacHale, S
2012-01-01
An accurate coding system is fundamental in determining Casemix, which is likely to become a major determinant of future funding of health care services. Our aim was to determine whether the Hospital Inpatient Enquiry (HIPE) system's assigned codes for psychiatric disorders were accurate and reflective of liaison psychiatric input into patients' care. The HIPE system's coding for psychiatric disorders was compared with psychiatrists' coding for the same patients over a prospective 6-month period, using ICD-10 diagnostic criteria. A total of 262 cases were reviewed, of which 135 (51%) were male and 127 (49%) were female. The mean age was 49 years, ranging from 16 to 87 years (SD 17.3). Our findings show a significant disparity between HIPE and psychiatrists' coding: only 94 (36%) of the HIPE-coded cases were compatible with the psychiatrists' coding. The commonest cause of incompatibility was the coding personnel's failure to code for a psychiatric disorder in the presence of one, 117 (69.9%); the others were coding for a different diagnosis, 36 (21%); coding for a psychiatric disorder in the absence of one, 11 (6.6%); and different sub-type and others, 2 (1.2%). HIPE data coded depression, 30 (11.5%), as the commonest diagnosis and general examination, 1 (0.4%), as the least, but failed to code for dementia, illicit drug use and somatoform disorder despite their being coded for by the psychiatrists. In contrast, the psychiatrists coded delirium, 46 (18%), and dementia, 1 (0.4%), as the commonest and the least diagnosed disorders respectively. Given the marked increase in case complexity associated with psychiatric co-morbidities, future funding streams are at risk of inadequate payment for services rendered.
Van Laere, Sven; Nyssen, Marc; Verbeke, Frank
2017-01-01
Clinical coding is a requirement to provide valuable data for billing, epidemiology and health care resource allocation. In sub-Saharan Africa, we observe a growing awareness of the need for coding of clinical data, not only in health insurances, but also in governments and the hospitals. Presently, coding systems in sub-Saharan Africa are often used for billing purposes. In this paper we consider the use of a nomenclature to also have a clinical impact. Often coding systems are assumed to be complex and too extensive to be used in daily practice. Here, we present a method for constructing a new nomenclature based on existing coding systems by considering a minimal subset in the sub-Saharan region. Evaluation of completeness will be done nationally using the requirements of national registries. The nomenclature requires an extension character for dealing with codes that have to be used for multiple registries. Hospitals will benefit most by using this extension character.
Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.
Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen
2018-01-01
Timely and correct sample collection is closely related to patient safety. The sample error rate was 11.1% during January to April 2016, owing to mislabeled patient information and wrong sample containers. We developed a bar code "Specimen Identification System" through process reengineering of the TRM, using bar code scanners, adding sample container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a green (paperless) environment.
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
Serial-data correlator/code translator
NASA Technical Reports Server (NTRS)
Morgan, L. E.
1977-01-01
System, consisting of sampling flip flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.
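A software analogue of this hardware scheme can be sketched as a memory lookup: the "memory" maps each sampled pattern to an NRZ bit, and patterns outside the acceptance set are flagged as dropouts. The two-sample Manchester-style acceptance patterns below are an assumed example, not the original code patterns:

```python
# Sketch of the correlator/translator: a lookup table plays the role of
# the RAM/ROM, mapping accepted two-sample patterns to NRZ bits and
# flagging anything else as a data dropout.

ACCEPTANCE = {"01": 0, "10": 1}   # accepted pattern -> NRZ bit

def correlate_translate(samples):
    """Return one (nrz_bit, dropout_flag) pair per two-sample symbol."""
    out = []
    for i in range(0, len(samples) - 1, 2):
        pattern = samples[i:i + 2]
        if pattern in ACCEPTANCE:
            out.append((ACCEPTANCE[pattern], False))
        else:
            out.append((None, True))   # data dropout identified
    return out

# "01" -> 0, "10" -> 1, "00" -> dropout
assert correlate_translate("011000") == [(0, False), (1, False), (None, True)]
```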
Construction of a new regular LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Xu, Liang; Huang, Sheng
2013-05-01
A novel construction method of the check matrix for the regular low density parity check (LDPC) code is proposed. The novel regular systematically constructed Gallager (SCG)-LDPC(3969,3720) code with the code rate of 93.7% and the redundancy of 6.69% is constructed. The simulation results show that the net coding gain (NCG) and the distance from the Shannon limit of the novel SCG-LDPC(3969,3720) code can respectively be improved by about 1.93 dB and 0.98 dB at the bit error rate (BER) of 10^-8, compared with those of the classic RS(255,239) code in the ITU-T G.975 recommendation and the LDPC(32640,30592) code in the ITU-T G.975.1 recommendation with the same code rate of 93.7% and the same redundancy of 6.69%. Therefore, the proposed regular SCG-LDPC(3969,3720) code has excellent performance, and is more suitable for high-speed long-haul optical transmission systems.
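The notion of a regular parity-check matrix can be illustrated at toy scale with Gallager's classic construction. This sketch only demonstrates the "regular" property (constant row and column weight); it is not the SCG construction proposed in the paper:

```python
# Toy Gallager-style regular LDPC parity-check matrix: wc stacked
# blocks, each a column-permuted base block with a single 1 per column
# and wr 1s per row, giving constant column weight wc and row weight wr.
import random

def gallager_H(n, wc, wr, seed=0):
    """Build a regular parity-check matrix H for block length n."""
    assert n % wr == 0
    rows_per_block = n // wr
    rng = random.Random(seed)
    H = []
    for _ in range(wc):
        perm = list(range(n))
        rng.shuffle(perm)           # random column permutation per block
        for r in range(rows_per_block):
            row = [0] * n
            for c in perm[r * wr:(r + 1) * wr]:
                row[c] = 1
            H.append(row)
    return H

H = gallager_H(n=12, wc=3, wr=4)
assert all(sum(row) == 4 for row in H)                    # row weight wr
assert all(sum(r[c] for r in H) == 3 for c in range(12))  # column weight wc
```

Each block touches every column exactly once, so stacking wc blocks yields column weight wc by construction; a real design would additionally control girth and rate, as the paper's SCG method does.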
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
Performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolution code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and code rate ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. To achieve a BER of 10^-3, 10 dB SNR is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme likewise requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4×4 MIMO-OFDM improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a 7 dB saving relative to 2×2 MIMO-OFDM, and a significant power saving relative to the SISO-OFDM system.
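The decibel savings quoted above translate to linear power factors by the standard dB conversion, sketched here as a quick numeric check (the conversion itself, not a result from the paper):

```python
# Standard dB-to-linear power ratio: ratio = 10^(dB/10).
def db_to_linear(db):
    return 10 ** (db / 10)

assert abs(db_to_linear(3) - 2.0) < 0.01      # a 3 dB saving halves power
assert abs(db_to_linear(10) - 10.0) < 1e-9    # 10 dB is a factor of 10
```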
The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions
NASA Astrophysics Data System (ADS)
Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.
2016-01-01
A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to the alpha particle, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0, and it has since been used extensively in nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions, and it was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium, and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions, and β-delayed neutron emission are mentioned.
48 CFR 2501.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Publication and code arrangement. 2501.104-1 Section 2501.104-1 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION... code arrangement. (a) The NSFAR is published in the daily issues of the Federal Register and, in...
48 CFR 2501.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Publication and code arrangement. 2501.104-1 Section 2501.104-1 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION... code arrangement. (a) The NSFAR is published in the daily issues of the Federal Register and, in...
48 CFR 2501.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Publication and code arrangement. 2501.104-1 Section 2501.104-1 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION... code arrangement. (a) The NSFAR is published in the daily issues of the Federal Register and, in...
48 CFR 3001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Publication and code arrangement. 3001.105-1 Section 3001.105-1 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Purpose, Authority, Issuance 3001.105-1 Publication and code arrangement. (a) The HSAR is published in: (1...
48 CFR 1001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Publication and code arrangement. 1001.105-1 Section 1001.105-1 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY....105-1 Publication and code arrangement. The DTAR and its subsequent changes will be published in the...
48 CFR 1301.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Publication and code arrangement. 1301.105-1 Section 1301.105-1 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE... Publication and code arrangement. (a) The CAR is published in the Federal Register, in cumulative form in the...
48 CFR 1001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Publication and code arrangement. 1001.105-1 Section 1001.105-1 Federal Acquisition Regulations System DEPARTMENT OF THE TREASURY....105-1 Publication and code arrangement. The DTAR and its subsequent changes will be published in the...
48 CFR 3001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Publication and code arrangement. 3001.105-1 Section 3001.105-1 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Purpose, Authority, Issuance 3001.105-1 Publication and code arrangement. (a) The HSAR is published in: (1...
48 CFR 3001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Publication and code arrangement. 3001.105-1 Section 3001.105-1 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Purpose, Authority, Issuance 3001.105-1 Publication and code arrangement. (a) The HSAR is published in: (1...
48 CFR 1901.104-1 - Publication and code arrangement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Publication and code arrangement. 1901.104-1 Section 1901.104-1 Federal Acquisition Regulations System BROADCASTING BOARD OF..., Issuance 1901.104-1 Publication and code arrangement. (a) The IAAR is published in the Federal Register and...
48 CFR 3001.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Publication and code arrangement. 3001.105-1 Section 3001.105-1 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Purpose, Authority, Issuance 3001.105-1 Publication and code arrangement. (a) The HSAR is published in: (1...
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
48 CFR 1301.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Publication and code arrangement. 1301.105-1 Section 1301.105-1 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE... Publication and code arrangement. (a) The CAR is published in the Federal Register, in cumulative form in the...
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments
ERIC Educational Resources Information Center
Moran, Galia S.; Diamond, Gary M.
2006-01-01
The cognitive constructions coding system (CCCS) was designed for coding client's expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…
Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System
NASA Technical Reports Server (NTRS)
Taft, James R.
2000-01-01
The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the inhouse Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs.
It is one of the largest consumers of NASA supercomputing cycles, and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state of the art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35 million point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems rapidly solving very large simulations in a production environment.
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of developing a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require a detailed analysis and development of models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability.
It is shown that the program of development and practical application of the code will, in the near future, allow computations to be carried out that analyze the safety of prospective NPP projects at a qualitatively higher level.
Computer optimization of reactor-thermoelectric space power systems
NASA Technical Reports Server (NTRS)
Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.
1973-01-01
A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.
Potential Effects of Leak-Before-Break on Light Water Reactor Design.
1985-08-26
Boiler and Pressure Vessel Code. In fact, section 3 of that code was created for nuclear applications. This... Boiler and Pressure Vessel Code. The only major change which leak-before-break would require in these analyses would be that all piping to be considered...XI of the ASME Boiler and Pressure Vessel Code, and is already required for all Class I piping systems in the plant. Class I systems are those
Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system
NASA Astrophysics Data System (ADS)
Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.
2017-11-01
This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW (APD) showed a minimum effective received power (Psr) of -71 dBm at the receiver side, compared with -61 dBm for the 2-D MDW (PIN). The results show that 2-D MDW (APD) has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also confirm the capability of the MDW code to suppress PIIN and MAI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki
A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through analyses of the experiment on water vapor discharging into liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict the occurrence of tube failure. The numerical models integrated into the TACT code were verified through related experiments. The RELAP5 code evaluates the thermal-hydraulic behavior of water inside the tube; its original heat transfer correlations were corrected for a tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of failure propagation.
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
The Library Systems Act and Rules for Administering the Library Systems Act.
ERIC Educational Resources Information Center
Texas State Library, Austin. Library Development Div.
This document contains the Texas Library Systems Act and rules for administering the Library Systems Act. Specifically, it includes the following documents: Texas Library Systems Act; Summary of Codes; Texas Administrative Code: Service Complaints and Protest Procedure; Criteria for Texas Library System Membership; and Certification Requirements…
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and ADP 22" rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.
The effect of total noise on two-dimension OCDMA codes
NASA Astrophysics Data System (ADS)
Dulaimi, Layth A. Khalil Al; Badlishah Ahmed, R.; Yaakob, Naimah; Aljunid, Syed A.; Matem, Rima
2017-11-01
In this research, we evaluate the effect of total noise on the performance of a two-dimensional (2-D) optical code-division multiple-access (OCDMA) system using the 2-D Modified Double Weight (MDW) code under various link parameters, considering the impact of multiple-access interference (MAI) and other noise sources on system performance. The 2-D MDW code is compared mathematically with other codes that use similar techniques. We analyzed and optimized the data rate and effective received power. The performance and optimization of the MDW code in an OCDMA system are reported; the bit error rate (BER) can be significantly improved when the desired 2-D MDW code parameters, especially the cross-correlation properties, are selected, reducing MAI and phase-induced intensity noise (PIIN) in incoherent OCDMA. The analysis permits a thorough understanding of the impact of PIIN, shot, and thermal noise on 2-D MDW OCDMA system performance. PIIN is the main noise factor in the OCDMA network.
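As a concrete illustration of the kind of analysis these SAC-OCDMA abstracts describe, the following sketch applies the Gaussian approximation commonly used in such studies, BER = (1/2)·erfc(√(SNR/8)), to a noise budget with PIIN, shot, and thermal terms. The numeric values are illustrative assumptions, not figures taken from either paper.

```python
import math

def ber_from_snr(snr):
    # Gaussian approximation commonly used in SAC-OCDMA analysis:
    # BER = (1/2) * erfc(sqrt(SNR / 8))
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# Illustrative noise budget (hypothetical values, not from the papers):
signal_current = 1e-6    # photocurrent from the desired user (A)
shot_noise_var = 1e-16   # shot-noise variance (A^2)
piin_var = 5e-16         # phase-induced intensity noise variance (A^2)
thermal_var = 2e-16      # receiver thermal-noise variance (A^2)

# Total SNR is signal power over the sum of the noise variances;
# suppressing PIIN (the dominant term here) directly raises the SNR.
snr = signal_current**2 / (shot_noise_var + piin_var + thermal_var)
print(f"SNR = {snr:.4g}, BER = {ber_from_snr(snr):.3g}")
```

Because BER falls steeply with SNR, a code that suppresses the PIIN variance term lets the same BER target (e.g. 10^-9) be met with less received power, which is the trade-off both abstracts report.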
2014-01-01
Background The pediatric complex chronic conditions (CCC) classification system, developed in 2000, requires revision to accommodate the International Classification of Disease 10th Revision (ICD-10). To update the CCC classification system, we incorporated ICD-9 diagnostic codes that had been either omitted or incorrectly specified in the original system, and then translated between ICD-9 and ICD-10 using General Equivalence Mappings (GEMs). We further reviewed all codes in the ICD-9 and ICD-10 systems to include both diagnostic and procedural codes indicative of technology dependence or organ transplantation. We applied the provisional CCC version 2 (v2) system to death certificate information and 2 databases of health utilization, reviewed the resulting CCC classifications, and corrected any misclassifications. Finally, we evaluated performance of the CCC v2 system by assessing: 1) the stability of the system between ICD-9 and ICD-10 codes using data which included both ICD-9 codes and ICD-10 codes; 2) the year-to-year stability before and after ICD-10 implementation; and 3) the proportions of patients classified as having a CCC in both the v1 and v2 systems. Results The CCC v2 classification system consists of diagnostic and procedural codes that incorporate a new neonatal CCC category as well as domains of complexity arising from technology dependence or organ transplantation. CCC v2 demonstrated close comparability between ICD-9 and ICD-10 and did not detect significant discontinuity in temporal trends of death in the United States. Compared to the original system, CCC v2 resulted in a 1.0% absolute (10% relative) increase in the number of patients identified as having a CCC in a national hospitalization dataset, and a 0.4% absolute (24% relative) increase in a national emergency department dataset. Conclusions The updated CCC v2 system is comprehensive and multidimensional, and provides a necessary update to accommodate widespread implementation of ICD-10.
PMID:25102958
Matrix-Product-State Algorithm for Finite Fractional Quantum Hall Systems
NASA Astrophysics Data System (ADS)
Liu, Zhao; Bhatt, R. N.
2015-09-01
Exact diagonalization is a powerful tool to study fractional quantum Hall (FQH) systems. However, its capability is limited by the exponentially increasing computational cost. In order to overcome this difficulty, density-matrix renormalization-group (DMRG) algorithms were developed for much larger system sizes. Very recently, it was realized that some model FQH states have an exact matrix-product-state (MPS) representation. Motivated by this, here we report an MPS code, closely related to but distinct from the traditional DMRG language, for finite FQH systems on the cylinder geometry. By representing the many-body Hamiltonian as a matrix-product operator (MPO) and using single-site updates and density-matrix correction, we show that our code can efficiently search for the ground state of various FQH systems. We also compare the performance of our code with traditional DMRG. The possible generalization of our code to infinite FQH systems and other physical systems is also discussed.
Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols
2010-08-01
arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don...sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding...amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems
CELCAP: A Computer Model for Cogeneration System Analysis
NASA Technical Reports Server (NTRS)
1985-01-01
A description of the CELCAP cogeneration analysis program is presented, including a detailed description of the methodology used by the Naval Civil Engineering Laboratory in developing the CELCAP code and the procedures for analyzing cogeneration systems for a given user. The four engines modeled in CELCAP are: gas turbine with exhaust heat boiler, diesel engine with waste heat boiler, single automatic-extraction steam turbine, and back-pressure steam turbine. Both design-point and part-load performance are taken into account in the engine models. The load model describes how the hourly electric and steam demand of the user is represented by 24-hour profiles. The economic model describes how the annual and life-cycle operating costs, which include the costs of fuel, purchased electricity, and operation and maintenance of engines and boilers, are calculated. The CELCAP code structure and principal functions are described to show how the various components of the code are related to each other. Three examples of the application of CELCAP illustrate the versatility of the code, representing cases of system selection, system modification, and system optimization.
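The economic model described above can be sketched as a simple computation: annual operating cost as the sum of fuel, purchased electricity, and O&M, and life-cycle cost as the present worth of that annual cost. This is a simplified illustration with hypothetical prices and quantities; CELCAP itself works from hourly load profiles and engine part-load models.

```python
def annual_operating_cost(fuel_mmbtu, fuel_price, purchased_kwh, elec_price, om_cost):
    """Annual operating cost as CELCAP's economic model describes it:
    fuel + purchased electricity + operation and maintenance.
    (A simplified sketch; CELCAP builds these from 24-hour demand profiles.)"""
    return fuel_mmbtu * fuel_price + purchased_kwh * elec_price + om_cost

def life_cycle_cost(annual_cost, discount_rate, years):
    """Present worth of a constant annual cost over the study period."""
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical plant: 50,000 MMBtu/yr fuel at $8/MMBtu, 1.2 GWh purchased
# electricity at $0.10/kWh, $150k/yr O&M, 20-year study at 7% discount rate.
annual = annual_operating_cost(50_000, 8.0, 1_200_000, 0.10, 150_000.0)
print(f"annual = ${annual:,.0f}, life-cycle = ${life_cycle_cost(annual, 0.07, 20):,.0f}")
```

Comparing the life-cycle figure across candidate engine configurations is what drives the system-selection and system-optimization examples the abstract mentions.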
NASA Astrophysics Data System (ADS)
Zhou, Abel; White, Graeme L.; Davidson, Rob
2018-02-01
Anti-scatter grids are commonly used in x-ray imaging systems to reduce scatter radiation reaching the image receptor. Anti-scatter grid performance and validation can be simulated through use of Monte Carlo (MC) methods. Our recently reported work has modified existing MC codes resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons in grids from the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (T_p), scatter (T_s), and total (T_t) radiation determined using this new MC code system have strong agreement with the experimental results and the results reported in the literature. T_p, T_s, T_t, and SPR determined in this new MC simulation code system are valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating the designs of grids.
An Overview of the Greyscales Lethality Assessment Methodology
2011-01-01
The code has already been integrated into the Weapon Systems Division MECA and DUEL missile engagement simulations, and it is capable of being incorporated into a variety of other simulations.
Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service
Hatano, Kenji; Ohe, Kazuhiko
2003-01-01
An information retrieval system for the Japanese Standard Disease-Code Master using an XML Web Service has been developed. XML Web Services are a new distributed-processing approach built on standard internet technologies. Through the seamless remote method invocation of an XML Web Service, users are able to get the latest disease-code master information from their rich desktop applications or internet web sites that refer to this service. PMID:14728364
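The remote method invocation described above is typically carried by a SOAP envelope posted over HTTP. The sketch below builds such a request with the standard library; the service namespace, operation name (`searchDiseaseCode`), and parameter name are hypothetical, since the abstract does not give the service's actual WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace and operation -- the real disease-code
# master service's interface is not specified in the abstract.
SVC_NS = "urn:example:disease-code-master"

def build_request(keyword):
    """Build a SOAP request envelope asking for disease codes matching a keyword."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}searchDiseaseCode")
    ET.SubElement(op, f"{{{SVC_NS}}}keyword").text = keyword
    return ET.tostring(env, encoding="unicode")

xml_text = build_request("diabetes")
print(xml_text)
```

In practice a client would POST this envelope to the service endpoint and parse the returned envelope the same way; the point of the XML Web Service design is that any such standards-based client can consume the master without platform-specific code.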
Model-Driven Engineering: Automatic Code Generation and Beyond
2015-03-01
and Weblogic as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on...code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial...Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional
Edge-diffraction effects in RCS predictions and their importance in systems analysis
NASA Astrophysics Data System (ADS)
Friess, W. F.; Klement, D.; Ruppel, M.; Stein, Volker
1996-06-01
In developing RCS prediction codes, a variety of physical effects, such as edge diffraction, have to be considered, with the consequence that the computational effort increases considerably. This fact limits the field of application of such codes, especially if the RCS data serve as input parameters for system simulators, which very often need these data for a large number of observation angles and/or frequencies. Conversely, the results of a system analysis can be used to estimate the relevance of physical effects from a system viewpoint and to rank them according to their magnitude. This paper evaluates the importance for systems analysis of RCS predictions containing an edge-diffracted field. A double dihedral with strongly depolarizing behavior and a generic airplane design containing many arbitrarily oriented edges are used as test structures. Data of the scattered field are generated by the RCS computer code SIGMA with and without edge diffraction effects. These data are submitted to the code DORA to determine radar range and radar detectability, and to a SAR simulator code to generate SAR imagery. In both cases special scenarios are assumed. The essential features of the computer codes in their current state are described, and the results are presented and discussed from a systems viewpoint.
Single-channel voice-response-system program documentation volume I : system description
DOT National Transportation Integrated Search
1977-01-01
This report documents the design and implementation of a Voice Response System (VRS) using Adaptive Differential Pulse Code Modulation (ADPCM) voice coding. Implemented on a Digital Equipment Corporation PDP-11/20, this VRS supports a single ...
NASA Astrophysics Data System (ADS)
Jos, Sujit; Kumar, Preetam; Chakrabarti, Saswat
Orthogonal and quasi-orthogonal codes are an integral part of DS-CDMA-based cellular systems. Orthogonal codes are ideal for use in perfectly synchronous scenarios such as downlink cellular communication. Quasi-orthogonal codes are preferred over orthogonal codes in uplink communication, where perfect synchronization cannot be achieved. In this paper, we compare orthogonal and quasi-orthogonal codes in the presence of timing synchronization error. This gives insight into the synchronization demands of DS-CDMA systems employing the two classes of sequences. The synchronization error considered is smaller than the chip duration. Monte-Carlo simulations have been carried out to verify the analytical and numerical results.
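The sensitivity of orthogonal codes to timing error can be sketched numerically: Walsh-Hadamard rows are exactly orthogonal when synchronized, but a sub-chip delay mixes adjacent chips and leaves a residual cross-correlation. This is an illustrative pure-Python model assuming rectangular chip pulses and a cyclic delay, not the paper's actual simulation setup.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2);
    its rows are mutually orthogonal +/-1 spreading codes."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def xcorr_with_offset(a, b, tau):
    """Normalized cross-correlation of code a with code b delayed by a
    fractional chip offset tau (0 <= tau < 1), assuming rectangular chips
    and a cyclic delay: each received chip is a mix of two adjacent chips."""
    n = len(a)
    delayed = [(1 - tau) * b[k] + tau * b[k - 1] for k in range(n)]
    return sum(x * y for x, y in zip(a, delayed)) / n

H = hadamard(8)
c1, c2 = H[2], H[3]
print(xcorr_with_offset(c1, c2, 0.0))   # synchronized: exactly orthogonal
print(xcorr_with_offset(c1, c2, 0.25))  # quarter-chip offset: residual MAI
```

The nonzero value at a quarter-chip offset is the multiple-access interference term that motivates quasi-orthogonal designs for the unsynchronized uplink.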
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Optimized iterative decoding method for TPC coded CPM
NASA Astrophysics Data System (ADS)
Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei
2018-05-01
Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding loop. However, the improved system converges poorly. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show that our method is effective in improving convergence performance.
Hjerpe, Per; Boström, Kristina Bengtsson; Lindblad, Ulf; Merlo, Juan
2012-12-01
To investigate the impact on ICD coding behaviour of a new case-mix reimbursement system based on coded patient diagnoses. The main hypothesis was that after the introduction of the new system the coding of chronic diseases like hypertension and cancer would increase and the variance in propensity for coding would decrease on both physician and health care centre (HCC) levels. Cross-sectional multilevel logistic regression analyses were performed in periods covering the time before and after the introduction of the new reimbursement system. Skaraborg primary care, Sweden. All patients (n = 76 546 to 79 826) 50 years of age and older visiting 468 to 627 physicians at the 22 public HCCs in five consecutive time periods of one year each. Registered codes for hypertension and cancer diseases in Skaraborg primary care database (SPCD). After the introduction of the new reimbursement system the adjusted prevalence of hypertension and cancer in SPCD increased from 17.4% to 32.2% and from 0.79% to 2.32%, respectively, probably partly due to an increased diagnosis coding of indirect patient contacts. The total variance in the propensity for coding declined simultaneously at the physician level for both diagnosis groups. Changes in the healthcare reimbursement system may directly influence the contents of a research database that retrieves data from clinical practice. This should be taken into account when using such a database for research purposes, and the data should be validated for each diagnosis.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
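The grand-canonical Monte Carlo method at the core of such a code can be illustrated with a toy version of the Metropolis acceptance rule. ParaGrandMC itself is a parallel FORTRAN code for interacting alloy systems; the sketch below is a deliberately minimal Python analogue for a non-interacting lattice gas, where the exact site occupancy is known and can be checked.

```python
import math
import random

def gcmc_lattice_gas(n_sites, beta_mu, n_steps, seed=1):
    """Single-site grand-canonical Metropolis moves for a NON-interacting
    lattice gas: toggle the occupancy of a random site, accepting insertion
    with min(1, exp(+beta*mu)) and deletion with min(1, exp(-beta*mu)).
    (An interacting model would add the energy change to the exponent.)"""
    rng = random.Random(seed)
    occ = [0] * n_sites
    samples = []
    for step in range(n_steps):
        i = rng.randrange(n_sites)
        if occ[i] == 0:  # attempt particle insertion
            if rng.random() < min(1.0, math.exp(beta_mu)):
                occ[i] = 1
        else:            # attempt particle deletion
            if rng.random() < min(1.0, math.exp(-beta_mu)):
                occ[i] = 0
        if step > n_steps // 2:  # sample mean occupancy after burn-in
            samples.append(sum(occ) / n_sites)
    return sum(samples) / len(samples)

theory = 1.0 / (1.0 + math.exp(-1.0))  # exact occupancy for beta*mu = 1
print(gcmc_lattice_gas(200, 1.0, 40000), "vs theory", theory)
```

The simulated mean occupancy converges to the exact grand-canonical result; in a production code like ParaGrandMC the same acceptance logic runs with real interatomic potentials, multiple species, and parallel decomposition.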
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results of a study to evaluate the performance and implementation complexity of concatenated and hybrid coding systems for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length-8, rate-1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate-1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.
Guffanti, Marianne C.; Miller, Thomas
2013-01-01
An alert-level system for communicating volcano hazard information to the aviation industry was devised by the Alaska Volcano Observatory (AVO) during the 1989–1990 eruption of Redoubt Volcano. The system uses a simple, color-coded ranking that focuses on volcanic ash emissions: Green—normal background; Yellow—signs of unrest; Orange—precursory unrest or minor ash eruption; Red—major ash eruption imminent or underway. The color code has been successfully applied on a regional scale in Alaska for a sustained period. During 2002–2011, elevated color codes were assigned by AVO to 13 volcanoes, eight of which erupted; for that decade, one or more Alaskan volcanoes were at Yellow on 67 % of days and at Orange or Red on 12 % of days. As evidence of its utility, the color code system is integrated into procedures of agencies responsible for air-traffic management and aviation meteorology in Alaska. Furthermore, it is endorsed as a key part of globally coordinated protocols established by the International Civil Aviation Organization to provide warnings of ash hazards to aviation worldwide. The color code and accompanying structured message (called a Volcano Observatory Notice for Aviation) comprise an effective early-warning message system according to the United Nations International Strategy for Disaster Reduction. The aviation color code system currently is used in the United States, Russia, New Zealand, Iceland, and partially in the Philippines, Papua New Guinea, and Indonesia. Although there are some barriers to implementation, with continued education and outreach to Volcano Observatories worldwide, greater use of the aviation color code system is achievable.
Orchard, John; Rae, Katherine; Brooks, John; Hägglund, Martin; Til, Lluis; Wales, David; Wood, Tim
2010-01-01
The Orchard Sports Injury Classification System (OSICS) is one of the world's most commonly used systems for coding injury diagnoses in sports injury surveillance systems. Its major strengths are its wide usage, its codes specific to sports medicine, and that it is free to use. Literature searches and stakeholder consultations were made to assess the uptake of OSICS and to develop new versions. OSICS was commonly used in the sports of football (soccer), Australian football, rugby union, cricket and tennis. It is referenced in international papers in three sports and used in four commercially available computerised injury management systems. Suggested injury categories for the major sports are presented. New versions OSICS 9 (three digit codes) and OSICS 10.1 (four digit codes) are presented. OSICS is a potentially helpful component of a comprehensive sports injury surveillance system, but many other components are required. Choices made in developing these components should ideally be agreed upon by groups of researchers in consensus statements. PMID:24198559
Enhanced fault-tolerant quantum computing in d-level systems.
Campbell, Earl T
2014-12-05
Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper presents an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)
National Underground Mines Inventory
1983-10-01
system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a...four characters designate field office. 17-18 State Code Pic 99 FIPS code for state in which mine is located. 19-21 County Code Pic 999 FIPS code for... Designates a general product class based on SIC code. 28-29 Mine Type Pic 99 Metal/Nonmetal mine type code. Based on subunit operations code and canvass code
Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems
2003-06-01
Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder...for AWGN channels has long been studied. There are well-known soft-decision codes, like turbo codes and LDPC codes, that can approach capacity to...bits) low-density parity-check (LDPC) code. 2. The coded bits are randomly interleaved so that nearby bits go through different sub-channels, and are
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S is a computer code developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that include simple hinged-flap systems. The code rapidly estimates the longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. It was developed in FORTRAN V on a CDC 6000 computer system and ported to the MS-DOS environment.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high-quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit the generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
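For readers unfamiliar with the target domain, the estimator core that such a synthesis system generates from a state-space specification looks roughly like the hand-written scalar sketch below. This is a generic illustration of a Kalman predict/update cycle, not AUTOFILTER's actual output, and the tracking data are made up.

```python
def kalman_step(x, p, z, q, r, a=1.0, h=1.0):
    """One predict/update cycle of a scalar Kalman filter (a: state
    transition, h: observation model, q/r: process/measurement noise
    variances) -- the kind of code a filter synthesis system emits."""
    # Predict: propagate the state estimate and its variance
    x_pred = a * x
    p_pred = a * p * a + q
    # Update: blend the prediction with the measurement z
    k = p_pred * h / (h * p_pred * h + r)   # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# Track a constant signal of 5.0 from noisy measurements (illustrative data).
x, p = 0.0, 1.0
for z in [5.3, 4.8, 5.1, 4.9, 5.2]:
    x, p = kalman_step(x, p, z, q=1e-4, r=0.1)
print(round(x, 2), round(p, 4))
```

The point of the paper is that real applications need to interleave user-specified fragments (custom measurement preprocessing, mode switching between several such filters) with this synthesized core, which is what the described specification extension enables.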
Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F
1998-01-01
GALEN has developed a language-independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applies the modelling and the tools to the development or updating of coding systems for surgical procedures in different national coding centres co-operating within the European Federation of Coding Centres (EFCC) to create a language-independent knowledge repository for multicultural Europe. We used an integrated set of artificial-intelligence terminology tools, the CLAssification Manager workbench, to process French professional medical language rubrics into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians with the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
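The squaring-and-demultiplexing idea can be checked with a toy numeric model: Hadamard rows satisfy the stated property that the elementwise product of two codes is a third orthogonal code, so correlating the squared combined signal against that product code isolates the cross-term (the visibility). This is a real-valued sketch of the principle only; the actual system uses two-bit phase-shifter codes on complex RF signals.

```python
def hadamard(n):
    """Sylvester Hadamard codes: rows are mutually orthogonal +/-1 sequences,
    and the elementwise product of any two rows is another row."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(8)
w1, w2 = H[1], H[2]                       # codes applied in the two front-ends
w3 = [a * b for a, b in zip(w1, w2)]      # their product: a third orthogonal code

s1, s2 = 0.7, -0.4                        # antenna signals, constant over one code period
combined = [w1[k] * s1 + w2[k] * s2 for k in range(8)]  # power-combined in the array

squared = [y * y for y in combined]       # square-law operation after digitization
# Correlating the squared signal with w3 cancels the s1^2 and s2^2 terms
# (they ride on w3, which sums to zero) and leaves 2*s1*s2 per chip.
visibility = sum(w3[k] * squared[k] for k in range(8)) / (2 * 8)
print(visibility, "expected", s1 * s2)
```

The recovered value equals s1*s2, i.e. the correlation between the two antenna signals, which is exactly the visibility sample an interferometric imager needs.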
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
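The demultiplexing step shared by all such barcoding schemes, assigning each read to the nearest barcode and declaring a read loss on ambiguity, can be sketched with plain Hamming-distance decoding. The barcodes below are made-up toy sequences, not the LDPC construction from the paper.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def assign_read(read, barcodes, max_dist=1):
    """Return the index of the unique nearest barcode, or None (read loss)
    when the best distance is tied or exceeds max_dist."""
    dists = [hamming(read, bc) for bc in barcodes]
    best = min(dists)
    if best > max_dist or dists.count(best) > 1:
        return None
    return dists.index(best)

barcodes = ["ACGTACGT", "TTGCAGCA", "GGATCCTA"]
print(assign_read("ACGAACGT", barcodes))  # one mismatch -> 0
print(assign_read("TTTTTTTT", barcodes))  # too far from all -> None
```

A sample misidentification occurs when mismatches carry a read inside the decoding sphere of the wrong barcode, which is why the minimum distance of the code (and hence the code family chosen) governs the error rates quoted above.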
Automated Coding Software: Development and Use to Enhance Anti-Fraud Activities
Garvin, Jennifer H.; Watzlaf, Valerie; Moeini, Sohrab
2006-01-01
This descriptive research project identified characteristics of automated coding systems that have the potential to detect improper coding and to minimize improper or fraudulent coding practices in the setting of automated coding used with the electronic health record (EHR). Recommendations were also developed for software developers and users of coding products to maximize anti-fraud practices. PMID:17238546
An approach to the origin of self-replicating system. I - Intermolecular interactions
NASA Technical Reports Server (NTRS)
Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.
1978-01-01
The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.
EBT reactor systems analysis and cost code: description and users guide (Version 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoro, R.T.; Uckan, N.A.; Barnes, J.M.
1984-06-01
An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
Introduction to the Natural Anticipator and the Artificial Anticipator
NASA Astrophysics Data System (ADS)
Dubois, Daniel M.
2010-11-01
This short communication introduces the concept of the anticipator, one who anticipates, in the framework of computing anticipatory systems. The definition of anticipation deals with the concept of a program. Indeed, the word program comes from "pro-gram", meaning "to write before", by anticipation, and denotes a plan for the programming of a mechanism, or a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, such as genes or behavioural responses, that is part of an organism. Any natural or artificial program is thus related to anticipatory rewriting systems, as shown in this paper. All the cells in the body, and the neurons in the brain, are programmed by the anticipatory genetic code, DNA, in a low-level language with four signs. The programs in computers are also computing anticipatory systems. It will be shown, on the one hand, that the genetic code DNA is a natural anticipator. As demonstrated by Nobel laureate McClintock [8], genomes are programmed. The fundamental program deals with the DNA genetic code. The properties of DNA consist in self-replication and self-modification. The self-replicating process leads to reproduction of the species, while the self-modifying process leads to new species, or to evolution and adaptation in existing ones. The genetic code DNA keeps its instructions in memory in the DNA coding molecule. The genetic code DNA is a rewriting system, from the DNA coding molecule to the DNA template molecule; the DNA template molecule is in turn a rewriting system to the messenger RNA molecule. The information is not destroyed during the execution of the rewriting program. On the other hand, it will be demonstrated that the Turing machine is an artificial anticipator. The Turing machine is a rewriting system: the head reads and writes, modifying the content of the tape, and the information is destroyed during the execution of the program. This is an irreversible process: the input data are lost.
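The contrast drawn above, a rewriting step that destroys the symbol it overwrites, can be illustrated with a toy Turing-style machine. This is a hypothetical example for illustration, not a construction from the paper.

```python
def run(tape, rules, state="S", pos=0, blank="_", max_steps=100):
    """Tiny Turing-style rewriting machine: rules map (state, symbol) to
    (new_symbol, move, new_state). Each write overwrites the old symbol."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "HALT":
            break
        sym = tape[pos] if pos < len(tape) else blank
        if pos >= len(tape):
            tape.append(blank)
        new_sym, move, state = rules[(state, sym)]
        tape[pos] = new_sym          # irreversible: the read symbol is lost
        pos += 1 if move == "R" else -1
    return "".join(tape)

# An eraser: replaces every 1 with a blank, halting at the first blank.
rules = {("S", "1"): ("_", "R", "S"), ("S", "_"): ("_", "R", "HALT")}
print(run("111", rules))  # -> "____"
```

After the run, the original input cannot be recovered from the tape, which is the irreversibility the communication contrasts with DNA's non-destructive rewriting into messenger RNA.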
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
...), Row Arrangement (Code 557), Sprinkler System (Code 442), Tree/Shrub Site Preparation (Code 490), Waste.... Tree/Shrub Site Preparation (Code 490)--Only minor changes were made to the standard including...
Amoroso, P J; Smith, G S; Bell, N S
2000-04-01
Accurate injury cause data are essential for injury prevention research. U.S. military hospitals, unlike civilian hospitals, use the NATO STANAG system for cause-of-injury coding. Reported deficiencies in civilian injury cause data suggested a need to specifically evaluate the STANAG. The Total Army Injury and Health Outcomes Database (TAIHOD) was used to evaluate worldwide Army injury hospitalizations, especially STANAG Trauma, Injury, and Place of Occurrence coding. We conducted a review of hospital procedures at Tripler Army Medical Center (TAMC) including injury cause and intent coding, potential crossover between acute injuries and musculoskeletal conditions, and data for certain hospital patients who are not true admissions. We also evaluated the use of free-text injury comment fields in three hospitals. Army-wide review of injury records coding revealed full compliance with cause coding, although nonspecific codes appeared to be overused. A small but intensive single hospital records review revealed relatively poor intent coding but good activity and cause coding. Data on specific injury history were present on most acute injury records and 75% of musculoskeletal conditions. Place of Occurrence coding, although inherently nonspecific, was over 80% accurate. Review of text fields produced additional details of the injuries in over 80% of cases. STANAG intent coding specificity was poor, while coding of cause of injury was at least comparable to civilian systems. The strengths of military hospital data systems are an exceptionally high compliance with injury cause coding, the availability of free text, and capture of all population hospital records without regard to work-relatedness. Simple changes in procedures could greatly improve data quality.
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, a novel approach. The program code can be modified by the agent itself using code-morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to small agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level and supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
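The zero-operand instruction format mentioned above, where instructions carry no operands and act only on a stack, is what keeps agent code small. A minimal interpreter sketch with a hypothetical instruction set (not the actual platform ISA):

```python
def run(program, stack=None):
    """Minimal zero-operand stack machine: every instruction operates on
    the stack alone, so the encoded program needs no operand fields."""
    stack = stack or []
    for op in program:
        if isinstance(op, int):      # literal push
            stack.append(op)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown instruction {op!r}")
    return stack

# (3 + 4) squared, expressed with no instruction operands:
print(run([3, 4, "add", "dup", "mul"]))  # -> [49]
```

Because agent state lives in the program plus its stack, a program of this shape can in principle be serialized and migrated between nodes, which is the mobility property the abstract describes.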
Efficient coding and detection of ultra-long IDs for visible light positioning systems.
Zhang, Hualong; Yang, Chuanchuan
2018-05-14
Visible light positioning (VLP) is a promising low-cost, high-accuracy technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS). Its advantages become even more pronounced in indoor environments, where satellite signals are weak or unavailable. Large-scale application of VLP implies a considerable number of light-emitting diode (LED) IDs, which creates a demand for long LED ID detection. In particular, to provision indoor localization globally, a convenient approach is to program a unique ID into each LED during manufacture. This poses a significant challenge for image sensors, such as the CMOS cameras in everyday smartphones, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling-shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We study the performance of the Reed-Solomon (RS) code in this system and design a new coding method that trades off performance against decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Moreover, the proposed coding method achieves performance close to that of the RS code with much lower decoding complexity.
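The multi-frame problem can be made concrete with a toy erasure code: the ID is split into frame-sized chunks plus one XOR parity frame, so a single frame lost to rolling-shutter gaps is recoverable. This is an illustrative stand-in for the paper's RS-based construction, and the 12-bit ID below is hypothetical.

```python
def encode_frames(bits, frame_len):
    """Split a long LED ID into frame-sized chunks and append one XOR
    parity frame, so any single lost frame can be reconstructed."""
    frames = [bits[i:i + frame_len] for i in range(0, len(bits), frame_len)]
    parity = [0] * frame_len
    for f in frames:
        parity = [p ^ b for p, b in zip(parity, f)]
    return frames + [parity]

def recover(frames, frame_len):
    """Rebuild the ID from frames in which at most one entry is None."""
    if None in frames:
        lost = frames.index(None)
        acc = [0] * frame_len
        for i, f in enumerate(frames):
            if i != lost:
                acc = [a ^ b for a, b in zip(acc, f)]
        frames = frames[:lost] + [acc] + frames[lost + 1:]
    return [b for f in frames[:-1] for b in f]   # drop the parity frame

led_id = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # 12-bit ID, 4 bits per frame
frames = encode_frames(led_id, 4)
frames[1] = None                                 # one frame lost in capture
print(recover(frames, 4) == led_id)  # -> True
```

An RS code generalizes this idea, tolerating several lost or corrupted frames at the cost of heavier decoding, which is exactly the trade-off the paper's method tunes.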
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating-point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
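At the heart of any photon MC transport code, CPU or GPU, is sampling exponential free paths from the attenuation coefficient. A CPU-side sketch with a toy slab geometry (illustrative parameters, not the patient/scanner model described):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons, rng):
    """Estimate the fraction of photons that cross a slab without
    interacting, by sampling free path lengths s = -ln(u)/mu."""
    passed = 0
    for _ in range(n_photons):
        s = -math.log(rng.random()) / mu   # exponential free path
        if s > thickness:
            passed += 1
    return passed / n_photons

rng = random.Random(42)
f = transmitted_fraction(mu=1.0, thickness=1.0, n_photons=200_000, rng=rng)
print(f)  # analytically exp(-1) ~ 0.368
```

Each photon history is independent, which is why the workload parallelizes so well across GPU threads and why the reported speedups roughly double when a second GPU is added.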
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator-Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating accelerator-related effects such as changes of beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into dynamics calculations, since the neutron source data derived from the database carry errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
Modification of LAMPF's magnet-mapping code for offsets of center coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.W.; Gomulka, S.; Merrill, F.
1991-01-01
One of the magnet measurements performed at LAMPF is the determination of the cylindrical harmonics of a quadrupole magnet using a rotating coil. The data are analyzed with the code HARMAL to derive the amplitudes of the harmonics. Initially, the origin of the polar coordinate system is the axis of the rotating coil. A new coordinate system is found by a simple translation of the old system such that the dipole moment in the new system is zero. The origin of this translated system is referred to as the magnetic center. Given this translation, the code calculates the coefficients of the cylindrical harmonics in the new system. The code has been modified to use an analytical calculation to determine these new coefficients. The method of calculation is described and some implications of this formulation are presented. 8 refs., 2 figs.
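The recentering step described, choosing the offset that zeroes the dipole term and analytically recomputing the harmonics, can be sketched for a complex field expansion B(z) = Σ c[n] z^n: substituting z = z' + d and expanding with the binomial theorem gives c'[k] = Σ_{n≥k} C(n,k) c[n] d^(n−k). The coefficients below are illustrative, not measured data, and this is a sketch of the idea rather than HARMAL's actual implementation.

```python
import numpy as np
from math import comb

def shift_coeffs(c, d):
    """Coefficients of B(z) = sum c[n] z^n rewritten about z = z' + d:
    c'[k] = sum_{n>=k} C(n, k) * c[n] * d**(n-k)."""
    N = len(c)
    return [sum(comb(n, k) * c[n] * d ** (n - k) for n in range(k, N))
            for k in range(N)]

# Illustrative harmonics about the coil axis: dipole, quadrupole, sextupole.
c = [0.02 + 0.01j, 1.0 + 0.0j, 0.003 - 0.002j]

# The magnetic center is the offset d where the dipole term vanishes,
# i.e. a root of c[0] + c[1]*d + c[2]*d**2; take the root nearest the axis.
d = min(np.roots([c[2], c[1], c[0]]), key=abs)

centered = shift_coeffs(c, d)
print(abs(centered[0]))  # dipole in the translated system: ~0
```

In the translated system `centered[1]` is the quadrupole strength referred to the magnetic center, which is the quantity of interest for alignment.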
Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Austregesilo, H.; Velkov, K.
1997-07-01
The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified, and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system code ATHLET is presented. This interface has meanwhile been used for coupling three different 3D neutronics codes.
NASA Astrophysics Data System (ADS)
Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni
2006-10-01
In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function (PSF) are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.
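The PSF of an annular aperture can be examined numerically through the Fourier relationship between aperture transmission and far-field amplitude. The sketch below uses the Fraunhofer (far-field) approximation and illustrative zone radii; it is a numerical companion to, not a reproduction of, the paper's analytical derivation.

```python
import numpy as np

def annuli_mask(n, radii):
    """Binary transmission on an n x n grid: open (1) where the radius
    falls inside any (r_in, r_out) annulus, opaque (0) elsewhere."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    mask = np.zeros((n, n))
    for r_in, r_out in radii:
        mask[(r >= r_in) & (r < r_out)] = 1.0
    return mask

n = 256
maca = annuli_mask(n, [(0.1, 0.2), (0.35, 0.45), (0.6, 0.7)])  # 3 zones

# Far-field intensity (PSF up to scale): squared magnitude of the 2D FFT.
psf = np.abs(np.fft.fftshift(np.fft.fft2(maca))) ** 2
peak = psf[n // 2, n // 2]   # central maximum = (open area)^2
```

Varying the number and spacing of zones in `radii` and re-examining the secondary maxima and minima of `psf` mirrors the dependence on zone count studied in the paper.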
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received code and the relevant columns of the parity-check matrix can be punctured to reduce the calculation complexity, by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
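The source of the complexity saving can be illustrated on a small binary parity-check matrix: puncturing variable nodes removes columns, and with them Tanner-graph edges, and belief-propagation cost per iteration scales with the edge count. The matrix below is a toy example, not the paper's RS-LDPC construction.

```python
import numpy as np

def puncture(H, punctured_cols):
    """Remove punctured variable nodes (columns) from a parity-check
    matrix; BP cost per iteration ~ number of 1s (Tanner-graph edges)."""
    keep = [j for j in range(H.shape[1]) if j not in set(punctured_cols)]
    return H[:, keep]

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

Hp = puncture(H, [3, 5])
print(int(H.sum()), int(Hp.sum()))  # edges before/after puncturing: 9 7
```

Each removed edge eliminates one check-to-variable and one variable-to-check message per iteration, which is how adapting the matrix during decoding trades redundancy for computation.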
Coding performance of the Probe-Orbiter-Earth communication link
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.
1993-01-01
The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel, and use it to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance of polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Moreover, at a cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
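The watermark idea, using known inserted bits to estimate the channel noise before initializing the decoder, can be sketched for a BPSK/AWGN channel. This is an illustrative simplification (the paper works with PDM-DQPSK over fiber), and all parameters below are hypothetical.

```python
import math
import random

def estimate_sigma(received_pilots, pilot_bits):
    """Known watermark bits fix the transmitted levels (+1/-1), so the
    noise standard deviation follows from residuals around those levels."""
    residuals = [r - (1.0 if b == 0 else -1.0)
                 for r, b in zip(received_pilots, pilot_bits)]
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))

def llr(r, sigma):
    """Channel LLR for BPSK in AWGN, used to initialize BP decoding."""
    return 2.0 * r / sigma ** 2

rng = random.Random(7)
true_sigma = 0.5
pilots = [rng.randrange(2) for _ in range(2000)]          # watermark bits
rx = [(1.0 if b == 0 else -1.0) + rng.gauss(0, true_sigma) for b in pilots]
sigma_hat = estimate_sigma(rx, pilots)
```

A mismatched noise estimate skews every initial LLR, so a better estimate from the watermark bits directly improves BP convergence, which is the mechanism behind the reported PMD and nonlinearity tolerance.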
Comparison of procedure coding systems for level 1 and 2 hospitals in South Africa.
Montewa, Lebogang; Hanmer, Lyn; Reagon, Gavin
2013-01-01
The ability of three procedure coding systems to reflect the procedure concepts extracted from patient records from six hospitals was compared, in order to inform decision making about a procedure coding standard for South Africa. A convenience sample of 126 procedure concepts was extracted from patient records at three level 1 hospitals and three level 2 hospitals. Each procedure concept was coded using ICPC-2, ICD-9-CM, and CCSA-2001. The extent to which each code assigned actually reflected the procedure concept was evaluated (between 'no match' and 'complete match'). For the study sample, CCSA-2001 was found to reflect the procedure concepts most completely, followed by ICD-9-CM and then ICPC-2. In practice, decision making about procedure coding standards would depend on multiple factors in addition to coding accuracy.
A Mechanism to Avoid Collusion Attacks Based on Code Passing in Mobile Agent Systems
NASA Astrophysics Data System (ADS)
Jaimez, Marc; Esparza, Oscar; Muñoz, Jose L.; Alins-Delgado, Juan J.; Mata-Díaz, Jorge
Mobile agents are software entities consisting of code, data, state and itinerary that can migrate autonomously from host to host executing their code. Despite its benefits, security issues strongly restrict the use of code mobility. The protection of mobile agents against the attacks of malicious hosts is considered the most difficult security problem to solve in mobile agent systems. In particular, collusion attacks have been barely studied in the literature. This paper presents a mechanism that avoids collusion attacks based on code passing. Our proposal is based on a Multi-Code agent, which contains a different variant of the code for each host. A Trusted Third Party is responsible for providing the information to extract its own variant to the hosts, and for taking trusted timestamps that will be used to verify time coherence.
Coded spread spectrum digital transmission system design study
NASA Technical Reports Server (NTRS)
Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.
1974-01-01
Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
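A rate-1/3 convolutional encoder of the kind studied can be sketched directly from its shift-register description: each input bit is shifted into a K-bit register and one output bit is produced per generator polynomial. The generators below are illustrative K=7 choices, not necessarily the shuttle-link set analyzed in the report.

```python
def conv_encode(bits, gens, K=7):
    """Rate-1/n convolutional encoder: shift each input bit into a K-bit
    register and emit the parity of (register AND generator) per generator."""
    reg = 0
    out = []
    for b in bits + [0] * (K - 1):          # zero tail flushes the register
        reg = ((reg << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(reg & g).count("1") & 1)
    return out

# Three illustrative constraint-length-7 generators (octal notation).
gens = [0o171, 0o165, 0o133]
coded = conv_encode([1, 0, 1, 1], gens)
print(len(coded))  # 3 output bits per step, including 6 tail steps -> 30
```

The factor-of-3 bandwidth expansion and the K−1 tail bits visible here are exactly the code overhead against which the report weighs tracking and acquisition performance.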