Swab culture monitoring of automated endoscope reprocessors after high-level disinfection
Lu, Lung-Sheng; Wu, Keng-Liang; Chiu, Yi-Chun; Lin, Ming-Tzung; Hu, Tsung-Hui; Chiu, King-Wah
2012-01-01
AIM: To conduct a bacterial culture study for monitoring decontamination of automated endoscope reprocessors (AERs) after high-level disinfection (HLD). METHODS: From February 2006 to January 2011, the authors conducted randomized consecutive sampling each month for 7 AERs. The authors collected a total of 420 swab cultures, including 300 cultures from 5 gastroscope AERs and 120 cultures from 2 colonoscope AERs. Swab cultures were obtained from the residual water in the AERs after a full reprocessing cycle. Samples were cultured to test for aerobic bacteria, anaerobic bacteria, and Mycobacterium tuberculosis. RESULTS: The positive culture rate was 2.0% (6/300) for gastroscope AERs and 0.8% (1/120) for colonoscope AERs. All the positive cultures, including 6 from gastroscope and 1 from colonoscope AERs, showed monofloral colonization. Of the positive gastroscope AER samples, 50% (3/6) were colonized by aerobic bacteria and 50% (3/6) by fungi. CONCLUSION: A full reprocessing cycle of an AER with HLD is adequate for disinfection of the machine. Swab culture is a useful method for monitoring AER decontamination after each reprocessing cycle. Fungal contamination of AERs after reprocessing should also be kept in mind. PMID:22529696
AER synthetic generation in hardware for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Linares-Barranco, Alejandro; Linares-Barranco, Bernabe; Jimenez-Moreno, Gabriel; Civit-Balcells, Anton
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate 'events' according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. This paper addresses the problem of converting, in a computer, a conventional frame-based video stream into the spike-event-based representation AER. Several software methods for synthetic generation of AER for bio-inspired systems have been proposed. This paper presents a hardware implementation of one such method, based on Linear-Feedback-Shift-Register (LFSR) pseudo-random number generation. The sequence of events generated by this hardware, which follows a Poisson distribution like that of a biological neuron, has been reconstructed using two AER integrator cells. The reconstruction error for a set of images that produce different event traffic loads on the AER bus is used as the evaluation criterion. A VHDL description of the method, including the Xilinx PCI Core, has been implemented and tested using a general-purpose PCI-AER board. This PCI-AER board was developed by the authors and uses a Spartan II 200 FPGA. The system for AER synthetic generation is capable of transforming frames of 64x64 pixels, received through a standard computer PCI bus, at a rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second.
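The LFSR scheme described above is easy to model in software. The sketch below (Python) is a minimal illustration under stated assumptions: the tap positions, the 8-bit comparison, and the raster scan order are generic choices, not the paper's exact hardware design. Each scan draws one pseudo-random number per pixel and emits an address event whenever the draw falls below the pixel intensity, so brighter pixels fire proportionally more often and repeated scans approximate Poisson spike statistics.

    def lfsr16(state):
        # One step of a 16-bit Fibonacci LFSR (taps 16, 14, 13, 11; maximal length).
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        return (state >> 1) | (bit << 15)

    def synthetic_aer(frame, state=0xACE1):
        # Scan a 2-D frame of 8-bit intensities; emit an (x, y) address event
        # whenever 8 pseudo-random bits fall below the pixel value.
        events = []
        for y, row in enumerate(frame):
            for x, intensity in enumerate(row):
                state = lfsr16(state)
                if (state & 0xFF) < intensity:
                    events.append((x, y))
        return events, state

    # Repeated scans of a constant frame yield Poisson-like per-pixel event counts.
    frame = [[0, 64], [128, 255]]
    events, state = synthetic_aer(frame)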
Address-event-based platform for bioinspired spiking systems
NASA Astrophysics Data System (ADS)
Jiménez-Fernández, A.; Luján, C. D.; Linares-Barranco, A.; Gómez-Rodríguez, F.; Rivas, M.; Jiménez, G.; Civit, A.
2007-05-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, relying on a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of interconnecting several AER-based chips to compose a powerful processing system. The problem was discussed at the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA, and serial links, to make the system faster and stand-alone (independent of a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the Address-Event-based network communication and, at the same time, for mapping and transforming the address space of the traffic to implement pre-processing. An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA so that the platform can run event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around 66 Mega events per second (Mevps).
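The mapping stage described above can be sketched as a simple lookup that rewrites (and optionally fans out) event addresses before forwarding them. The Python model below is illustrative only, with hypothetical table contents; it says nothing about the FPGA's actual table format.

    # Hypothetical mapping table: input AER address -> list of output addresses.
    mapping_table = {
        0x0010: [0x1010, 0x2010],  # one input event fans out to two target chips
        0x0011: [0x1011],          # simple one-to-one remapping
    }

    def map_events(in_events):
        # Transform a stream of input AER addresses into output addresses;
        # addresses absent from the table are dropped.
        out = []
        for addr in in_events:
            out.extend(mapping_table.get(addr, []))
        return out

    print(map_events([0x0010, 0x0011, 0x0FFF]))  # -> [4112, 8208, 4113] (decimal)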
Marcus, Robin L; Smith, Sheldon; Morrell, Glen; Addison, Odessa; Dibble, Leland E; Wahoff-Stice, Donna; LaStayo, Paul C
2008-01-01
Background and Purpose: The purpose of this study was to compare the outcomes between a diabetes exercise training program using combined aerobic and high-force eccentric resistance exercise and a program of aerobic exercise only. Subjects and Methods: Fifteen participants with type 2 diabetes mellitus (T2DM) participated in a 16-week supervised exercise training program: 7 (mean age=50.7 years, SD=6.9) in a combined aerobic and eccentric resistance exercise program (AE/RE group) and 8 (mean age=58.5 years, SD=6.2) in a program of aerobic exercise only (AE group). Outcome measures included thigh lean tissue and intramuscular fat (IMF), glycosylated hemoglobin, body mass index (BMI), and 6-minute walk distance. Results: Both groups experienced decreases in mean glycosylated hemoglobin after training (AE/RE group: −0.59% [95% confidence interval (CI)=−1.5 to 0.28]; AE group: −0.31% [95% CI=−0.60 to −0.03]), with no significant between-group differences. There was an interaction between group and time with respect to change in thigh lean tissue cross-sectional area, with the AE/RE group gaining more lean tissue (AE/RE group: 15.1 cm2 [95% CI=7.6 to 22.5]; AE group: −5.6 cm2 [95% CI=−10.4 to 0.76]). Both groups experienced decreases in mean thigh IMF cross-sectional area (AE/RE group: −1.2 cm2 [95% CI=−2.6 to 0.26]; AE group: −2.2 cm2 [95% CI=−3.5 to −0.84]) and increases in 6-minute walk distance (AE/RE group: 45.5 m [95% CI=7.5 to 83.6]; AE group: 29.9 m [95% CI=−7.7 to 67.5]) after training, with no between-group differences. There was an interaction between group and time with respect to change in BMI, with the AE/RE group experiencing a greater decrease in BMI. Discussion and Conclusion: Significant improvements in long-term glycemic control, thigh composition, and physical performance were demonstrated in both groups after participating in a 16-week exercise program. Subjects in the AE/RE group demonstrated additional improvements in thigh lean tissue and BMI. Improvements in thigh lean tissue may be important in this population as a means to increase resting metabolic rate, protein reserve, exercise tolerance, and functional mobility. PMID:18801851
Chiu, King-Wah; Tsai, Ming-Chao; Wu, Keng-Liang; Chiu, Yi-Chun; Lin, Ming-Tzung; Hu, Tsung-Hui
2012-09-03
The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and the internal surfaces of AERs. We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse samples and 420 swab samples were collected from the BCs and the internal surfaces of the AERs, respectively. Of the 420 rinse samples collected from the BCs of the GI endoscopes, 300 were obtained from the BCs of gastroscopes and 120 from the BCs of colonoscopes. Samples were collected by flushing the BCs with sterile distilled water and by swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. The number of culture-positive samples obtained from BCs (13.6%, 57/420) was significantly higher than that obtained from AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from the AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Culturing rinse samples obtained from BCs provides a better indication of the effectiveness of the decontamination of GI endoscopes after HLD than culturing swab samples obtained from the inner surfaces of AERs, as the swab samples only indicate whether the AERs are free from microbial contamination.
Burrows, Jill E.; Cravotta, Charles A.; Peters, Stephen C.
2017-01-01
Net-alkaline, anoxic coal-mine drainage containing ∼20 mg/L FeII and ∼0.05 mg/L Al and Zn was subjected to parallel batch experiments: control, aeration (Aer 1, 12.6 mL/s; Aer 2, 16.8 mL/s; Aer 3, 25.0 mL/s), and hydrogen peroxide (H2O2), to test the hypothesis that aeration increases pH, FeII oxidation, hydrous FeIII oxide (HFO) formation, and trace-metal removal through adsorption and coprecipitation with HFO. During 5.5-h field experiments, pH increased from 6.4 to 6.7, 7.1, 7.6, and 8.1 for the control, Aer 1, Aer 2, and Aer 3, respectively, but decreased to 6.3 for the H2O2 treatment. Aeration accelerated removal of dissolved CO2, Fe, Al, and Zn. In Aer 3, dissolved Al was completely removed within 1 h, but increased to ∼20% of the initial concentration after 2.5 h when pH exceeded 7.5. H2O2 promoted rapid removal of all dissolved Fe and Al, and 13% of dissolved Zn. Kinetic modeling with PHREEQC simulated effects of aeration on pH, CO2, Fe, Zn, and Al. Aeration enhanced Zn adsorption by increasing pH and HFO formation while decreasing aqueous CO2 available to form ZnCO3(0) and Zn(CO3)2(2−) at high pH. Al concentrations were inconsistent with solubility control by Al minerals or Al-containing HFO, but could be simulated by adsorption on HFO at pH < 7.5 and desorption at higher pH where Al(OH)4− was predominant. Thus, aeration or chemical oxidation with pH adjustment to ∼7.5 could be effective for treating high Fe and moderate Zn concentrations, whereas chemical oxidation without pH adjustment may be effective for treating high Fe and moderate Al concentrations.
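For intuition about why raising pH accelerates Fe removal, the sketch below applies the classical homogeneous Fe(II) oxidation rate law of Singer and Stumm, -d[Fe(II)]/dt = k[Fe(II)]·pO2·[OH-]^2. The rate constant and conditions are textbook-style assumptions, not the paper's calibrated PHREEQC model, which includes additional reactions.

    import math

    def fe2_half_life_hours(pH, pO2=0.21, k=8.0e13):
        # k in 1/(min atm M^2); pseudo-first-order in Fe(II) at fixed pH and pO2.
        OH = 1.0e-14 / 10.0 ** (-pH)          # [OH-] in mol/L from Kw
        k1 = k * pO2 * OH ** 2                # pseudo-first-order rate, 1/min
        return math.log(2.0) / k1 / 60.0      # half-life in hours

    for pH in (6.4, 7.1, 7.6, 8.1):           # pH values reached in the experiments
        print(f"pH {pH}: Fe(II) half-life ~ {fe2_half_life_hours(pH):.2e} h")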
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2016-12-01
The thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT) observes CH4 profiles in the wavenumber range from 1210 cm-1 to 1360 cm-1, which includes the CH4 ν4 band. The current retrieval algorithm (V1.0) uses LBLRTM V12.1 with the AER V3.1 line database to calculate optical depth. LBLRTM V12.1 includes the MT_CKD 2.5.2 model to calculate continuum absorption. The continuum absorption has large uncertainty, especially in its temperature-dependent coefficient, between the BPS and MT_CKD models in the wavenumber region of 1210-1250 cm-1 (Paynter and Ramaswamy, 2014). The purpose of this study is to assess the impact on CH4 retrieval of the choice of line parameter database and of the uncertainty of continuum absorption. We used the AER V1.0, HITRAN2004, HITRAN2008, AER V3.2, and HITRAN2012 databases (Rothman et al. 2005, 2009, and 2013; Clough et al., 2005). The AER V1.0 database is based on HITRAN2000. The CH4 line parameters of the AER V3.1 and V3.2 databases are derived from HITRAN2008, including updates until May 2009, with line mixing parameters. We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference was smallest for AER V3.2, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN2004, HITRAN2008, and HITRAN2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. Comparing the AER V3.2 case to the HITRAN2008 case, the line coupling effect reduced the difference by 8.0 ppbv. Median values of the residual difference from HITRAN2008 to AER V1.0, HITRAN2004, AER V3.2, and HITRAN2012 were 0.6 K, 0.1 K, -0.08 K, and 0.08 K, respectively, while median values of the transmittance difference were less than 0.0003, with only a small wavenumber dependence. We also discuss the retrieval error arising from the uncertainty of the continuum absorption, a test of a full-grid configuration for retrieval, and retrieval results using GOSAT TIR L1B V203203, a sample product for evaluating the next level 1B algorithm.
Development of the Test Of Astronomy STandards (TOAST) Assessment Instrument
NASA Astrophysics Data System (ADS)
Slater, Timothy F.; Slater, S. J.
2008-05-01
Considerable effort in the astronomy education research (AER) community over the past several years has focused on developing assessment tools in the form of multiple-choice conceptual diagnostics and content knowledge surveys. This has been critically important in advancing the AER discipline so that researchers could establish the initial knowledge state of students as well as attempt to measure some of the impacts of innovative instructional interventions. Unfortunately, few of the existing instruments were constructed upon a solid list of clearly articulated and widely agreed upon learning objectives. This was not done in oversight, but rather as a result of the relative youth of AER as a discipline. Now that several important science education reform documents exist and are generally accepted by the AER community, we are in a position to develop, validate, and disseminate a new assessment instrument which is tightly aligned to the consensus learning goals stated by the American Astronomical Society Chair's Conference on ASTRO 101, the American Association for the Advancement of Science's Project 2061 Benchmarks, and the National Research Council's National Science Education Standards. In response, researchers from the Cognition in Astronomy, Physics and Earth sciences Research (CAPER) Team at the University of Wyoming's Science & Math Teaching Center (UWYO SMTC) have designed a criterion-referenced assessment tool, called the Test Of Astronomy STandards (TOAST). Through iterative development, this instrument has achieved a high degree of reliability and validity for instructors and researchers needing information on students' initial knowledge state at the beginning of a course, and it can be used, in aggregate, to help measure the impact of course-length instructional strategies for courses with learning goals tightly aligned to the consensus goals of our community.
Long-term risks of subsequent primary neoplasms among survivors of childhood cancer.
Reulen, Raoul C; Frobisher, Clare; Winter, David L; Kelly, Julie; Lancashire, Emma R; Stiller, Charles A; Pritchard-Jones, Kathryn; Jenkinson, Helen C; Hawkins, Michael M
2011-06-08
Survivors of childhood cancer are at excess risk of developing subsequent primary neoplasms, but the long-term risks are uncertain. To investigate long-term risks of subsequent primary neoplasms in survivors of childhood cancer, to identify the types that contribute most to long-term excess risk, and to identify subgroups of survivors at substantially increased risk of particular subsequent primary neoplasms that may require specific interventions. British Childhood Cancer Survivor Study--a population-based cohort of 17,981 5-year survivors of childhood cancer diagnosed with cancer at younger than 15 years between 1940 and 1991 in Great Britain, followed up through December 2006. Standardized incidence ratios (SIRs), absolute excess risks (AERs), and cumulative incidence of subsequent primary neoplasms. After a median follow-up time of 24.3 years (mean = 25.6 years), 1354 subsequent primary neoplasms were ascertained, the most frequently observed being central nervous system (n = 344), nonmelanoma skin cancer (n = 278), digestive (n = 105), genitourinary (n = 100), breast (n = 97), and bone (n = 94). The overall incidence was almost 4 times that expected (SIR, 3.9; 95% confidence interval [CI], 3.6-4.2; AER, 16.8 per 10,000 person-years). The AER at older than 40 years was highest for digestive and genitourinary subsequent primary neoplasms (AER, 5.9 [95% CI, 2.5-9.3]; and AER, 6.0 [95% CI, 2.3-9.6] per 10,000 person-years, respectively); 36% of the total AER was attributable to these 2 subsequent primary neoplasm sites. The cumulative incidence of colorectal cancer for survivors treated with direct abdominopelvic irradiation was 1.4% (95% CI, 0.7%-2.6%) by age 50 years, comparable with the 1.2% risk in individuals with at least 2 first-degree relatives affected by colorectal cancer. Among a cohort of British childhood cancer survivors, the greatest excess risk associated with subsequent primary neoplasms at older than 40 years was for digestive and genitourinary neoplasms.
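For readers unfamiliar with the two summary measures used above: the standardized incidence ratio is the ratio of observed to expected cases, and the absolute excess risk is the observed-minus-expected count per 10,000 person-years. The sketch below back-calculates the implied expected count and person-year total from the reported SIR and AER, purely as an illustration of the formulas.

    def sir(observed, expected):
        # Standardized incidence ratio.
        return observed / expected

    def aer_per_10000(observed, expected, person_years):
        # Absolute excess risk per 10,000 person-years.
        return (observed - expected) / person_years * 1e4

    observed = 1354                      # subsequent primary neoplasms ascertained
    expected = observed / 3.9            # ~347, implied by the reported SIR of 3.9
    person_years = (observed - expected) / 16.8 * 1e4  # ~6.0e5, implied by AER 16.8
    print(round(sir(observed, expected), 1))                          # -> 3.9
    print(round(aer_per_10000(observed, expected, person_years), 1))  # -> 16.8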
Conte, Daniele; Garaffo, Giulia; Lo Iacono, Nadia; Mantero, Stefano; Piccolo, Stefano; Cordenonsi, Michelangelo; Perez-Morga, David; Orecchia, Valeria; Poli, Valeria; Merlo, Giorgio R.
2016-01-01
The congenital malformation split hand/foot (SHFM) is characterized by missing central fingers and dysmorphology or fusion of the remaining ones. Type-1 SHFM is linked to deletions/rearrangements of the DLX5–DLX6 locus and point mutations in the DLX5 gene. The ectrodactyly phenotype is reproduced in mice by the double knockout (DKO) of Dlx5 and Dlx6. During limb development, the apical ectodermal ridge (AER) is a key signaling center responsible for early proximal–distal growth and patterning. In Dlx5;6 DKO hindlimbs, the central wedge of the AER loses its multilayered organization and shows down-regulation of FGF8 and Dlx2. In search of the mechanism, we examined non-canonical Wnt signaling, considering that Dwnt-5 is a target of distalless in Drosophila and that the knockout of Wnt5, Ryk, Ror2 and Vangl2 in the mouse causes severe limb malformations. We found that in Dlx5;6 DKO limbs, the AER expresses lower levels of Wnt5a, shows scattered β-catenin responsive cells and altered basolateral and planar cell polarity (PCP). The addition of Wnt5a to cultured embryonic limbs restored the expression of AER markers and its stratification. Conversely, inhibition of the PCP molecule c-jun N-terminal kinase caused a loss of AER marker expression. In vitro, the addition of Wnt5a to mixed primary cultures of embryonic ectoderm and mesenchyme was able to confer re-polarization. We conclude that the Dlx-related ectrodactyly defect is associated with the loss of basoapical and planar cell polarity, due to reduced Wnt5a expression, and that restoration of the Wnt5a level is sufficient to partially revert AER misorganization and dysmorphology. PMID:26685160
Penno, G; Chaturvedi, N; Talmud, P J; Cotroneo, P; Manto, A; Nannipieri, M; Luong, L A; Fuller, J H
1998-09-01
We examined whether the ACE gene insertion/deletion (I/D) polymorphism modulates renal disease progression in IDDM and how ACE inhibitors influence this relationship. The EURODIAB Controlled Trial of Lisinopril in IDDM is a multicenter randomized placebo-controlled trial in 530 nonhypertensive, mainly normoalbuminuric IDDM patients aged 20-59 years. Albumin excretion rate (AER) was measured every 6 months for 2 years. Genotype distribution was 15% II, 58% ID, and 27% DD. Between genotypes, there were no differences in baseline characteristics or in changes in blood pressure and glycemic control throughout the trial. There was a significant interaction between the II and DD genotype groups and treatment on change in AER (P = 0.05). Patients with the II genotype showed the fastest rate of AER progression on placebo but had an enhanced response to lisinopril. AER at 2 years (adjusted for baseline AER) was 51.3% lower on lisinopril than placebo in the II genotype patients (95% CI, 15.7 to 71.8; P = 0.01), 14.8% in the ID group (-7.8 to 32.7; P = 0.2), and 7.7% in the DD group (-36.6 to 37.6; P = 0.7). Absolute differences in AER between placebo and lisinopril at 2 years were 8.1, 1.7, and 0.8 microg/min in the II, ID, and DD groups, respectively. The significant beneficial effect of lisinopril on AER in the II group persisted when adjusted for center, blood pressure, and glycemic control, and also for diastolic blood pressure at 1 month into the study. Progression from normoalbuminuria to microalbuminuria (lisinopril versus placebo) was 0.27 (0.03-2.26; P = 0.2) in the II group, and 1.30 (0.33-5.17; P = 0.7) in the DD group (P = 0.6 for interaction). Knowledge of ACE genotype may be of value in determining the likely impact of ACE inhibitor treatment.
Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B
2013-02-01
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10^3 neurons and almost 32 million synapses.
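One plausible way to realize the destination-label routing the paper analyzes is XY dimension-order routing, sketched below in Python. The port-selection policy here is a generic assumption for illustration, not necessarily the exact policy implemented on the FPGAs.

    def route(router_xy, event):
        # Decide the output port for an address event carrying a destination
        # module label (dest_x, dest_y): route east/west first, then north/south.
        x, y = router_xy
        dest_x, dest_y, address = event
        if dest_x != x:
            return "E" if dest_x > x else "W"
        if dest_y != y:
            return "N" if dest_y > y else "S"
        return "LOCAL"  # the event has reached its destination module

    print(route((1, 1), (3, 0, 0x2A)))  # -> 'E'
    print(route((1, 1), (1, 1, 0x2A)))  # -> 'LOCAL'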
Aerobic and Strength Training in Concomitant Metabolic Syndrome and Type 2 Diabetes
Earnest, Conrad P.; Johannsen, Neil M.; Swift, Damon L.; Gillison, Fiona B.; Mikus, Catherine R.; Lucia, Alejandro; Kramer, Kimberly; Lavie, Carl J.; Church, Timothy S.
2014-01-01
Purpose: Concomitant type 2 diabetes (T2D) and metabolic syndrome exacerbates mortality risk; yet, few studies have examined the effect of combined aerobic (AER) and resistance (RES) training (AER+RES) for individuals with T2D and metabolic syndrome. Methods: We examined AER, RES, and AER+RES training (9 months) commensurate with physical activity guidelines in individuals with T2D (N=262, 63% female, 44% black). Primary outcomes were the change in, and prevalence of, metabolic syndrome score at follow-up (mean, 95% CI). Secondary outcomes included maximal cardiorespiratory fitness (VO2peak and estimated METs from time-to-exhaustion [TTE]) and exercise efficiency, calculated as the slope of the line between ventilatory threshold, respiratory compensation, and maximal fitness. General linear models and bootstrapped Spearman correlations were used to examine changes in metabolic syndrome associated with the primary and secondary training outcome variables. Results: We observed a significant decrease in metabolic syndrome scores (P-for-trend, 0.003) for AER (−0.59, 95% CI, −1.00, −0.21) and AER+RES (−0.79, 95% CI, −1.40, −0.35), both being significant (P < 0.02) vs. Control (0.26, 95% CI, −0.58, 0.40) and RES (−0.13, 95% CI, −1.00, 0.24). This led to a reduction in metabolic syndrome prevalence for the AER (56% vs. 43%) and AER+RES (55% vs. 46%) groups between baseline and follow-up. The observed decrease in metabolic syndrome was mediated by significant improvements in exercise efficiency for the AER and AER+RES training groups (P<0.05), which were more strongly related to TTE (25–30%; r= −0.38; 95% CI: −0.55, −0.19) than to VO2peak (5–6%; r= −0.24; 95% CI: −0.45, −0.01). Conclusion: AER and AER+RES training significantly improve metabolic syndrome scores and prevalence in patients with T2D. These improvements appear to be associated with improved exercise efficiency and are more strongly related to improved TTE than to VO2peak. PMID:24389523
Benchmark problems and solutions
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1995-01-01
The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.
Impact of line parameter database and continuum absorption on GOSAT TIR methane retrieval
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2017-12-01
The current methane retrieval algorithm (V1) for the thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT), covering the wavenumber range from 1210 cm-1 to 1360 cm-1 including the CH4 ν4 band, uses LBLRTM V12.1 with the AER V3.1 line database and the MT_CKD 2.5.2 continuum absorption model to calculate optical depth. Since line parameter databases have been updated and the continuum absorption may have large uncertainty, the purpose of this study is to assess the impact on CH4 retrieval of the choice of line parameter database and of the uncertainty of continuum absorption. We retrieved CH4 profiles with the line parameter database replaced by AER V1.0, HITRAN 2004, HITRAN 2008, AER V3.2, or HITRAN 2012 (Rothman et al. 2005, 2009, and 2013; Clough et al., 2005). In addition, we assumed 10% larger continuum absorption coefficients and a 50% larger temperature-dependent coefficient of continuum absorption, based on the report by Paynter and Ramaswamy (2014). We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference from the HIPPO observations was smallest for AER V3.2, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN 2004, HITRAN 2008, and HITRAN 2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. The maximum CH4 retrieval difference was -0.4 ppbv at the 314 hPa layer when we used 10% larger absorption coefficients for the H2O foreign continuum. Comparing the AER V3.2 case to the HITRAN 2008 case, the line coupling effect reduced the difference by 8.0 ppbv. Line coupling effects are therefore important for GOSAT TIR CH4 retrieval, while effects from the uncertainty of continuum absorption were negligibly small.
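The comparison statistic quoted above (e.g., 24.1 ± 45.9 ppbv) is simply the mean and standard deviation of the retrieved-minus-reference differences; a minimal sketch follows, with synthetic placeholder values rather than actual retrieval or HIPPO data.

    import statistics

    def bias_stats(retrieved_ppbv, reference_ppbv):
        # Mean +/- standard deviation of paired differences, in ppbv.
        diffs = [r - h for r, h in zip(retrieved_ppbv, reference_ppbv)]
        return statistics.mean(diffs), statistics.stdev(diffs)

    mean, sd = bias_stats([1830.0, 1795.0, 1860.0], [1800.0, 1770.0, 1820.0])
    print(f"{mean:.1f} +/- {sd:.1f} ppbv")  # -> 31.7 +/- 7.6 ppbv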
Ellis, Shmuel; Ganzach, Yoav; Castle, Evan; Sekely, Gal
2010-01-01
In the current study, we compared the effect of personal and filmed after-event reviews (AERs) on performance, and the role that self-efficacy plays in moderating and mediating the effects of these 2 types of AER on performance. The setting was one in which 49 men and 63 women participated twice in a simulated business decision-making task. In between, participants received a personal AER, watched a filmed AER, or had a break. We found that individuals who participated in an AER, whether personal or filmed, improved their performance significantly more than those who did not participate in a review. Furthermore, there was no significant difference in performance improvement between the personal and the filmed AER, which suggests that the 2 are quite similar in their effect. We also found that the differences in performance improvement between the personal AER group and the control group were somewhat greater than those found in the filmed AER group. Self-efficacy mediated the effect of AER on performance improvement in both types of AER. In addition, the effect of AER on performance improvement was moderated by initial self-efficacy in the personal but not in the filmed AER: The personal AER was more effective, the higher the initial self-efficacy. Copyright 2009 APA, all rights reserved.
A deep azygoesophageal recess may increase the risk of secondary spontaneous pneumothorax.
Takahashi, Tsuyoshi; Kawashima, Mitsuaki; Kuwano, Hideki; Nagayama, Kazuhiro; Nitadori, Jyunichi; Anraku, Masaki; Sato, Masaaki; Murakawa, Tomohiro; Nakajima, Jun
2017-09-01
The azygoesophageal recess (AER) is known as a possible cause of bulla formation in patients with spontaneous pneumothorax. However, there has been little focus on the depth of the AER. We evaluated the relationship between the depth of the AER and pneumothorax development. We conducted a retrospective study of 80 spontaneous pneumothorax patients who underwent surgery at our institution. We evaluated the depth of the AER on preoperative computed tomography scans. Ruptured bullae at the AER were found in 12 patients (52.2%) with secondary spontaneous pneumothorax (SSP) and 8 patients (14.0%) with primary spontaneous pneumothorax (PSP) (p < 0.001). In patients with ruptured bullae at the AER, 10 SSP patients (83.3%) had a deep AER while only 2 PSP patients (25%) had a deep AER (p = 0.015). A deep AER was more frequently associated with SSP than with PSP. A deep AER may contribute to bulla formation and rupture in SSP patients.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
Differences in Antipsychotic-Related Adverse Events in Adult, Pediatric, and Geriatric Populations.
Sagreiya, Hersh; Chen, Yi-Ren; Kumarasamy, Narmadan A; Ponnusamy, Karthik; Chen, Doris; Das, Amar K
2017-02-26
In recent years, antipsychotic medications have increasingly been used in pediatric and geriatric populations, despite the fact that many of these drugs were approved based on clinical trials in adult patients only. Preliminary studies have shown that the "off-label" use of these drugs in pediatric and geriatric populations may result in adverse events not found in adults. In this study, we utilized the large-scale U.S. Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) database to look at differences in adverse events from antipsychotics among adult, pediatric, and geriatric populations. We performed a systematic analysis of the FDA AERS database using MySQL by standardizing the database using structured terminologies and ontologies. We compared adverse event profiles of atypical versus typical antipsychotic medications among adult (18-65), pediatric (age < 18), and geriatric (> 65) populations. We found statistically significant differences between the number of adverse events in the pediatric versus adult populations with aripiprazole, clozapine, fluphenazine, haloperidol, olanzapine, quetiapine, risperidone, and thiothixene, and between the geriatric versus adult populations with aripiprazole, chlorpromazine, clozapine, fluphenazine, haloperidol, paliperidone, promazine, risperidone, thiothixene, and ziprasidone (p < 0.05, with adjustment for multiple comparisons). Furthermore, the particular types of adverse events reported also varied significantly between each population for aripiprazole, clozapine, haloperidol, olanzapine, quetiapine, risperidone, and ziprasidone (Chi-square, p < 10^-6). Diabetes was the most commonly reported side effect in the adult population, compared to behavioral problems in the pediatric population and neurologic symptoms in the geriatric population. We also found discrepancies between the frequencies of reports in AERS and in the literature. Our analysis of the FDA AERS database shows that there are significant differences in both the numbers and types of adverse events among these age groups and between atypical and typical antipsychotics. It is important for clinicians to be mindful of these differences when prescribing antipsychotics, especially when prescribing medications off-label.
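A test of the kind reported above can be reproduced with a chi-square test of independence on an adverse-event contingency table; the sketch below uses SciPy and fabricated placeholder counts, not values from the AERS database.

    from scipy.stats import chi2_contingency

    # Hypothetical report counts by event type for two age groups.
    #        diabetes  behavioral  neurologic
    table = [
        [520, 90, 210],   # adult reports (placeholder values)
        [40, 180, 60],    # pediatric reports (placeholder values)
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")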
Structure of CARB-4 and AER-1 Carbenicillin-Hydrolyzing β-Lactamases
Sanschagrin, François; Bejaoui, Noureddine; Levesque, Roger C.
1998-01-01
We determined the nucleotide sequences of blaCARB-4 encoding CARB-4 and deduced a polypeptide of 288 amino acids. The gene was characterized as a variant of group 2c carbenicillin-hydrolyzing β-lactamases such as PSE-4, PSE-1, and CARB-3. The level of DNA homology between the bla genes for these β-lactamases varied from 98.7 to 99.9%, while that between these genes and blaCARB-4 encoding CARB-4 was 86.3%. The blaCARB-4 gene was acquired from some other source because it has a G+C content of 39.1%, compared to a G+C content of 67% for typical Pseudomonas aeruginosa genes. DNA sequencing revealed that blaAER-1 shared 60.8% DNA identity with blaPSE-3 encoding PSE-3. The deduced AER-1 β-lactamase peptide was compared to class A, B, C, and D enzymes and had 57.6% identity with PSE-3, including an STHK tetrad at the active site. For CARB-4 and AER-1, conserved canonical amino acid boxes typical of class A β-lactamases were identified in a multiple alignment. Analysis of the DNA sequences flanking blaCARB-4 and blaAER-1 confirmed the importance of gene cassettes acquired via integrons in bla gene distribution. PMID:9687391
Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J
2015-12-01
Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER(1z)). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER(2z)). In total, 287 daily pairs of AER(2z) and AER(1z) estimates were made from 35 homes across three cities. In 87% of the cases, AER(2z) was higher than AER(1z). Overall, the AER(1z) estimates underestimated AER(2z) by approximately 16% (IQR: 5-32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an under prediction of the AER. The results of this study suggest that the long-standing assumption that a home represents a single well-mixed air zone may result in a substantial negative bias in air exchange estimates. Indoor air quality professionals should take this finding into consideration when developing study designs or making decisions related to the recommendation and installation of residential ventilation systems. © 2014 Her Majesty the Queen in Right of Canada. Indoor Air published by John Wiley & Sons Ltd Reproduced with the permission of the Minister of Health Canada.
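A small worked example of the reported single-zone bias: if the true two-zone estimate is AER2z and the main-floor single-zone estimate is AER1z, the relative underestimate is (AER2z - AER1z) / AER2z. The values below are illustrative, chosen only to reproduce the ~16% figure.

    def relative_underestimate(aer_2zone, aer_1zone):
        # Fractional amount by which the single-zone estimate falls short.
        return (aer_2zone - aer_1zone) / aer_2zone

    print(f"{relative_underestimate(0.50, 0.42):.0%}")  # -> 16%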
Samanta, Dipanjan; Widom, Joanne; Borbat, Peter P; Freed, Jack H; Crane, Brian R
2016-12-09
Flagellated bacteria modulate their swimming behavior in response to environmental cues through the CheA/CheY signaling pathway. In addition to responding to external chemicals, bacteria also monitor internal conditions that reflect the availability of oxygen, light, and reducing equivalents, in a process termed "energy taxis." In Escherichia coli, the transmembrane receptor Aer is the primary energy sensor for motility. Genetic and physiological data suggest that Aer monitors the electron transport chain through the redox state of its FAD cofactor. However, direct biochemical data correlating FAD redox chemistry with CheA kinase activity have been lacking. Here, we test this hypothesis via functional reconstitution of Aer into nanodiscs. As purified, Aer contains fully oxidized FAD, which can be chemically reduced to the anionic semiquinone (ASQ). Oxidized Aer activates CheA, whereas ASQ Aer reversibly inhibits CheA. Under these conditions, Aer cannot be further reduced to the hydroquinone, in contrast to the proposed Aer signaling model. Pulse ESR spectroscopy of the ASQ corroborates a potential mechanism for signaling in that the resulting distance between the two flavin-binding PAS (Per-Arnt-Sim) domains implies that they tightly sandwich the signal-transducing HAMP domain in the kinase-off state. Aer appears to follow oligomerization patterns observed for related chemoreceptors, as higher loading of Aer dimers into nanodiscs increases kinase activity. These results provide a new methodological platform to study Aer function along with new mechanistic details into its signal transduction process. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Roux, Perrine; Rojas Castro, Daniela; Ndiaye, Khadim; Debrus, Marie; Protopopescu, Camélia; Le Gall, Jean-Marie; Haas, Aurélie; Mora, Marion; Spire, Bruno; Suzan-Monti, Marie; Carrieri, Patrizia
2016-01-01
Aims: The community-based AERLI intervention provided training and education to people who inject drugs (PWID) about HIV and HCV transmission risk reduction, with a focus on drug injecting practices, other injection-related complications, and access to HIV and HCV testing and care. We hypothesized that in such a population, where HCV prevalence is very high and where few know their HCV serostatus, AERLI would lead to increased HCV testing. Methods: The national multisite intervention study ANRS-AERLI consisted of assessing the impact of an injection-centered face-to-face educational session offered in volunteer harm reduction (HR) centers ("with intervention") compared with standard HR centers ("without intervention"). The study included 271 PWID interviewed on three occasions: enrolment, 6 and 12 months. Participants in the intervention group received at least one face-to-face educational session during the first 6 months. Measurements: The primary outcome of this analysis was reporting having been tested for HCV during the previous 6 months. Statistical analyses used a two-step Heckman approach to account for bias arising from the non-randomized clustering design. This approach identified factors associated with HCV testing during the previous 6 months. Findings: Of the 271 participants, 127 and 144 were enrolled in the control and intervention groups, respectively. Of the latter, 113 received at least one educational session. For the present analysis, we selected 114 and 88 participants eligible for HCV testing in the control and intervention groups, respectively. In the intervention group, 44% of participants reported having been tested for HCV during the previous 6 months at enrolment and 85% at 6 or 12 months. In the control group, these percentages were 51% at enrolment and 78% at 12 months. Multivariable analyses showed that participants who received at least one educational session during follow-up were more likely to report HCV testing than those who did not receive the intervention (4.13, 95% CI: 1.03 to 16.60). Conclusion: The educational intervention AERLI had already been shown to be effective in reducing at-risk HCV practices and associated cutaneous complications, and it also appears to have a positive impact on increasing HCV testing among PWID. PMID:27294271
NASA Astrophysics Data System (ADS)
Trisna, B. N.; Budayasa, I. K.; Siswono, T. Y. E.
2018-01-01
Metacognition is related to improving student learning outcomes. This study describes students' metacognitive activities in solving a combinatorics problem. Two undergraduate students of mathematics education from STKIP PGRI Banjarmasin were selected as the participants of the study, one with a holist cognitive style and the other a serialist. Data were collected through task-based interviews in which the task contained a combinatorial problem. The interviews were conducted twice, using equivalent problems, at two different times. The study found that the participants showed metacognitive awareness (A), metacognitive evaluation (E), and metacognitive regulation (R) that operated as pathways from one function to another. Both the holist and the serialist engaged in metacognitive activities, but along different pathways. The path of metacognitive activities of the holist is AERCAE-AAEER-ACRECCECC-AREERCE with the AERAE-AER-ARE-ARERE pattern, while the path of metacognitive activities of the serialist is AERCA-AAER-ACRERCERC-AREEEE with the AERA-AER-ARERER-ARE pattern. As an implication of these findings, teachers/lecturers need to pay attention to metacognitive awareness when they begin a stage in mathematical problem solving. Teachers/lecturers need to emphasize to students that in mathematical problem solving, processes and results are equally important.
An overview of the ENEA activities in the field of coupled codes NPP simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, C.; Negrenti, E.; Sepielli, M.
2012-07-01
In the framework of nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Economic Development) is in charge of defining and pursuing all the necessary steps for the development of an NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from the participation in international benchmarking activities, such as the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events such as the Fukushima accident, are reported. The ultimate goal of these activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field, in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the national nuclear safety authority, and R&D activities on NPP designs. (authors)
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), using study-specific input data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AERs based on region-specific inputs were compared with those estimated using literature-based distributions. While the modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
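The LBL infiltration model underlying the algorithm combines a home's effective leakage area with stack (indoor-outdoor temperature difference) and wind driving forces. The Python sketch below uses the common ASHRAE form of the model with representative one-story coefficients; the coefficient values and inputs are illustrative assumptions, not the study's calibrated parameters.

    import math

    def lbl_aer(ela_cm2, volume_m3, delta_t_k, wind_ms,
                cs=0.000145, cw=0.000104):
        # Q [L/s] = ELA[cm^2] * sqrt(Cs*|dT| + Cw*U^2); AER = Q / house volume.
        q_ls = ela_cm2 * math.sqrt(cs * abs(delta_t_k) + cw * wind_ms ** 2)
        return q_ls * 3.6 / volume_m3     # L/s -> m^3/h, normalized by volume

    # e.g., 500 cm^2 leakage area, 400 m^3 house, 20 K difference, 4 m/s wind:
    print(round(lbl_aer(500, 400, 20, 4), 2))  # ~0.31 per hour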
Measuring and modeling air exchange rates inside taxi cabs in Los Angeles, California
NASA Astrophysics Data System (ADS)
Shu, Shi; Yu, Nu; Wang, Yueyan; Zhu, Yifang
2015-12-01
Air exchange rates (AERs) have a direct impact on traffic-related air pollutant (TRAP) levels inside vehicles. Taxi drivers are occupationally exposed to TRAP on a daily basis, yet there is limited measurement of AERs in taxi cabs. To fill this gap, AERs were quantified in 22 representative Los Angeles taxi cabs, including 10 Prius, 5 Crown Victoria, 3 Camry, 3 Caravan, and 1 Uplander, under realistic driving (RD) conditions. To further study the impacts of window position and ventilation settings on taxi AERs, additional tests were conducted on 14 taxis with windows closed (WC) and on the other 8 taxis with not only windows closed but also medium fan speed (WC-MFS) under outdoor air mode. Under RD conditions, the AERs in all 22 cabs had a mean of 63 h-1 with a median of 38 h-1. Similar AERs were observed under the WC condition when compared to those measured under the RD condition. Under the WC-MFS condition, AERs were significantly increased in all taxi cabs compared with those measured under the RD condition. A Generalized Estimating Equation (GEE) model was developed, and the modeling results showed that vehicle model was a significant factor in determining the AERs in taxi cabs under RD conditions. Driving speed and car age were positively associated with AERs but not statistically significant. Overall, AERs measured in taxi cabs were much higher than the typical AERs people usually encounter in indoor environments such as homes, offices, and even regular passenger vehicles.
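In-cabin AERs of this magnitude are typically measured with a tracer-gas decay: with no indoor source, C(t) = C0*exp(-AER*t), so AER = ln(C0/Ct)/t. The sketch below shows the arithmetic; the concentrations and timing are generic examples, not the study's protocol.

    import math

    def decay_aer_per_hour(c_start, c_end, minutes):
        # Air exchange rate (1/h) from tracer concentrations above background.
        return math.log(c_start / c_end) / (minutes / 60.0)

    # e.g., a tracer decaying from 3000 to 600 ppm above background in 2 minutes:
    print(round(decay_aer_per_hour(3000.0, 600.0, 2.0), 1))  # ~48.3 per hour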
Are Medications Involved in Vision and Intracrancial Pressure Changes Seen in Spaceflight?
NASA Technical Reports Server (NTRS)
Faust, K. M.; Wotring, V. E.
2014-01-01
The Food and Drug Administration Adverse Event Reports (FDA AER) from 2009-2011 were used to create a database from millions of known and suspected medication-related adverse events among the general public. Vision changes, sometimes associated with intracranial pressure changes (VIIP), have been noted in some long-duration crewmembers. Changes in vision and blood pressure (which can subsequently affect intracranial pressure) are fairly common side effects of medications. The purpose of this study was to explore the possibility of medication involvement in crew VIIP symptoms. Reports of suspected medication-related adverse events may be filed with the Food and Drug Administration (FDA) by medical professionals or consumers. Quarterly compilations of these reports are available for public download. Adverse Event Reporting System (AERS) reports from 1/1/2009-6/30/2012 were downloaded and compiled into a searchable database for this study. Reports involving individuals under the age of 18 and older than 65 were excluded from this analysis. Case reports involving chronic diseases such as cancer, diabetes, multiple sclerosis and other serious conditions were also excluded. A scan of the medical literature for medication-related VIIP-like adverse events was used to create a list of suspect medications. These medications, as well as certain medications used frequently by ISS crew, were used to query the database. Queries for use of suspected medications were run, and the nature of the symptoms reported in those cases was tabulated. Symptoms searched in the FDA AERS were chosen to include the typical symptoms noted in crewmembers with VIIP. Vision symptoms searched were: visual acuity reduced, visual impairment, and vitreous floaters. Pressure changes included: abnormal sensation in eye, intracranial pressure increased, intraocular pressure increased, optic neuritis, optic neuropathy, and papilloedema. Limited demographic information is included with the FDA AERS; relevant data were also sorted by age and sex from each report. RESULTS: Steroid-containing oral contraceptives had the highest number of reports associated with vision (n=166) and pressure symptoms (n=54). Corticosteroid-containing medications were also high; prednisone, for example, had 137 reports of vision issues and 79 of pressure issues. Pain relievers were also a medication class with vision and pressure-related adverse events reported. Common over-the-counter medications such as acetaminophen, aspirin and ibuprofen each had multiple reports for both vision and pressure symptoms. The antimicrobial medications ciprofloxacin and Diflucan were also associated with a number of vision- and pressure-related AERS reports. Unexpectedly, pseudoephedrine and promethazine were mentioned in fewer than 20 reports each over the 3.5 years of data examined. The FDA AERS represents a wealth of data, but there are several limitations to its use. The data are entered by the public or medical professionals, but are not checked for accuracy or completeness and may even be entered multiple times. The causal relationship between a particular adverse event and a particular medication is not tested. The cases represent a broad spectrum of demographics, occupations, and health histories, and thus do not model the astronaut population well. There is no information on the frequency of use of a medication for comparison purposes; it is not possible to assign a rate for any particular adverse event. Nonetheless, there are compelling trends.
Use of corticosteroid-containing medications, pain relievers (even over-the-counter), and oral contraceptives was associated with higher numbers of vision- or intracranial pressure-related adverse events. In general, more vision problems than pressure problems were reported. Certain medications that were once suspected of playing a role in the crew VIIP syndrome, namely pseudoephedrine and promethazine, were found to have extremely low numbers of VIIP-like AERS in the FDA data. However, crew use of corticosteroid-containing medications and pain relievers may warrant additional investigation.
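As a hedged illustration of the query workflow this abstract describes (compiling quarterly AERS files into a searchable database and tabulating vision- and pressure-related reports per medication), here is a minimal Python/pandas sketch. The table layout and column names (`case_id`, `drug_name`, `reaction`) are hypothetical stand-ins for the real AERS DRUG/REAC file structure, and the toy rows are invented for demonstration only.

```python
import pandas as pd

# Hypothetical, simplified stand-ins for the FDA AERS quarterly drug and
# reaction tables; the real files are keyed by a case/report identifier.
drugs = pd.DataFrame({
    "case_id": [1, 2, 3],
    "drug_name": ["PREDNISONE", "IBUPROFEN", "PREDNISONE"],
})
reactions = pd.DataFrame({
    "case_id": [1, 2, 3],
    "reaction": ["VISUAL ACUITY REDUCED", "PAPILLOEDEMA",
                 "INTRAOCULAR PRESSURE INCREASED"],
})

# Symptom search lists taken from the abstract above.
VISION = {"VISUAL ACUITY REDUCED", "VISUAL IMPAIRMENT", "VITREOUS FLOATERS"}
PRESSURE = {"ABNORMAL SENSATION IN EYE", "INTRACRANIAL PRESSURE INCREASED",
            "INTRAOCULAR PRESSURE INCREASED", "OPTIC NEURITIS",
            "OPTIC NEUROPATHY", "PAPILLOEDEMA"}

def count_reports(drug: str) -> tuple[int, int]:
    """Count distinct case reports for a drug that mention a vision or a
    pressure symptom from the study's search lists."""
    cases = set(drugs.loc[drugs.drug_name == drug.upper(), "case_id"])
    hits = reactions[reactions.case_id.isin(cases)]
    n_vision = hits[hits.reaction.isin(VISION)].case_id.nunique()
    n_pressure = hits[hits.reaction.isin(PRESSURE)].case_id.nunique()
    return n_vision, n_pressure

print(count_reports("prednisone"))  # -> (1, 1) on this toy data
```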
AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6
Collins, William J.; Lamarque, Jean -François; Schulz, Michael; ...
2017-02-09
The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically reactive gases. These are specifically near-term climate forcers (NTCFs: methane, tropospheric ozone and aerosols, and their precursors), nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions. 1. How have anthropogenic emissions contributed to global radiative forcing and affected regional climate over the historical period? 2. How might future policies (on climate, air quality and land use) affect the abundances of NTCFs and their climate impacts? 3. How do uncertainties in historical NTCF emissions affect radiative forcing estimates? 4. How important are climate feedbacks to natural NTCF emissions, atmospheric composition, and radiative effects? These questions will be addressed through targeted simulations with CMIP6 climate models that include an interactive representation of tropospheric aerosols and atmospheric chemistry. These simulations build on the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the CMIP6 historical simulations, and future projections performed elsewhere in CMIP6, allowing the contributions from aerosols and/or chemistry to be quantified. As a result, specific diagnostics are requested as part of the CMIP6 data request to highlight the chemical composition of the atmosphere, to evaluate the performance of the models, and to understand differences in behaviour between them.
Forte, Lindy; Shum, Cynthia
2011-10-03
Background: Reprocessing of endoscopes generally requires labour-intensive manual cleaning followed by high-level disinfection in an automated endoscope reprocessor (AER). EVOTECH Endoscope Cleaner and Reprocessor (ECR) is approved for fully automated cleaning and disinfection, whereas AERs require manual cleaning prior to the high-level disinfection procedure. The purpose of this economic evaluation was to determine the cost-efficiency of the ECR versus AER methods of endoscopy reprocessing in an actual practice setting. Methods: A time and motion study was conducted at a Canadian hospital to collect data on the personnel resources and consumable supplies costs associated with the use of EVOTECH ECR versus manual cleaning followed by AER with Medivators DSD-201. Reprocessing of all endoscopes was observed and timed for both reprocessor types over three days. Laboratory staff members were interviewed regarding the consumption and cost of all disposable supplies and equipment. The exact Wilcoxon rank sum test was used for assessing differences in total cycle reprocessing time. Results: Endoscope reprocessing was significantly shorter with the ECR than with manual cleaning followed by AER. The differences in median time were 12.46 minutes per colonoscope (p < 0.0001), 6.31 minutes per gastroscope (p < 0.0001), and 5.66 minutes per bronchoscope (p = 0.0040). Almost 2 hours of direct labour time was saved daily with the ECR. The total per-cycle cost of consumables and labour for maintenance was slightly higher for EVOTECH ECR versus manual cleaning followed by AER ($8.91 versus $8.31, respectively). Including the cost of direct labour time consumed in reprocessing scopes, the per-cycle and annual costs of using the EVOTECH ECR were less than the cost of manual cleaning followed by AER disinfection ($11.50 versus $11.88). Conclusions: The EVOTECH ECR was more efficient and less costly to use for the reprocessing of endoscopes than manual cleaning followed by AER disinfection. Although the cost of consumable supplies required to reprocess endoscopes with EVOTECH ECR was slightly higher, the value of the labour time saved with EVOTECH ECR more than offset the additional consumables cost. The increased efficiency with EVOTECH ECR could lead to even further cost savings by shifting endoscopy laboratory personnel responsibilities, but further study is required. PMID:21967345
del Carmen Burón-Barral, Maria; Gosink, Khoosheh K.; Parkinson, John S.
2006-01-01
The Escherichia coli Aer protein contains an N-terminal PAS domain that binds flavin adenine dinucleotide (FAD), senses aerotactic stimuli, and communicates with the output signaling domain. To explore the roles of the intervening F1 and HAMP segments in Aer signaling, we isolated plasmid-borne aerotaxis-defective mutations in a host strain lacking all chemoreceptors of the methyl-accepting chemotaxis protein (MCP) family. Under these conditions, Aer alone established the cell's run/tumble swimming pattern and modulated that behavior in response to oxygen gradients. We found two classes of Aer mutants: null and clockwise (CW) biased. Most mutant proteins exhibited the null phenotype: failure to elicit CW flagellar rotation, no aerosensing behavior in MCP-containing hosts, and no apparent FAD-binding ability. However, null mutants had low Aer expression levels caused by rapid degradation of apparently nonnative subunits. Their functional defects probably reflect the absence of a protein product. In contrast, CW-biased mutant proteins exhibited normal expression levels, wild-type FAD binding, and robust aerosensing behavior in MCP-containing hosts. The CW lesions evidently shift unstimulated Aer output to the CW signaling state but do not block the Aer input-output pathway. The distribution and properties of null and CW-biased mutations suggest that the Aer PAS domain may engage in two different interactions with HAMP and the HAMP-proximal signaling domain: one needed for Aer maturation and another for promoting CW output from the Aer signaling domain. Most aerotaxis-defective null mutations in these regions seemed to affect maturation only, indicating that these two interactions involve structurally distinct determinants. PMID:16672601
NASA Astrophysics Data System (ADS)
Gómez-Rodríguez, F.; Linares-Barranco, A.; Paz, R.; Miró-Amarante, L.; Jiménez, G.; Civit, A.
2007-05-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity among a huge number of neurons located on different chips [1]. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels. That is, more active neurons generate more events per unit time and access the interchip communication channel more frequently than neurons with low activity. In neuromorphic system development, AER brings several advantages for real-time image processing systems: (1) AER represents information as a time-continuous stream rather than as frames; (2) AER sends the most important information first (although this depends on the sender); (3) AER allows information to be processed as soon as it is received. When AER is used in the artificial vision field, each pixel is treated as a neuron, so a pixel's intensity is represented as a sequence of events; by modifying the number and frequency of these events, it is possible to perform image filtering. In this paper we present four image filters using AER: (a) noise addition and suppression, (b) brightness modification, (c) single moving object tracking, and (d) geometrical transformations (rotation, translation, reduction and magnification). For testing and debugging, we use the USB-AER board developed by the Robotic and Technology of Computers Applied to Rehabilitation (RTCAR) research group. This board is based on an FPGA devoted to managing the AER functionality, and also includes a microcontroller for USB communication, 2 Mbytes of RAM, and 2 AER ports (one for input and one for output).
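A minimal sketch of the pixel-as-neuron idea described above, assuming intensity is mapped to a Poisson event rate; the (timestamp, x, y) event format and the rate constant are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_to_aer(frame: np.ndarray, t_window: float, max_rate: float):
    """Convert a grayscale frame (values in [0, 1]) into a list of
    (timestamp, x, y) address events: one Poisson spike train per pixel,
    with rate proportional to pixel intensity."""
    events = []
    for (y, x), intensity in np.ndenumerate(frame):
        n = rng.poisson(intensity * max_rate * t_window)
        for t in np.sort(rng.uniform(0.0, t_window, n)):
            events.append((t, x, y))
    events.sort(key=lambda e: e[0])  # the AER bus serializes events in time
    return events

frame = np.array([[0.0, 0.5],
                  [1.0, 0.1]])
print(len(frame_to_aer(frame, t_window=0.04, max_rate=1000)))
```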
Energetic Profile of the Basketball Exercise Simulation Test in Junior Elite Players.
Latzel, Richard; Hoos, Olaf; Stier, Sebastian; Kaufmann, Sebastian; Fresz, Volker; Reim, Dominik; Beneke, Ralph
2017-11-28
To analyze the energetic profile of the basketball exercise simulation test (BEST). 10 male elite junior basketball players (age: 15.5 ± 0.6 yr, height: 180 ± 9 cm, body mass: 66.1 ± 11.2 kg) performed a modified BEST (20 circuits consisting of jumping, sprinting, jogging, shuffling, and short breaks) simulating professional basketball game play. Circuit time, sprint time, sprint decrement, oxygen uptake (VO2), heart rate (HR), and blood lactate concentration (BLC) were obtained. Metabolic energy and metabolic power above rest (W_tot, P_tot), as well as the energy shares of aerobic metabolism (W_aer), glycolysis (W_blc), and high-energy phosphates (W_PCr), were calculated from VO2 during exercise, net lactate production, and the fast component of post-exercise VO2 kinetics, respectively. W_aer, W_blc, and W_PCr reflect 89 ± 2%, 5 ± 1%, and 6 ± 1% of the total energy needed, respectively. Assuming an aerobic replenishment of PCr energy stores during short breaks, the adjusted energy share yielded W_aer: 66 ± 4%, W_blc: 5 ± 1%, and W_PCr: 29 ± 1%. W_aer and W_PCr were negatively correlated with sprint time (-0.72 and -0.59, respectively), which was not the case for W_blc. Consistent with general findings on energy system interaction during repeated high-intensity exercise bouts, the intermittent profile of the BEST relies primarily on aerobic energy combined with repetitive supplementation by anaerobic utilization of high-energy phosphates.
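The three-compartment partitioning described above can be sketched as follows. The oxygen-lactate equivalent (about 3 ml O2 per kg body mass per mmol/L net lactate) and the caloric equivalent of oxygen are assumed textbook constants, not values from the paper, and the inputs are illustrative.

```python
# Hedged sketch of a PCr-La-O2 energy partitioning of the kind the
# abstract describes. Constants below are common textbook assumptions.
O2_LACTATE_EQ = 3.0   # ml O2 per kg body mass per mmol/L net lactate
KJ_PER_L_O2 = 20.9    # approximate caloric equivalent of oxygen

def energy_shares(vo2_ex_l, delta_blc_mmol, mass_kg, epoc_fast_l):
    """Return (W_aer, W_blc, W_PCr) in kJ, computed from exercise VO2
    above rest (L), net blood lactate accumulation (mmol/L), body mass
    (kg), and the fast component of post-exercise VO2 (L)."""
    w_aer = vo2_ex_l * KJ_PER_L_O2
    w_blc = (O2_LACTATE_EQ * delta_blc_mmol * mass_kg / 1000.0) * KJ_PER_L_O2
    w_pcr = epoc_fast_l * KJ_PER_L_O2
    return w_aer, w_blc, w_pcr

# Illustrative inputs only, not the study's data.
print(energy_shares(vo2_ex_l=60.0, delta_blc_mmol=4.0,
                    mass_kg=66.0, epoc_fast_l=4.0))
```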
Breen, Michael S; Breen, Miyuki; Williams, Ronald W; Schultz, Bradley D
2010-12-15
A critical aspect of air pollution exposure models is the estimation of the air exchange rate (AER) of individual homes, where people spend most of their time. The AER, which is the airflow into and out of a building, is a primary mechanism for entry of outdoor air pollutants and removal of indoor source emissions. The mechanistic Lawrence Berkeley Laboratory (LBL) AER model was linked to a leakage area model to predict AER from questionnaires and meteorology. The LBL model was also extended to include natural ventilation (LBLX). Using literature-reported parameter values, AER predictions from the LBL and LBLX models were compared to data from 642 daily AER measurements across 31 detached homes in central North Carolina, with corresponding questionnaires and meteorological observations. Data were collected on seven consecutive days during each of four consecutive seasons. For the individual model-predicted and measured AER, the median absolute difference was 43% (0.17 h⁻¹) and 40% (0.17 h⁻¹) for the LBL and LBLX models, respectively. Additionally, a literature-reported empirical scale factor (SF) AER model was evaluated, which showed a median absolute difference of 50% (0.25 h⁻¹). The capability of the LBL, LBLX, and SF models could help reduce the AER uncertainty in air pollution exposure models used to develop exposure metrics for health studies.
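For orientation, the LBL infiltration model referenced above is commonly stated as an airflow Q = ELA·sqrt(Cs·|ΔT| + Cw·U²) through the effective leakage area, with AER = Q/V. A minimal sketch, where the stack and wind coefficients are illustrative placeholders (in practice they depend on house height and wind shielding):

```python
import math

# Illustrative placeholder coefficients, not fitted values.
CS = 0.000145  # stack coefficient, (L/s)^2 per cm^4 per K
CW = 0.000104  # wind coefficient, (L/s)^2 per cm^4 per (m/s)^2

def lbl_aer(ela_cm2, dT_K, wind_ms, volume_m3):
    """AER (1/h) from effective leakage area (cm^2), indoor-outdoor
    temperature difference (K), wind speed (m/s), and house volume (m^3)."""
    q_ls = ela_cm2 * math.sqrt(CS * abs(dT_K) + CW * wind_ms ** 2)  # L/s
    return q_ls * 3.6 / volume_m3  # L/s -> m^3/h, divided by volume

print(round(lbl_aer(ela_cm2=500, dT_K=10, wind_ms=3, volume_m3=300), 2), "1/h")
```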
Phetrak, Athit; Lohwacharin, Jenyuk; Sakai, Hiroshi; Murakami, Michio; Oguma, Kumiko; Takizawa, Satoshi
2014-06-01
Anion exchange resins (AERs) with different properties were evaluated for their ability to remove dissolved organic matter (DOM) and bromide, and to reduce disinfection by-product (DBP) formation potentials of water collected from a eutrophic surface water source in Japan. DOM and bromide were simultaneously removed by all selected AERs in batch adsorption experiments. A polyacrylic magnetic ion exchange resin (MIEX®) showed faster dissolved organic carbon (DOC) removal than other AERs because it had the smallest resin bead size. Aromatic DOM fractions with molecular weight larger than 1600 Da and fluorescent organic fractions of fulvic acid- and humic acid-like compounds were efficiently removed by all AERs. Polystyrene AERs were more effective in bromide removal than polyacrylic AERs. This result implied that the properties of AERs, i.e. material and resin size, influenced not only DOM removal but also bromide removal efficiency. MIEX® showed significant chlorinated DBP removal because it had the highest DOC removal within 30 min, whereas polystyrene AERs efficiently removed brominated DBPs, especially brominated trihalomethane species. The results suggested that, depending on source water DOM and bromide concentration, selecting a suitable AER is a key factor in effective control of chlorinated and brominated DBPs in drinking water. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Fgf16 is essential for pectoral fin bud formation in zebrafish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Ryohei; Kamei, Eriko; Hotta, Yuuhei
2006-08-18
Zebrafish pectoral fin bud formation is an excellent model for studying morphogenesis. Fibroblast growth factors (Fgfs) and sonic hedgehog (shh) are essential for pectoral fin bud formation. We found that Fgf16 was expressed in the apical ectodermal ridge (AER) of fin buds. A knockdown of Fgf16 function resulted in no fin bud outgrowth. Fgf16 is required for cell proliferation and differentiation in the mesenchyme and the AER of the fin buds, respectively. Fgf16 functions downstream of Fgf10, a mesenchymal factor, signaling to induce the expression of Fgf4 and Fgf8 in the AER. Fgf16 in the AER and shh in the zone of polarizing activity (ZPA) interact to induce and/or maintain each other's expression. These findings have revealed that Fgf16, a newly identified AER factor, plays a crucial role in pectoral fin bud outgrowth by mediating the interactions of AER-mesenchyme and AER-ZPA.
Chloride Ion Adsorption Capacity of Anion Exchange Resin in Cement Mortar.
Lee, Yunsu; Lee, Hanseung; Jung, Dohyun; Chen, Zhengxin; Lim, Seungmin
2018-04-05
This paper presents the effect of anion exchange resin (AER) on the adsorption of chloride ions in cement mortar. The kinetic and equilibrium behaviors of AER were investigated in distilled water and Ca(OH)₂ saturated solutions, and then the adsorption of chloride ions by the AER in the mortar specimen was determined. The AER was used as a partial replacement for sand in the mortar specimen. The mortar specimen was coated with epoxy, except for an exposed surface, and then immersed in a NaCl solution for 140 days. The chloride content in the mortar specimen was characterized by energy dispersive X-ray fluorescence analysis and electron probe microanalysis. The results showed that the AER could adsorb the chloride ions from the solution rapidly but had a relatively low performance when the pH of its surrounding environment increased. When the AER was mixed in the cement mortar, its chloride content was higher than that of the cement matrix around it, which confirms the chloride ion adsorption capacity of the AER.
1997-06-21
Renal disease in people with insulin-dependent diabetes (IDDM) continues to pose a major health threat. Inhibitors of angiotensin-converting enzyme (ACE) slow the decline of renal function in advanced renal disease, but their effects at earlier stages are unclear, and the degree of albuminuria at which treatment should start is not known. We carried out a randomised, double-blind, placebo-controlled trial of the ACE inhibitor lisinopril in 530 men and women with IDDM aged 20-59 years with normoalbuminuria or microalbuminuria. Patients were recruited from 18 European centres, and were not on medication for hypertension. Resting blood pressure at entry was at least 75 and no more than 90 mm Hg diastolic, and no more than 155 mm Hg systolic. Urinary albumin excretion rate (AER) was centrally assessed by means of two overnight urine collections at baseline, 6, 12, 18, and 24 months. There were no differences in baseline characteristics by treatment group; mean AER was 8.0 micrograms/min in both groups, and the prevalence of microalbuminuria was 13% and 17% in the placebo and lisinopril groups, respectively. On intention-to-treat analysis at 2 years, AER was 2.2 micrograms/min lower in the lisinopril than in the placebo group, a percentage difference of 18.8% (95% CI 2.0-32.7, p = 0.03), adjusted for baseline AER and centre. In people with normoalbuminuria, the treatment difference was 1.0 microgram/min (12.7% [-2.9 to 26.0], p = 0.1). In those with microalbuminuria, however, the treatment difference was 34.2 micrograms/min (49.7% [-14.5 to 77.9], p = 0.1; for interaction, p = 0.04). For patients who completed 24 months on the trial, the final treatment difference in AER was 38.5 micrograms/min in those with microalbuminuria at baseline (p = 0.001), and 0.23 microgram/min in those with normoalbuminuria at baseline (p = 0.6). There was no treatment difference in hypoglycaemic events or in metabolic control as assessed by glycated haemoglobin. Lisinopril slows the progression of renal disease in normotensive IDDM patients with little or no albuminuria, though the greatest effect was in those with microalbuminuria (AER ≥ 20 micrograms/min). Our results show that lisinopril does not increase the risk of hypoglycaemic events in IDDM.
Measurement of air exchange rates in different indoor environments using continuous CO2 sensors.
You, Yan; Niu, Can; Zhou, Jian; Liu, Yating; Bai, Zhipeng; Zhang, Jiefeng; He, Fei; Zhang, Nan
2012-01-01
A new air exchange rate (AER) monitoring method using continuous CO2 sensors was developed and validated through both laboratory experiments and field studies. Controlled laboratory simulation tests were conducted in a 1-m3 environmental chamber at different AERs (0.1-10.0 hr⁻¹). AERs were determined using the decay method based on box model assumptions. Field tests were conducted in classrooms, dormitories, meeting rooms and apartments during 2-5 weekdays using CO2 sensors coupled with data loggers. Indoor temperature, relative humidity (RH), and CO2 concentrations were continuously monitored while outdoor parameters combined with on-site climate conditions were recorded. Statistical results indicated that good laboratory performance was achieved: duplicate precision was within 10%, and the measured AERs were 90%-120% of the true AERs. Average AERs were 1.22, 1.37, 1.10, 1.91 and 0.73 hr⁻¹ in dormitories, air-conditioned classrooms, classrooms with an air circulation cooling system, reading rooms, and meeting rooms, respectively. In an elderly particulate matter exposure study, all the homes had AER values ranging from 0.29 to 3.46 hr⁻¹ in fall, and 0.12 to 1.39 hr⁻¹ in winter, with a median AER of 1.15 hr⁻¹.
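The decay method mentioned above follows from the well-mixed single-zone box model: after the CO2 source stops, C(t) − C_out decays exponentially at the AER, so the slope of ln(C − C_out) versus time gives −AER. A minimal sketch (the outdoor baseline of 400 ppm is an assumption):

```python
import numpy as np

def aer_from_decay(t_hr, c_ppm, c_out_ppm=400.0):
    """Estimate AER (1/h) by linear regression of ln(C - C_out) vs time."""
    y = np.log(np.asarray(c_ppm) - c_out_ppm)
    slope, _ = np.polyfit(np.asarray(t_hr), y, 1)
    return -slope

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
c = 400.0 + 1000.0 * np.exp(-1.2 * t)  # synthetic decay at AER = 1.2 1/h
print(round(aer_from_decay(t, c), 2))  # ~1.2
```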
A Continuum of Renin-Independent Aldosteronism in Normotension
Baudrand, Rene; Guarda, Francisco J.; Fardella, Carlos; Hundemer, Gregory; Brown, Jenifer; Williams, Gordon; Vaidya, Anand
2017-01-01
Primary aldosteronism (PA) is a severe form of autonomous aldosteronism. Milder forms of autonomous and renin-independent aldosteronism may be common, even in normotension. We characterized aldosterone secretion in 210 normotensives who had suppressed plasma renin activity (PRA<1.0 ng/mL/h), completed an oral sodium suppression test, received an infusion of angiotensin II (AngII), and had measurements of blood pressure (BP) and renal plasma flow (RPF). Continuous associations between urinary aldosterone excretion rate (AER), renin, and potassium handling were investigated. Severe autonomous aldosterone secretion that was consistent with confirmed PA was defined based on accepted criteria of an AER >12 mcg/24h with urinary sodium excretion >200 mmol/24h. Across the population, there were strong and significant associations between higher AER and higher urinary potassium excretion, higher AngII-stimulated aldosterone, and lower PRA, suggesting a continuum of renin-independent aldosteronism and mineralocorticoid receptor activity. Autonomous aldosterone secretion that fulfilled confirmatory criteria for PA was detected in 29 participants (14%). Normotensives with evidence suggestive of confirmed PA had higher 24h urinary AER (20.2±12.2 vs. 6.2±2.9 mcg/24h, P<0.001) as expected, but also higher AngII-stimulated aldosterone (12.4±8.6 vs. 6.6±4.3 ng/dL, P<0.001) and lower 24h urinary sodium-to-potassium excretion (2.69±0.65 vs. 3.69±1.50 mmol/mmol, P=0.001); however, there were no differences in age, aldosterone-to-renin ratio, BP, or RPF between the two groups. These findings indicate a continuum of renin-independent aldosteronism and mineralocorticoid receptor activity in normotension that ranges from subtle to overtly dysregulated and autonomous. Longitudinal studies are needed to determine whether this spectrum of autonomous aldosterone secretion contributes to hypertension and cardiovascular disease. PMID:28289182
A review of air exchange rate models for air pollution exposure assessments.
Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther
2014-11-01
A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.
Yousefzadeh, Amirreza; Jablonski, Miroslaw; Iakymchuk, Taras; Linares-Barranco, Alejandro; Rosado, Alfredo; Plana, Luis A; Temple, Steve; Serrano-Gotarredona, Teresa; Furber, Steve B; Linares-Barranco, Bernabe
2017-10-01
Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or a system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present a fully serial implementation using bidirectional SATA connectors with a pair of low-voltage differential signaling (LVDS) wires for each direction. The proposed implementation can multiplex a number of conventional parallel AER links for each physical LVDS connection. It uses flow control, clock correction, and byte alignment techniques to transmit 32-bit address events reliably over multiplexed serial connections. The setup has been tested using commercial Spartan6 FPGAs attaining a maximum event transmission speed of 75 Meps (Mega events per second) for 32-bit events at a line rate of 3.0 Gbps. Full HDL codes (vhdl/verilog) and example demonstration codes for the SpiNNaker platform will be made available.
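A quick back-of-envelope check of the reported 75 Meps figure, assuming 8b/10b line coding (standard for SATA-class LVDS serial links; the paper's exact framing overhead is an assumption here):

```python
# 3.0 Gbps line rate with 8b/10b coding leaves 2.4 Gbps of payload;
# a 32-bit event then fits 75 million times per second.
line_rate_bps = 3.0e9
payload_bps = line_rate_bps * 8 / 10   # 8b/10b coding overhead (assumed)
event_bits = 32
print(payload_bps / event_bits / 1e6, "Meps upper bound")  # 75.0
```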
An introduction to analytical methods for the postmarketing surveillance of veterinary vaccines.
Siev, D
1999-01-01
Any analysis of spontaneous AER data must consider the many biases inherent in the observation and reporting of vaccine adverse events. The absence of a clear probability structure requires statistical procedures to be used in a spirit of exploratory description rather than definitive confirmation. The extent of such descriptions should be temperate, without the implication that they extend to parent populations. It is important to recognize the presence of overdispersion in selecting methods and constructing models. Important stochastic or systematic features of the data may always be unknown. Our attempts to delineate what constitutes an AER have not eliminated all the fuzziness in its definition. Some count every event in a report as a separate AER. Besides confusing the role of event and report, this introduces a complex correlational structure, since multiple event descriptions received in a single report can hardly be considered independent. The many events described by one reporter would then become inordinately weighted. The alternative is to record an AER once, regardless of how many event descriptions it includes. As a practical compromise, many regard the simultaneous submission of several report forms by one reporter as a single AER, and the next submission by that reporter as another AER. This method is reasonable when reporters submit AERs very infrequently. When individual reporters make frequent reports, it becomes difficult to justify the inconsistency of counting multiple events as a single AER when they are submitted together, but as separate AERs when they are reported at different times. While either choice is imperfect, the latter approach is currently used by the USDA and its licensed manufacturers in developing a mandatory postmarketing surveillance system for veterinary immunobiologicals in the United States. Under the proposed system, summaries of an estimated 10,000 AERs received annually by the manufacturers would be submitted to the USDA. In quantitative summaries, AERs received from lay consumers are usually weighted equally with those received from veterinary health professionals, although arguments have been advanced for separate classifications. The emphasis on AER rate estimation differentiates the surveillance of veterinary vaccines by the USDA CVB from the surveillance of veterinary drugs as practiced by the Food and Drug Administration (FDA) Center for Veterinary Medicine (CVM). The FDA CVM does, in fact, perform a retrodictive causality assessment for individual AERs (Parkhie et al., 1995). This distinction reflects the differences between vaccines and drugs, as well as the difference in regulatory philosophy between the FDA and the USDA. The modified Kramer algorithm (Kramer et al., 1979) used by the FDA relies on features more appropriate to drug therapy than vaccination, such as an ongoing treatment regimen which allows evaluation of the response to dechallenge and rechallenge. In tracking AERs, the FDA has emphasized the inclusion of clinical manifestations on labels and inserts, while the USDA has been reluctant to have such information appear in product literature or to use postmarketing data for this purpose. The potential for the misuse of spontaneous AER data is great. Disinformation is likely when the nature of this type of data is misunderstood and inappropriate analytical methods blindly employed. A greater danger lies in the glib transformation of AER data into something else entirely. 
Since approval before publication is not required, advertisements for veterinary vaccines appear with claims such as "over 3 million doses, 99.9905% satisfaction rating," or "11,500,000 doses, 99.98% reaction free." These claims, presumably based on spontaneous AERs, are almost fraudulent in their deceptiveness. Are we to suppose that 11.5 million vaccinations were observed for reactions? In comparing the two advertisements, we find the second presumed AER rate is double the first. (ABSTRACT TRUNCATED)
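The closing comparison can be checked with one line of arithmetic: converting the two advertised percentages into presumed AER rates gives a ratio of roughly two.

```python
# Presumed AER rates implied by the two advertising claims quoted above.
rate_1 = 100 - 99.9905   # 0.0095% of doses
rate_2 = 100 - 99.98     # 0.02% of doses
print(round(rate_2 / rate_1, 2))  # ~2.11, i.e. roughly double
```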
Breen, Michael S; Burke, Janet M; Batterman, Stuart A; Vette, Alan F; Godwin, Christopher; Croghan, Carry W; Schultz, Bradley D; Long, Thomas C
2014-11-07
Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. The residential air exchange rate (AER), which is the rate of exchange of indoor air with outdoor air, is an important determinant of house-to-house (spatial) and temporal variations of air pollution infiltration. Our goal was to evaluate and apply mechanistic models to predict AERs for 213 homes in the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS), a cohort study of traffic-related air pollution exposures and respiratory effects in asthmatic children living near major roads in Detroit, Michigan. We used a previously developed model (LBL), which predicts AER from meteorology and questionnaire data on building characteristics related to air leakage, and an extended version of this model (LBLX) that includes natural ventilation from open windows. As a critical and novel aspect of our AER modeling approach, we performed a cross validation, which included both parameter estimation (i.e., model calibration) and model evaluation, based on daily AER measurements from a subset of 24 study homes on five consecutive days during two seasons. The measured AER varied between 0.09 and 3.48 h⁻¹ with a median of 0.64 h⁻¹. For the individual model-predicted and measured AER, the median absolute difference was 29% (0.19 h⁻¹) for both the LBL and LBLX models. The LBL and LBLX models predicted 59% and 61% of the variance in the AER, respectively. Daily AER predictions for all 213 homes during the three-year study (2010-2012) showed considerable house-to-house variation from building leakage differences, and temporal variation from outdoor temperature and wind speed fluctuations. Using this novel approach, NEXUS will be one of the first epidemiology studies to apply calibrated and home-specific AER models, and to include the spatial and temporal variations of AER for over 200 individual homes across multiple years in an exposure assessment, in support of improving risk estimates.
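A sketch of the evaluation metric quoted above (the median over homes of the absolute difference between model-predicted and measured AER, reported both as a rate and as a percentage of the measurement), with illustrative numbers:

```python
import numpy as np

def median_abs_diff(pred, meas):
    """Median absolute difference, as (rate in 1/h, percent of measured)."""
    pred, meas = np.asarray(pred), np.asarray(meas)
    abs_diff = np.abs(pred - meas)
    return np.median(abs_diff), np.median(abs_diff / meas) * 100.0

pred = [0.5, 0.9, 1.4, 0.3]   # illustrative model predictions (1/h)
meas = [0.6, 0.7, 1.6, 0.4]   # illustrative measurements (1/h)
d, pct = median_abs_diff(pred, meas)
print(f"{d:.2f} 1/h, {pct:.0f}%")
```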
Verification of a neutronic code for transient analysis in reactors with Hex-z geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.
Due to the geometry of the fuel bundles, to simulate reactors such as VVER reactors it is necessary to develop methods that can deal with hexagonal prisms as basic elements of the spatial discretization. The main features of a code based on a high-order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization of this equation are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code. (authors)
Development of a Graphics Based Automated Emergency Response System (AERS) for Rail Transit Systems
DOT National Transportation Integrated Search
1989-05-01
This report presents an overview of the second generation Automated Emergency Response System (AERS2). Developed to assist transit systems in responding effectively to emergency situations, AERS2 is a microcomputer-based information retrieval system ...
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
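For intuition, a minimal tempotron-style decision rule as a sketch: the membrane potential is a weighted sum of postsynaptic-potential kernels triggered by input address events, and the neuron reports a positive class if the potential ever crosses threshold. The time constants, weights, and spike times below are illustrative, not the paper's trained values.

```python
import numpy as np

TAU_M, TAU_S = 15.0, 3.75  # membrane and synaptic time constants (ms)

def kernel(dt):
    """Difference-of-exponentials PSP kernel, zero for dt < 0 (causality)."""
    dt = np.maximum(dt, 0.0)
    return np.exp(-dt / TAU_M) - np.exp(-dt / TAU_S)

def fires(spike_times, weights, t_grid, threshold=1.0):
    """True if the summed, weighted PSPs ever reach threshold."""
    v = sum(w * kernel(t_grid[:, None] - np.asarray(ts)[None, :]).sum(axis=1)
            for w, ts in zip(weights, spike_times))
    return bool(np.any(v >= threshold))

spikes = [[5.0, 12.0], [20.0]]   # spike times (ms) for two input addresses
print(fires(spikes, weights=[1.2, 0.9],
            t_grid=np.arange(0.0, 50.0, 0.5)))  # True with these weights
```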
The effect of obesity and type 1 diabetes on renal function in children and adolescents.
Franchini, Simone; Savino, Alessandra; Marcovecchio, M Loredana; Tumini, Stefano; Chiarelli, Francesco; Mohn, Angelika
2015-09-01
Early signs of renal complications can be common in youths with type 1 diabetes (T1D). Recently, there has been increasing interest in potential renal complications associated with obesity, paralleling the epidemics of this condition, although there are limited data in children. We assessed whether obese children and adolescents present signs of early alterations in renal function similar to those of non-obese peers with T1D. Eighty-three obese (age: 11.6 ± 3.0 yr), 164 non-obese T1D (age: 12.4 ± 3.2 yr), and 71 non-obese control (age: 12.3 ± 3.2 yr) children and adolescents were enrolled in the study. Anthropometric parameters and blood pressure were measured. Renal function was assessed by albumin excretion rate (AER), serum cystatin C, creatinine and estimated glomerular filtration rate (e-GFR), calculated using Bouvet's formula. Obese and non-obese T1D youths had similar AER [8.9(5.9-10.8) vs. 8.7(5.9-13.1) µg/min] and e-GFR levels (114.8 ± 19.6 vs. 113.4 ± 19.1 mL/min), which were higher than in controls [AER: 8.1(5.9-8.7) µg/min, e-GFR: 104.7 ± 18.9 mL/min]. The prevalence of microalbuminuria and hyperfiltration was similar between obese and T1D youths and higher than in their control peers (6.0 vs. 8.0 vs. 0%, p = 0.02; 15.9 vs. 15.9 vs. 4.3%, p = 0.03, respectively). Body mass index (BMI) z-score was independently related to e-GFR (r = 0.328; p < 0.001) and AER (r = 0.138; p = 0.017). Hemoglobin A1c (HbA1c) correlated with AER (r = 0.148; p = 0.007) but not with e-GFR (r = 0.041; p = 0.310). Obese children and adolescents show early alterations in renal function compared to normal-weight peers, and they have renal profiles similar to those of age-matched peers with T1D. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Time-recovering PCI-AER interface for bio-inspired spiking systems
NASA Astrophysics Data System (ADS)
Paz-Vicente, R.; Linares-Barranco, A.; Cascado, D.; Vicente, S.; Jimenez, G.; Civit, A.
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip multi-layered AER systems it is absolutely necessary to have a computer interface that allows one (a) to read AER interchip traffic into the computer and visualize it on screen, and (b) to inject a sequence of events at some point of the AER structure. This is necessary for testing and debugging complex AER systems. This paper presents a PCI-to-AER interface that dispatches a sequence of events received from the PCI bus, with embedded timing information establishing when each event should be delivered. A set of specialized state machines has been introduced to recover from the possible time delays introduced by the asynchronous AER bus. On the input channel, the interface captures events, assigns a timestamp to each, and delivers them through the PCI bus to MATLAB applications. It has been implemented in real-time hardware using VHDL and tested on a PCI-AER board, developed by the authors, that includes a Spartan II 200 FPGA. The demonstration hardware is currently capable of sending and receiving events at a peak rate of 8.3 Mev/sec, and a typical rate of 1 Mev/sec.
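A toy software analogue of the dispatch behaviour described above: each event carries the time at which it should be delivered, and when the (asynchronous) bus stalls, later events are sent back-to-back instead of accumulating further delay. The hardware uses dedicated state machines for this; the catch-up policy below is an assumption for illustration.

```python
import time

def dispatch(events, send):
    """events: iterable of (timestamp_s, address), timestamps relative to
    start; send: callable that pushes one address onto the AER bus."""
    t0 = time.monotonic()
    for ts, addr in events:
        wait = ts - (time.monotonic() - t0)
        if wait > 0:       # on schedule: hold the event until its time
            time.sleep(wait)
        send(addr)         # late events go out immediately, back-to-back

# Five events, 1 ms apart, on a toy 64-address space.
dispatch([(0.001 * i, i % 64) for i in range(5)], send=print)
```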
Vlaeminck, Siegfried E.; Terada, Akihiko; Smets, Barth F.; De Clippeleir, Haydée; Schaubroeck, Thomas; Bolca, Selin; Demeestere, Lien; Mast, Jan; Boon, Nico; Carballa, Marta; Verstraete, Willy
2010-01-01
Aerobic ammonium-oxidizing bacteria (AerAOB) and anoxic ammonium-oxidizing bacteria (AnAOB) cooperate in partial nitritation/anammox systems to remove ammonium from wastewater. In this process, large granular microbial aggregates enhance the performance, but little is known about granulation so far. In this study, three suspended-growth oxygen-limited autotrophic nitrification-denitrification (OLAND) reactors with different inoculation and operation (mixing and aeration) conditions, designated reactors A, B, and C, were used. The test objectives were (i) to quantify the AerAOB and AnAOB abundance and the activity balance for the different aggregate sizes and (ii) to relate aggregate morphology, size distribution, and architecture putatively to the inoculation and operation of the three reactors. A nitrite accumulation rate ratio (NARR) was defined as the net aerobic nitrite production rate divided by the anoxic nitrite consumption rate. The smallest reactor A, B, and C aggregates were nitrite sources (NARR, >1.7). Large reactor A and C aggregates were granules capable of autonomous nitrogen removal (NARR, 0.6 to 1.1) with internal AnAOB zones surrounded by an AerAOB rim. Around 50% of the autotrophic space in these granules consisted of AerAOB- and AnAOB-specific extracellular polymeric substances. Large reactor B aggregates were thin film-like nitrite sinks (NARR, <0.5) in which AnAOB were not shielded by an AerAOB layer. Voids and channels occupied 13 to 17% of the anoxic zone of AnAOB-rich aggregates (reactors B and C). The hypothesized granulation pathways include granule replication by division and budding and are driven by growth and/or decay based on species-specific physiology and by hydrodynamic shear and mixing. PMID:19948857
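The nitrite accumulation rate ratio (NARR) defined above, together with the cutoffs the study reports for interpreting aggregates, as a small sketch:

```python
def narr(aerobic_no2_production, anoxic_no2_consumption):
    """Net aerobic nitrite production rate / anoxic nitrite consumption rate."""
    return aerobic_no2_production / anoxic_no2_consumption

def classify(r):
    """Interpretation bands reported in the abstract above."""
    if r > 1.7:
        return "nitrite source"
    if 0.6 <= r <= 1.1:
        return "autonomous nitrogen removal (granule)"
    if r < 0.5:
        return "nitrite sink"
    return "intermediate"

print(classify(narr(2.0, 1.0)))  # nitrite source
```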
Unstructured Adaptive Meshes: Bad for Your Memory?
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob
2003-01-01
This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
Effect of temperature on the standard metabolic rates of juvenile and adult Exopalaemon carinicauda
NASA Astrophysics Data System (ADS)
Zhang, Chengsong; Li, Fuhua; Xiang, Jianhai
2015-03-01
Ridgetail white prawn (Exopalaemon carinicauda) are of significant economic importance in China, where they are widely cultured. However, there is little information on the basic biology of this species. We evaluated the effect of temperature (16, 19, 22, 25, 28, 31, and 34°C) on the standard metabolic rates (SMRs) of juvenile and adult E. carinicauda in the laboratory under static conditions. The oxygen consumption rate (OCR), ammonia-N excretion rate (AER), and atomic ratio of oxygen consumed to nitrogen consumed (O:N ratio) of juvenile and adult E. carinicauda were significantly influenced by temperature (P < 0.05). Both the OCR and AER of juveniles increased significantly with increasing temperature from 16 to 34°C, but the maximum OCR for adults was at 31°C. Juvenile shrimp exhibited a higher OCR than the adults from 19 to 34°C. There was no significant difference between the AERs of the two life stages from 16 to 31°C (P > 0.05). The O:N ratio in juveniles was significantly higher than that in the adults over the entire temperature range (P < 0.05). The temperature coefficient (Q10) of the OCR and AER ranged from 5.03 to 0.86 and from 6.30 to 0.85 for the adults, respectively, and from 6.09 to 1.03 and from 3.66 to 1.80 for the juveniles, respectively. The optimal temperature range for growth of the juvenile and adult shrimp was from 28 to 31°C, based on Q10 and SMR values. Results from the present study may be used to guide pond culture production of E. carinicauda.
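The temperature coefficient Q10 used above is the factor by which a metabolic rate changes per 10°C increase, Q10 = (R2/R1)^(10/(T2−T1)). A worked sketch with illustrative numbers (not the paper's data):

```python
def q10(rate1, rate2, t1, t2):
    """Temperature coefficient from two rates at two temperatures (deg C)."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# An OCR that doubles between 22 and 28 deg C gives Q10 ~ 3.17.
print(round(q10(1.0, 2.0, 22.0, 28.0), 2))
```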
Reichman, Rivka; Shirazi, Elham; Colliver, Donald G; Pennell, Kelly G
2017-02-22
Vapor intrusion (VI) is well known to be difficult to characterize because indoor air (IA) concentrations exhibit considerable temporal and spatial variability in homes throughout impacted communities. To overcome this and other limitations, most VI science has focused on subsurface processes; however, there is a need to understand the role of aboveground processes, especially building operation, in the context of VI exposure risks. This tutorial review focuses on building air exchange rates (AERs) and reviews the literature on building AERs to inform decision making at VI sites. Commonly referenced AER values used by VI regulators and practitioners do not account for the variability in AER values that have been published in indoor air quality studies. The information presented herein highlights that seasonal differences, short-term weather conditions, and home age and air conditioning status, which are well known to influence AERs, are also likely to influence IA concentrations at VI sites. Results of a 3D VI model in combination with relevant AER values reveal that IA concentrations can vary by more than one order of magnitude due to air conditioning status and by one order of magnitude due to house age. Collectively, the data presented strongly support the need to consider AERs when making decisions at VI sites.
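The inverse scaling of IA concentrations with AER can be illustrated with a steady-state, well-mixed single-zone mass balance, C_in = E/(AER·V). This is a deliberate simplification of the 3D VI model used in the review, and the entry rate, house volume, and AER values below are illustrative only:

```python
def steady_indoor_conc(entry_ug_per_h, volume_m3, aer_per_h):
    """Steady-state indoor concentration (ug/m^3) for a constant vapor
    entry rate E into a well-mixed volume V ventilated at the AER."""
    return entry_ug_per_h / (aer_per_h * volume_m3)

for aer in (0.18, 0.45, 1.26):   # plausible low/median/high home AERs
    print(aer, round(steady_indoor_conc(500.0, 400.0, aer), 2), "ug/m^3")
```

Running this spans roughly 1 to 7 ug/m^3, consistent with the review's point that realistic AER variability alone moves IA concentrations by an order of magnitude.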
Kern, Elizabeth O; Erhard, Penny; Sun, Wanjie; Genuth, Saul; Weiss, Miriam F
2010-01-01
Background: Urinary markers were tested as predictors of macroalbuminuria or microalbuminuria in type 1 diabetes. Study Design: Nested case-control of participants in the Diabetes Control and Complications Trial (DCCT). Setting & Participants: Eighty-seven cases of microalbuminuria were matched to 174 controls in a 1:2 ratio, while 4 cases were matched to 4 controls in a 1:1 ratio, resulting in 91 cases and 178 controls for microalbuminuria. Fifty-five cases of macroalbuminuria were matched to 110 controls in a 1:2 ratio. Controls were free of micro/macroalbuminuria when their matching case first developed micro/macroalbuminuria. Predictors: Urinary N-acetyl-β-D-glucosaminidase, pentosidine, AGE fluorescence, albumin excretion rate (AER). Outcomes: Incident microalbuminuria (two consecutive annual AER > 40 but ≤ 300 mg/day), or macroalbuminuria (AER > 300 mg/day). Measurements: Stored urine samples from DCCT entry, and 1-9 years later when macroalbuminuria or microalbuminuria occurred, were measured for the lysosomal enzyme N-acetyl-β-D-glucosaminidase and the advanced glycosylation end-products (AGEs) pentosidine and AGE-fluorescence. AER and adjustor variables were obtained from the DCCT. Results: Sub-microalbuminuric levels of AER at baseline independently predicted microalbuminuria (adjusted OR 1.83; p<.001) and macroalbuminuria (adjusted OR 1.82; p<.001). Baseline N-acetyl-β-D-glucosaminidase independently predicted macroalbuminuria (adjusted OR 2.26; p<.001) and microalbuminuria (adjusted OR 1.86; p<.001). Baseline pentosidine predicted macroalbuminuria (adjusted OR 6.89; p=.002). Baseline AGE fluorescence predicted microalbuminuria (adjusted OR 1.68; p=.02). However, adjusted for N-acetyl-β-D-glucosaminidase, pentosidine and AGE-fluorescence lost their predictive association with macroalbuminuria and microalbuminuria, respectively. Limitations: Use of angiotensin-converting-enzyme inhibitors was not directly ascertained, although their use was proscribed during the DCCT. Conclusions: Early in type 1 diabetes, repeated measurements of AER and urinary NAG may identify individuals susceptible to future diabetic nephropathy. Combining the two markers may yield a better predictive model than either one alone. Renal tubule stress may be more severe, reflecting abnormal renal tubule processing of AGE-modified proteins, among individuals susceptible to diabetic nephropathy. PMID:20138413
Appropriate prediction of residential air exchange rate (AER) is important for estimating human exposures in the residential microenvironment, as AER drives the infiltration of outdoor-generated air pollutants indoors. AER differences among homes may result from a number of fact...
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Weisenstein, Debra; Shia, Run-Li; Sze, N. D.
1997-01-01
This is the first semi-annual report for NAS5-97039 summarizing work performed for January 1997 through June 1997. Work in this project is related to NAS1-20666, also funded by NASA ACMAP. The work funded in this project also benefits from work at AER associated with the AER three-dimensional isentropic transport model funded by NASA AEAP and the AER two-dimensional climate-chemistry model (co-funded by Department of Energy). The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER two-dimensional chemistry-transport model, the AER two-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry.
2013-01-08
Laboratory Evaluation of Expedient Low-Temperature Concrete Admixtures for Repairing Blast Holes in Cold ... (ERDC/CRREL TR-13-1). This research ignores effects on long-term durability, trafficability, temperature, rebar corrosion, and other concerns that are of minimal ... concrete because it can cause corrosion of steel reinforcement. However, the corrosion problem develops slowly with time; therefore, this problem has a ...
A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.
Humbert, H; Gallard, H; Croué, J-P
2012-03-15
The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high-DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetics of DOC removal for short contact times (i.e. <15 min). For resin doses higher than 700 mg/L and median bead sizes below 250 μm, DOC removal remained constant after 30 min of contact time, with very high removal rates (80%). Optimum AER treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). More severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More importantly, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment. Copyright © 2011 Elsevier Ltd. All rights reserved.
Felicidade, I; Lima, J D; Pesarini, J R; Monreal, A C D; Mantovani, M S; Ribeiro, L R; Oliveira, R J
2014-11-28
Polyphenolic compounds present in rosemary have been found to have antioxidant properties and anticarcinogenic activity, and to increase the detoxification of pro-carcinogens. The aim of this study was to determine the effect of the aqueous extract of rosemary (AER) on mutagenicity induced by methylmethane sulfonate in meristematic cells of Allium cepa, as well as to describe its mode of action. Anti-mutagenicity experiments were carried out with 3 different concentrations of AER, which alone showed no mutagenic effects. In antimutagenicity experiments, AER showed chemopreventive activity in cultured meristematic cells of A. cepa against exposure to methylmethane sulfonate. Additionally, post-treatment and simultaneous treatment using pre-incubation protocols were the most effective. Evaluation of the different protocols and the percent reduction in DNA damage indicated bioantimutagenic as well as desmutagenic modes of action for AER. AER may therefore be chemopreventive and antimutagenic.
Growth of Aeromonas species on increasing concentrations of sodium chloride.
Delamare, A P; Costa, S O; Da Silveira, M M; Echeverrigaray, S
2000-01-01
The growth of 16 strains of Aeromonas, representing 12 species of the genus, was examined at different salt levels (0-1.71 M NaCl). All the strains grew on media with 0.34 M NaCl, and nine on media with 0.68 M. Two strains, Aer. enteropelogenes and Aer. trota, were able to grow on media with 0.85 M and 1.02 M NaCl, respectively. Comparison of the growth curves of Aer. hydrophila ATCC 7966 and Aer. trota ATCC 49657 at four concentrations of NaCl (0.08, 0.34, 0.68 and 1.02 M) confirmed the high tolerance of Aer. trota, and indicated that high concentrations of salt increase the lag time and decrease the maximum growth rate. However, both strains were able to grow, slowly, in at least 0.68 M NaCl, a sodium chloride concentration currently used as a food preservative.
High-quality endoscope reprocessing decreases endoscope contamination.
Decristoforo, P; Kaltseis, J; Fritz, A; Edlinger, M; Posch, W; Wilflingseder, D; Lass-Flörl, C; Orth-Höller, D
2018-02-24
Several outbreaks of severe infections due to contamination of gastrointestinal (GI) endoscopes, mainly duodenoscopes, have been described. The rate of microbial endoscope contamination varies dramatically in literature. The aim of this multicentre prospective study was to evaluate the hygiene quality of endoscopes and automated endoscope reprocessors (AERs) in Tyrol/Austria. In 2015 and 2016, a total of 463 GI endoscopes and 105 AERs from 29 endoscopy centres were analysed by a routine (R) and a combined routine and advanced (CRA) sampling procedure and investigated for microbial contamination by culture-based and molecular-based analyses. The contamination rate of GI endoscopes was 1.3%-4.6% according to the national guideline, suggesting that 1.3-4.6 patients out of 100 could have had contacts with hygiene-relevant microorganisms through an endoscopic intervention. Comparison of R and CRA sampling showed 1.8% of R versus 4.6% of CRA failing the acceptance criteria in phase I and 1.3% of R versus 3.0% of CRA samples failing in phase II. The most commonly identified indicator organism was Pseudomonas spp., mainly Pseudomonas oleovorans. None of the tested viruses were detected in 40 samples. While AERs in phase I failed (n = 9, 17.6%) mainly due to technical faults, phase II revealed lapses (n = 6, 11.5%) only on account of microbial contamination of the last rinsing water, mainly with Pseudomonas spp. In the present study the contamination rate of endoscopes was low compared with results from other European countries, possibly due to the high quality of endoscope reprocessing, drying and storage. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Co-Production of Quality in the Applied Education Research Scheme
ERIC Educational Resources Information Center
Ozga, Jenny
2007-01-01
This contribution looks at the ways in which research quality is defined and addressed in the Applied Education Research Scheme (AERS), particularly within the network on Schools and Social Capital, which is one of the four areas of work within the overall AERS scheme. AERS is a five-year programme, funded jointly by the Scottish Executive and the…
Anion-exchange resins (AERs) separate As(V) and As(III) in solution by retaining As(V) and allowing As(III) to pass through. AERs offer several advantages including portability, ease of use, and affordability (relative to other As speciation methods). The use of AERs for the inst...
Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé
2009-09-01
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union-funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
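The address-event idea is compact enough to sketch in a few lines: each spike travels as nothing but the address of the neuron that fired it, time-multiplexed onto a shared bus, so more active neurons naturally claim more bandwidth. A minimal Python sketch of such an event stream (the rates and names below are illustrative, not CAVIAR's actual interfaces):

```python
import random

def aer_event_stream(rates_hz, duration_s, seed=0):
    """Merge per-neuron Poisson spike trains into one time-ordered
    stream of (timestamp, address) tuples: the essence of AER, where
    a spike is just its source address on a shared, time-shared bus."""
    rng = random.Random(seed)
    events = []
    for address, rate in enumerate(rates_hz):
        t = 0.0
        while True:
            t += rng.expovariate(rate)   # Poisson inter-spike intervals
            if t >= duration_s:
                break
            events.append((t, address))
    events.sort()                        # bus arbitration by time stamp
    return events

# Neuron 2 is most active, so it occupies most of the bus bandwidth.
stream = aer_event_stream([5.0, 20.0, 200.0], duration_s=0.1)
print(len(stream), stream[:5])
```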
Different regulation of limb development by p63 transcript variants.
Kawata, Manabu; Taniguchi, Yuki; Mori, Daisuke; Yano, Fumiko; Ohba, Shinsuke; Chung, Ung-Il; Shimogori, Tomomi; Mills, Alea A; Tanaka, Sakae; Saito, Taku
2017-01-01
The apical ectodermal ridge (AER), located at the distal end of each limb bud, is a key signaling center which controls outgrowth and patterning of the proximal-distal axis of the limb through secretion of various molecules. Fibroblast growth factors (FGFs), particularly Fgf8 and Fgf4, are representative molecules produced by AER cells and are essential for maintaining the AER and cell proliferation in the underlying mesenchyme, whereas the Jag2-Notch pathway negatively regulates the AER and limb development. p63, a transcription factor of the p53 family, is expressed in the AER and is indispensable for limb formation. However, the underlying mechanisms and specific roles of p63 variants are unknown. Here, we quantified the expression of p63 variants in mouse limbs from embryonic day (E) 10.5 to E12.5, and found that ΔNp63γ was strongly expressed in limbs at all stages, while TAp63γ expression was rapidly increased in the later stages. Fluorescence-activated cell sorting analysis of limb bud cells from reporter mouse embryos at E11.5 revealed that all variants were abundantly expressed in AER cells, and their expression was very low in mesenchymal cells. We then generated AER-specific p63 knockout mice by mating mice with a null and a flox allele of p63, and Msx2-Cre mice (Msx2-Cre;p63Δ/fl). Msx2-Cre;p63Δ/fl neonates showed limb malformation that was more obvious in distal elements. Expression of various AER-related genes was decreased in Msx2-Cre;p63Δ/fl limb buds and embryoid bodies formed by p63-knockdown induced pluripotent stem cells. Promoter analyses and chromatin immunoprecipitation assays demonstrated Fgf8 and Fgf4 as transcriptional targets of ΔNp63γ, and Jag2 as that of TAp63γ. Furthermore, TAp63γ overexpression exacerbated the phenotype of Msx2-Cre;p63Δ/fl mice. These data indicate that ΔNp63 and TAp63 control limb development through transcriptional regulation of different target molecules with different roles in the AER. Our findings contribute to further understanding of the molecular network of limb development.
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-01-01
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer that supplies the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode and communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The work also demonstrates the suitability of AER as a communication protocol between processing and actuation. PMID:24264330
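The final actuation layer drives the motors with pulse frequency modulation (PFM), which maps naturally onto spikes: the command level sets the frequency of fixed-width drive pulses. A hedged sketch of that mapping, with the function name and parameter values invented for illustration:

```python
def pfm_pulse_times(command, duration_s, f_max_hz=1000.0):
    """Pulse Frequency Modulation sketch: a normalized motor command
    (0..1) sets the frequency of fixed-width pulses, so a spike rate
    translates directly into average motor power."""
    if command <= 0:
        return []
    period = 1.0 / (command * f_max_hz)   # pulse spacing in seconds
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 6))
        t += period
    return times

# half-power command for 10 ms -> one pulse every 2 ms
print(pfm_pulse_times(0.5, 0.010))
```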
NASA Astrophysics Data System (ADS)
Wolff, S.; Fraknoi, A.; Hockey, T.; Biemesderfer, C.; Johnson, J.
2010-08-01
Astronomy Education Review (AER) is an online journal and magazine, covering astronomy and space science education and outreach. Founded in 2001 by Andrew Fraknoi and Sidney Wolff, and published until recently by National Optical Astronomy Observatories (NOAO), the journal is now a proud part of the journals operation of the American Astronomical Society (AAS) found online at http://aer.aip.org. If you are presenting at this conference, or reading the conference proceedings, you may be an ideal candidate to publish in AER. Later in this paper, we present some encouraging hints and guidelines for publishing in the journal.
NASA Astrophysics Data System (ADS)
Steefel, C. I.
2015-12-01
Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.
Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.
Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert
2017-10-01
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself to biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with number of routing nodes in the network, at 3.6×10^7 synaptic events per second per 16k-neuron node in the hierarchy.
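The hierarchical part of the routing is easy to picture as a tree of buses: an event climbs only as many levels as it needs to reach a bus it shares with its destination, so local traffic never touches the global bus. A toy sketch of that address arithmetic (fanout and node addresses are invented for illustration; the real HiAER routing tables also encode per-synapse delay and weight):

```python
def levels_climbed(src_node, dst_node, fanout):
    """Number of hierarchy levels an event climbs before the source
    and destination share a subtree; local events stay on local buses."""
    level = 0
    while src_node != dst_node:
        src_node //= fanout   # move up to the parent router
        dst_node //= fanout
        level += 1
    return level

# with fanout 4: nodes 0..3 share one local bus, 0..15 the global bus
print(levels_climbed(1, 2, fanout=4))   # 1 -> stays on the local bus
print(levels_climbed(1, 14, fanout=4))  # 2 -> crosses the global bus
```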
Fung, Chunkit; Fossa, Sophie D.; Milano, Michael T.; Sahasrabudhe, Deepak M.; Peterson, Derick R.; Travis, Lois B.
2015-01-01
Purpose Increased risks of incident cardiovascular disease (CVD) in patients with testicular cancer (TC) given chemotherapy in European studies were largely restricted to long-term survivors and included patients from the 1960s. Few population-based investigations have quantified CVD mortality during, shortly after, and for two decades after TC diagnosis in the era of cisplatin-based chemotherapy. Patients and Methods Standardized mortality ratios (SMRs) for CVD and absolute excess risks (AERs; number of excess deaths per 10,000 person-years) were calculated for 15,006 patients with testicular nonseminoma reported to the population-based Surveillance, Epidemiology, and End Results program (1980 to 2010) who initially received chemotherapy (n = 6,909) or surgery (n = 8,097) without radiotherapy and accrued 60,065 and 81,227 person-years of follow-up, respectively. Multivariable modeling evaluated effects of age, treatment, extent of disease, and other factors on CVD mortality. Results Significantly increased CVD mortality occurred after chemotherapy (SMR, 1.36; 95% CI, 1.03 to 1.78; n = 54) but not surgery (SMR, 0.81; 95% CI, 0.60 to 1.07; n = 50). Significant excess deaths after chemotherapy were restricted to the first year after TC diagnosis (SMR, 5.31; AER, 13.90; n = 11) and included cerebrovascular disease (SMR, 21.72; AER, 7.43; n = 5) and heart disease (SMR, 3.45; AER, 6.64; n = 6). In multivariable analyses, increased CVD mortality after chemotherapy was confined to the first year after TC diagnosis (hazard ratio, 4.86; 95% CI, 1.25 to 32.08); distant disease (P < .05) and older age at diagnosis (P < .01) were independent risk factors. Conclusion This is the first population-based study, to our knowledge, to quantify short- and long-term CVD mortality after TC diagnosis. The increased short-term risk of CVD deaths should be further explored in analytic studies that enumerate incident events and can serve to develop comprehensive evidence-based approaches for risk stratification and application of preventive and interventional efforts. PMID:26240226
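The two risk measures used here reduce to one line of arithmetic each: the SMR is the ratio of observed to expected deaths, and the AER re-expresses the excess as deaths per 10,000 person-years. A small sketch following the definitions in the abstract (the input counts below are invented for illustration, not the study's data):

```python
def smr_and_aer(observed, expected, person_years):
    """SMR = observed/expected deaths; AER = absolute excess risk,
    i.e. excess deaths per 10,000 person-years of follow-up."""
    smr = observed / expected
    aer = (observed - expected) / person_years * 10_000
    return smr, aer

# hypothetical cohort: 54 observed vs 39.7 expected deaths over 60,065 PY
print(smr_and_aer(54, 39.7, 60_065))  # -> SMR ~1.36, AER ~2.4
```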
Tabatabai, Reza; Baptista, Sheryl; Tiozzo, Caterina; Carraro, Gianni; Wheeler, Matthew; Barreto, Guillermo; Braun, Thomas; Li, Xiaokun; Hajihosseini, Mohammad K.; Bellusci, Saverio
2013-01-01
The vertebrate limbs develop through a coordinated series of inductive, growth and patterning events. Fibroblast Growth Factor receptor 2b (FGFR2b) signaling controls the induction of the Apical Ectodermal Ridge (AER), but its putative roles in limb outgrowth and patterning, as well as in AER morphology and cell behavior, have remained unclear. We have investigated these roles through graded and reversible expression of soluble dominant-negative FGFR2b molecules at various times during mouse limb development, using a doxycycline/transactivator/tet(O)-responsive system. Transient attenuation (≤24 hours) of FGFR2b-ligands signaling at E8.5, prior to limb bud induction, leads mostly to the loss or truncation of proximal skeletal elements with less severe impact on distal elements. Attenuation from E9.5 onwards, however, has an irreversible effect on the stability of the AER, resulting in a progressive loss of distal limb skeletal elements. The primary consequences of FGFR2b-ligands attenuation are a transient loss of cell adhesion and down-regulation of P63, β1-integrin and E-cadherin, and a permanent loss of cellular β-catenin organization and WNT signaling within the AER. Combined, these effects lead to the progressive transformation of the AER cells from pluristratified to squamous epithelial-like cells within 24 hours of doxycycline administration. These findings show that FGFR2b-ligands signaling has critical stage-specific roles in maintaining the AER during limb development. PMID:24167544
Vieux-Rochas, Maxence; Bouhali, Kamal; Mantero, Stefano; Garaffo, Giulia; Provero, Paolo; Astigiano, Simonetta; Barbieri, Ottavia; Caratozzolo, Mariano F.; Tullo, Apollonia; Guerrini, Luisa; Lallemand, Yvan; Robert, Benoît; Levi, Giovanni; Merlo, Giorgio R.
2013-01-01
The Dlx and Msx homeodomain transcription factors play important roles in the control of limb development. The combined disruption of Msx1 and Msx2, as well as that of Dlx5 and Dlx6, leads to limb patterning defects with anomalies in digit number and shape. Msx1;Msx2 double mutants are characterized by the loss of derivatives of the anterior limb mesoderm which is not observed in either of the simple mutants. Dlx5;Dlx6 double mutants exhibit hindlimb ectrodactyly. While the morphogenetic action of Msx genes seems to involve the BMP molecules, the mode of action of Dlx genes still remains elusive. Here, examining the limb phenotypes of combined Dlx and Msx mutants we reveal a new Dlx-Msx regulatory loop directly involving BMPs. In Msx1;Dlx5;Dlx6 triple mutant mice (TKO), besides the expected ectrodactyly, we also observe the hallmark morphological anomalies of Msx1;Msx2 double mutants, suggesting an epistatic role of Dlx5 and Dlx6 over Msx2. In Msx2;Dlx5;Dlx6 TKO mice we only observe an aggravation of the ectrodactyly defect without changes in the number of the individual components of the limb. Using a combination of qPCR, ChIP and bioinformatic analyses, we identify two Dlx/Msx regulatory pathways: 1) in the anterior limb mesoderm a non-cell autonomous Msx-Dlx regulatory loop involves BMP molecules through the AER and 2) in AER cells and, at later stages, in the limb mesoderm the regulation of Msx2 by Dlx5 and Dlx6 occurs also cell autonomously. These data bring new elements to decipher the complex AER-mesoderm dialogue that takes place during limb development and provide clues to understanding the etiology of congenital limb malformations. PMID:23382810
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors.
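Since the stimuli are synthetic FM tones whose base pitch (F0M) and pitch variability (F0SD) mimic prosodic contours, the generation step itself is simple to sketch. A hedged illustration only: the contour shape, modulation rate, and sample rate below are assumptions, not the study's stimulus code.

```python
import numpy as np

def fm_tone(f0m_hz, f0sd_hz, dur_s=1.0, fs=16_000, contour_hz=4.0):
    """Tone whose instantaneous frequency has mean F0M and standard
    deviation F0SD, following a slow sinusoidal 'prosodic' contour
    (a sine's SD is amplitude/sqrt(2), hence the scaling)."""
    t = np.arange(int(dur_s * fs)) / fs
    f_inst = f0m_hz + f0sd_hz * np.sqrt(2) * np.sin(2 * np.pi * contour_hz * t)
    phase = 2 * np.pi * np.cumsum(f_inst) / fs   # integrate frequency
    return np.sin(phase)

tone = fm_tone(f0m_hz=200.0, f0sd_hz=20.0)  # illustrative values only
```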
Poluzzi, Elisabetta; Raschi, Emanuel; Motola, Domenico; Moretti, Ugo; De Ponti, Fabrizio
2010-04-01
Drug-induced torsades de pointes (TdP) is a complex regulatory and clinical problem due to the rarity of this sometimes fatal adverse event. In this context, the US FDA Adverse Event Reporting System (AERS) is an important source of information, which can be applied to the analysis of TdP liability of marketed drugs. To critically evaluate the risk of antimicrobial-induced TdP by detecting alert signals in the AERS, on the basis of both quantitative and qualitative analyses. Reports of TdP from January 2004 through December 2008 were retrieved from the public version of the AERS. The absolute number of cases and reporting odds ratio as a measure of disproportionality were evaluated for each antimicrobial drug (quantitative approach). A list of drugs with suspected TdP liability (provided by the Arizona Centre of Education and Research on Therapeutics [CERT]) was used as a reference to define signals. In a further analysis, to refine signal detection, we identified TdP cases without co-medications listed by Arizona CERT (qualitative approach). Over the 5-year period, 374 reports of TdP were retrieved: 28 antibacterials, 8 antifungals, 1 antileprosy and 26 antivirals were involved. Antimicrobials more frequently reported were levofloxacin (55) and moxifloxacin (37) among the antibacterials, fluconazole (47) and voriconazole (17) among the antifungals, and lamivudine (8) and nelfinavir (6) among the antivirals. A significant disproportionality was observed for 17 compounds, including several macrolides, fluoroquinolones, linezolid, triazole antifungals, caspofungin, indinavir and nelfinavir. With the qualitative approach, we identified the following additional drugs or fixed dose combinations, characterized by at least two TdP cases without co-medications listed by Arizona CERT: ceftriaxone, piperacillin/tazobactam, cotrimoxazole, metronidazole, ribavirin, lamivudine and lopinavir/ritonavir. Disproportionality for macrolides, fluoroquinolones and most of the azole antifungals should be viewed as 'expected' according to Arizona CERT list. By contrast, signals were generated by linezolid, caspofungin, posaconazole, indinavir and nelfinavir. Drugs detected only by the qualitative approach should be further investigated by increasing the sensitivity of the method, e.g. by searching also for the TdP surrogate marker, prolongation of the QT interval. The freely available version of the FDA AERS database represents an important source to detect signals of TdP. In particular, our analysis generated five signals among antimicrobials for which further investigations and active surveillance are warranted. These signals should be considered in evaluating the benefit-risk profile of these drugs.
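The disproportionality statistic used here, the reporting odds ratio, comes straight from a 2x2 table of the report database, and its 95% CI follows from the standard error of the log odds ratio. A sketch with invented counts (not figures from the AERS analysis):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with 95% CI for a drug-event pair: a = reports of the event
    with the drug, b = other reports with the drug, c = the event with
    all other drugs, d = all remaining reports."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = ror * math.exp(-1.96 * se), ror * math.exp(1.96 * se)
    return ror, (lo, hi)

# hypothetical counts, for illustration only
print(reporting_odds_ratio(55, 12_000, 319, 2_500_000))
```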
Panduru, Nicolae M.; Forsblom, Carol; Saraheimo, Markku; Thorn, Lena; Bierhaus, Angelika; Humpert, Per M.; Groop, Per-Henrik
2013-01-01
OBJECTIVE Diabetic nephropathy (DN) has mainly been considered a glomerular disease, although tubular dysfunction may also play a role. This study assessed the predictive value for progression of a tubular marker, urinary liver-type fatty acid–binding protein (L-FABP), at all stages of DN. RESEARCH DESIGN AND METHODS At baseline, 1,549 patients with type 1 diabetes had an albumin excretion rate (AER) within normal reference ranges, 334 had microalbuminuria, and 363 had macroalbuminuria. Patients were monitored for a median of 5.8 years (95% CI 5.7–5.9). In addition, 208 nondiabetic subjects were studied. L-FABP was measured by ELISA and normalized with urinary creatinine. Different Cox proportional hazard models for the progression at every stage of DN were used to evaluate the predictive value of L-FABP. The potential benefit of using L-FABP alone or together with AER was assessed by receiver operating characteristic curve analyses. RESULTS L-FABP was an independent predictor of progression at all stages of DN. As would be expected, receiver operating characteristic curves for the prediction of progression were significantly larger for AER than for L-FABP, except for patients with baseline macroalbuminuria, in whom the areas were similar. Adding L-FABP to AER in the models did not significantly improve risk prediction of progression in favor of the combination of L-FABP plus AER compared with AER alone. CONCLUSIONS L-FABP is an independent predictor of progression of DN irrespective of disease stage. L-FABP used alone or together with AER may not improve the risk prediction of DN progression in patients with type 1 diabetes, but further studies are needed in this regard. PMID:23378622
Benchmark problems for numerical implementations of phase field models
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...
2016-10-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
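Spinodal decomposition of the kind the first benchmark targets is usually modeled with the Cahn-Hilliard equation, which is compact enough to sketch as an explicit finite-difference step. This is a generic 1-D illustration under assumed parameters, not the CHiMaD/NIST benchmark specification:

```python
import numpy as np

def cahn_hilliard_1d(n=128, dx=1.0, dt=0.01, steps=5000, kappa=1.0, seed=0):
    """Explicit 1-D Cahn-Hilliard sketch: dc/dt = lap(c**3 - c - kappa*lap(c)).
    Periodic boundaries; a noisy mixture coarsens into +1/-1 domains."""
    def lap(u):  # periodic second difference
        return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    rng = np.random.default_rng(seed)
    c = 0.1 * (rng.random(n) - 0.5)       # small noise around c = 0
    for _ in range(steps):
        mu = c**3 - c - kappa * lap(c)    # chemical potential
        c = c + dt * lap(mu)
    return c

profile = cahn_hilliard_1d()
print(profile.min(), profile.max())       # approaches the -1 / +1 phases
```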
Atmospheric Science Data Center
2018-04-23
DSCOVR_EPIC_L2_AER_01 The Aerosol UV product provides aerosol and UV products in three tiers. Tier 1 products include Absorbing Aerosol Index (AAI) and above-cloud-aerosol optical depth (ACAOD). Tier 2 ...
Amusia and protolanguage impairments in schizophrenia
Kantrowitz, J. T.; Scaramello, N.; Jakubovitz, A.; Lehrfeld, J. M.; Laukka, P.; Elfenbein, H. A.; Silipo, G.; Javitt, D. C.
2017-01-01
Background Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results Highly significant deficits were seen between patients and controls across auditory tasks (p<0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia. PMID:25066878
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
The Annual Energy Review (AER) presents the Energy Information Administration's historical energy statistics. For many series, statistics are given for every year from 1949 through 2002. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is required under Public Law 95-91 (Department of Energy Organization Act), Section 205(c), and is in keeping with responsibilities given to the Energy Information Administration (EIA) under Section 205(a)(2), which states: "The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information...." The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications. Related Publication: Readers of the AER may also be interested in EIA's Monthly Energy Review, which presents monthly updates of many of the data in the AER. Contact our National Energy Information Center for more information.
2007-01-01
In this Evaluation, we examine whether the Steris Reliance EPS--a flexible endoscope reprocessing system that was recently introduced to the U.S. market--offers meaningful advantages over "traditional" automated endoscope reprocessors (AERs). Most AERs on the market function similarly to one another. The Reliance EPS, however, includes some unique features that distinguish it from other AERs. For example, it incorporates a "boot" technology for loading the endoscopes into the unit without requiring a lot of endoscope-specific connectors, and it dispenses the germicide used to disinfect the endoscopes from a single-use container. This Evaluation looks at whether the unique features of this model make it a better choice than traditional AERs for reprocessing flexible endoscopes. Our study focuses on whether the Reliance EPS is any more likely to be used correctly--thereby reducing the likelihood that an endoscope will be reprocessed inadequately--and whether the unit possesses any design flaws that could lead to reprocessing failures. We detail the unit's advantages and disadvantages compared with other AERs, and we describe what current users have to say. Our conclusions will help facilities determine whether to select the Reliance EPS.
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
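For reference, the nonlinear shallow water equations that such an inundation model discretizes can be written in a standard form with Manning bottom friction (a textbook statement of the system; TUNA-RP's exact source terms are not spelled out in the abstract):

```latex
\begin{aligned}
&\frac{\partial \eta}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0,\\
&\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y}
  = -g\frac{\partial \eta}{\partial x}
    - \frac{g n^{2}\, u \sqrt{u^{2}+v^{2}}}{h^{4/3}},\\
&\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y}
  = -g\frac{\partial \eta}{\partial y}
    - \frac{g n^{2}\, v \sqrt{u^{2}+v^{2}}}{h^{4/3}},
\end{aligned}
```

where η is the free-surface elevation, h the total water depth, (u, v) the depth-averaged velocities, g gravity, and n the Manning roughness coefficient; the wet-dry moving-boundary algorithm decides where h > 0 as the inundation front advances.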
A quasi-experimental study of after-event reviews and leadership development.
Derue, D Scott; Nahrgang, Jennifer D; Hollenbeck, John R; Workman, Kristina
2012-09-01
We examine how structured reflection through after-event reviews (AERs) promotes experience-based leadership development and how people's prior experiences and personality attributes influence the impact of AERs on leadership development. We test our hypotheses in a time-lagged, quasi-experimental study that followed 173 research participants for 9 months and across 4 distinct developmental experiences. Findings indicate that AERs have a positive effect on leadership development, and this effect is accentuated when people are conscientious, open to experience, and emotionally stable and have a rich base of prior developmental experiences.
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
Experimental evaluation of the power balance model of speed skating.
de Koning, Jos J; Foster, Carl; Lampen, Joanne; Hettinga, Floor; Bobbert, Maarten F
2005-01-01
Prediction of speed skating performance with a power balance model requires assumptions about the kinetics of energy production, skating efficiency, and skating technique. The purpose of this study was to evaluate these parameters during competitive imitations for the purpose of improving model predictions. Elite speed skaters (n = 8) performed races and submaximal efficiency tests. External power output (Po) was calculated from movement analysis and aerodynamic models and ice friction measurements. Aerobic kinetics was calculated from breath-by-breath oxygen uptake (VO2). Aerobic power (Paer) was calculated from measured skating efficiency. Anaerobic power (Pan) kinetics was determined by subtracting Paer from Po. We found gross skating efficiency to be 15.8% (1.8%). In the 1,500-m event, the kinetics of Pan was characterized by a first-order system as Pan = 88 + 556e^(-0.0494t) (in W, where t is time). The rate constant for the increase in Paer was -0.153 s^(-1), the time delay was 8.7 s, and the peak Paer was 234 W; Paer was equal to 234[1 - e^(-0.153(t-8.7))] (in W). Skating position changed with preextension knee angle increasing and trunk angle decreasing throughout the event. We concluded the pattern of Paer to be quite similar to that reported during other competitive imitations, with the exception that the increase in Paer was more rapid. The pattern of Pan does not appear to fit an "all-out" pattern, with near zero values during the last portion of the event, as assumed in our previous model (De Koning JJ, de Groot G, and van Ingen Schenau GJ. J Biomech 25: 573-580, 1992). Skating position changed in ways different from those assumed in our previous model. In addition to allowing improved predictions, the results demonstrate the importance of observations in unique subjects to the process of model construction.
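The two fitted kinetics can be evaluated directly to reconstruct the power balance at any point in the 1,500-m race. A small sketch using the constants quoted above (watts, with t in seconds):

```python
import math

def anaerobic_power(t):
    """Pan(t) from the abstract's fitted 1,500-m data: 88 + 556e^(-0.0494t)."""
    return 88 + 556 * math.exp(-0.0494 * t)

def aerobic_power(t):
    """Paer(t): first-order rise after an 8.7 s delay to a 234 W peak."""
    return 234 * (1 - math.exp(-0.153 * (t - 8.7))) if t > 8.7 else 0.0

# total external power available 60 s into the race (~350 W)
t = 60.0
print(anaerobic_power(t) + aerobic_power(t))
```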
Coupling Processes between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra K.; Shia, Run-Lie; Scott, Courtney J.; Sze, Nien Dak
1998-01-01
This is the fourth semi-annual report for NAS5-97039, covering the time period July through December 1998. The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the Atmospheric and Environmental Research (AER) two-dimensional chemistry-transport model, the AER two-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. For this six month period, we report on a modeling study of new rate constant which modify the NOx/NOy ratio in the lower stratosphere; sensitivity to changes in stratospheric water vapor in the future atmosphere; a study of N2O and CH4 observations which has allowed us to adjust diffusion in the 2-D CTM in order to obtain appropriate polar vortex isolation; a study of SF6 and age of air with comparisons of models and measurements; and a report on the Models and Measurements II effort.
Combined anaerobic and aerobic digestion for increased solids reduction and nitrogen removal.
Novak, John T; Banjade, Sarita; Murthy, Sudhir N
2011-01-01
A unique sludge digestion system consisting of anaerobic digestion followed by aerobic digestion and then a recycle step where thickened sludge from the aerobic digester was recirculated back to the anaerobic unit was studied to determine the impact on volatile solids (VS) reduction and nitrogen removal. It was found that the combined anaerobic/aerobic/anaerobic (ANA/AER/ANA) system provided 70% VS reduction compared to 50% for conventional mesophilic anaerobic digestion with a 20 day SRT and 62% for combined anaerobic/aerobic (ANA/AER) digestion with a 15 day anaerobic and a 5 day aerobic SRT. Total Kjeldahl nitrogen (TKN) removal for the ANA/AER/ANA system was 70% for sludge wasted from the aerobic unit and 43.7% when wasted from the anaerobic unit. TKN removal was 64.5% for the ANA/AER system. Copyright © 2010 Elsevier Ltd. All rights reserved.
Edwards, Jessica C.; Johnson, Mark S.; Taylor, Barry L.
2007-01-01
SUMMARY Aerotaxis (oxygen-seeking) behavior in Escherichia coli is a response to changes in the electron transport system and not oxygen per se. Because changes in proton motive force (PMF) are coupled to respiratory electron transport, it is difficult to differentiate between PMF, electron transport or redox, all primary candidates for the signal sensed by the aerotaxis receptors, Aer and Tsr. We constructed electron transport mutants that produced different respiratory H+/e- stoichiometries. These strains expressed binary combinations of one NADH dehydrogenase and one quinol oxidase. We then introduced either an aer or tsr mutation into each mutant to create two sets of electron transport mutants. In vivo H+/e- ratios for strains grown in glycerol medium ranged from 1.46 ± 0.18 to 3.04 ± 0.47, but rates of respiration and growth were similar. The PMF jump in response to oxygen was proportional to the H+/e- ratio in each set of mutants (r2 = 0.986 to 0.996). The length of Tsr-mediated aerotaxis responses increased with the PMF jump (r2 = 0.988), but Aer-mediated responses did not correlate with either PMF changes (r2 = 0.297) or the rate of electron transport (r2 = 0.066). Aer-mediated responses were linked to NADH dehydrogenase I, although there was no absolute requirement. The data indicate that Tsr responds to changes in PMF, but strong Aer responses to oxygen are associated with redox changes in NADH dehydrogenase I PMID:16995896
Landry, Kelly A; Sun, Peizhe; Huang, Ching-Hua; Boyer, Treavor H
2015-01-01
This research advances the knowledge of ion-exchange of four non-steroidal anti-inflammatory drugs (NSAIDs) - diclofenac (DCF), ibuprofen (IBP), ketoprofen (KTP), and naproxen (NPX) - and one analgesic drug, paracetamol (PCM), by strong-base anion exchange resin (AER) in synthetic ureolyzed urine. Freundlich, Langmuir, Dubinin-Astakhov, and Dubinin-Radushkevich isotherm models were fit to experimental equilibrium data using the nonlinear least squares method. Favorable ion-exchange was observed for DCF, KTP, and NPX, whereas unfavorable ion-exchange was observed for IBP and PCM. The ion-exchange selectivity of the AER was enhanced by van der Waals interactions between the pharmaceutical and AER as well as the hydrophobicity of the pharmaceutical. For instance, the high selectivity of the AER for DCF was due to the combination of Coulombic interactions between the quaternary ammonium functional group of the resin and the carboxylate functional group of DCF, van der Waals interactions between the polystyrene resin matrix and the benzene rings of DCF, and possibly hydrogen bonding between the dimethylethanol amine functional group side chain and the carboxylate and amine functional groups of DCF. Based on analysis of covariance, the presence of multiple pharmaceuticals did not have a significant effect on ion-exchange removal when the NSAIDs were combined in solution. The AER reached saturation of the pharmaceuticals in a continuous-flow column at varying bed volumes following a decreasing order of DCF > NPX ≈ KTP > IBP. Complete regeneration of the column was achieved using a 5% (m/m) NaCl, equal-volume water-methanol solution. Results from multiple treatment and regeneration cycles provide insight into the practical application of pharmaceutical ion-exchange in ureolyzed urine using AER.
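Fitting the named isotherms to equilibrium data is a small nonlinear least squares exercise. A sketch for the Langmuir and Freundlich forms; the data points and initial guesses below are invented for illustration, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1 + k_l * c_e)

def freundlich(c_e, k_f, n):
    """Freundlich isotherm: q_e = K_F * C_e**(1/n)."""
    return k_f * c_e ** (1.0 / n)

# hypothetical equilibrium data (C_e in umol/L, q_e in umol/g)
c_e = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
q_e = np.array([35.0, 70.0, 110.0, 160.0, 185.0, 200.0])

p_lang, _ = curve_fit(langmuir, c_e, q_e, p0=[200.0, 0.05])
p_freu, _ = curve_fit(freundlich, c_e, q_e, p0=[30.0, 2.0])
print("Langmuir q_max, K_L:", p_lang)
print("Freundlich K_F, n:  ", p_freu)
```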
The Involvement of Lipid Peroxide-Derived Aldehydes in Aluminum Toxicity of Tobacco Roots
Yin, Lina; Mano, Jun'ichi; Wang, Shiwen; Tsuji, Wataru; Tanaka, Kiyoshi
2010-01-01
Oxidative injury of the root elongation zone is a primary event in aluminum (Al) toxicity in plants, but the injuring species remain unidentified. We verified the hypothesis that lipid peroxide-derived aldehydes, especially highly electrophilic α,β-unsaturated aldehydes (2-alkenals), participate in Al toxicity. Transgenic tobacco (Nicotiana tabacum) overexpressing Arabidopsis (Arabidopsis thaliana) 2-alkenal reductase (AER-OE plants), wild-type SR1, and an empty vector-transformed control line (SR-Vec) were exposed to AlCl3 on their roots. Compared with the two controls, AER-OE plants suffered less retardation of root elongation under AlCl3 treatment and showed more rapid regrowth of roots upon Al removal. Under AlCl3 treatment, the roots of AER-OE plants accumulated Al and H2O2 to the same levels as did the sensitive controls, while they accumulated lower levels of aldehydes and suffered less cell death than SR1 and SR-Vec roots. In SR1 roots, AlCl3 treatment markedly increased the contents of the highly reactive 2-alkenals acrolein, 4-hydroxy-(E)-2-hexenal, and 4-hydroxy-(E)-2-nonenal and other aldehydes such as malondialdehyde and formaldehyde. In AER-OE roots, accumulation of these aldehydes was significantly less. Growth of the roots exposed to 4-hydroxy-(E)-2-nonenal and (E)-2-hexenal was retarded more in SR1 than in AER-OE plants. Thus, the lipid peroxide-derived aldehydes, formed downstream of reactive oxygen species, injured root cells directly. Their suppression by AER provides a new defense mechanism against Al toxicity. PMID:20023145
Efficacy of low-dose oral sulodexide in the management of diabetic nephropathy.
Blouza, Samira; Dakhli, Sabeur; Abid, Hafaoua; Aissaoui, Mohamed; Ardhaoui, Ilhem; Ben Abdallah, Nejib; Ben Brahim, Samir; Ben Ghorbel, Imed; Ben Salem, Nabila; Beji, Soumaya; Chamakhi, Said; Derbel, Adnene; Derouiche, Fethi; Djait, Faycal; Doghri, Taieb; Fourti, Yamina; Gharbi, Faycel; Jellouli, Kamel; Jellazi, Nabil; Kamoun, Kamel; Khedher, Adel; Letaief, Amel; Limam, Ridha; Mekaouer, Awatef; Miledi, Riadh; Nagati, Khemaies; Naouar, Meriem; Sellem, Sami; Tarzi, Hichem; Turki, Selma; Zidi, Borni; Achour, Abdellatif
2010-01-01
Diabetic nephropathy (DN) is the single greatest cause of end-stage renal disease (ESRD). Without specific interventions, microalbuminuria (incipient nephropathy) gradually progresses to macroalbuminuria (overt nephropathy) within 10-15 years in about 80% of type 1 and 30% of type 2 diabetic patients, and to ESRD within a further 20 years in about 75% and 20%, respectively. A primary alteration in DN consists of decreased concentration of glycosaminoglycans (GAGs) in the glomerular extracellular matrix. This evidence has prompted interest in using exogenous GAGs, and specifically sulodexide, in DN treatment. In this uncontrolled multicenter study, diabetic patients with albumin excretion rate (AER) ≥30 mg/24 hours were treated with oral sulodexide 50 mg/day for 6 months, while receiving concomitant medication as required. Two hundred thirty-seven patients (54% males and 46% females, mean age 55 years, mean diabetes duration 11 years) were evaluated; 89% had type 2 and 11% type 1 diabetes mellitus, 67% microalbuminuria and 33% macroalbuminuria. AER was significantly and progressively reduced during sulodexide treatment (p<0.0001): geometric mean after 3 and 6 months was 63.7% (95% confidence interval [95% CI], 59.3%-68.4%) and 42.7% (95% CI, 37.8%-48.2%) of baseline, respectively. The reduction was similar in type 1 and type 2 diabetes and was slightly greater in macroalbuminuric than in microalbuminuric patients. Blood pressure was slightly lowered, while fasting glucose and glycosylated hemoglobin were moderately reduced. Adverse effects were observed in 5.5% of patients, including gastrointestinal in 3.8%. Sulodexide therapy was shown to reduce AER in patients with DN.
Incipient and overt diabetic nephropathy in African Americans with NIDDM.
Dasmahapatra, A; Bale, A; Raghuwanshi, M P; Reddi, A; Byrne, W; Suarez, S; Nash, F; Varagiannis, E; Skurnick, J H
1994-04-01
OBJECTIVE--To determine the prevalence of incipient and overt nephropathy in African-American subjects with non-insulin-dependent diabetes mellitus (NIDDM) attending a hospital clinic. Contributory factors, such as blood pressure (BP), duration and age at onset of diabetes, hyperglycemia, hyperlipidemia, and body mass index (BMI) also were evaluated. RESEARCH DESIGN AND METHODS--We recruited 116 African-American subjects with NIDDM for this cross-sectional, descriptive, and analytical study. BP, BMI, 24-h urine albumin excretion, creatinine clearance, serum creatinine, lipids, and GHb levels were measured. Albumin excretion rate (AER) was calculated, and subjects were divided into three groups: no nephropathy (AER < 20 micrograms/min), incipient nephropathy (AER 20-200 micrograms/min), and overt nephropathy (AER > 200 micrograms/min). Frequency of hypertension and nephropathy was analyzed by χ2 testing, group means were compared using analysis of variance, and linear correlations were performed between AER and other variables. Multiple regression analysis was used to examine the association of these variables while controlling for the effects of other variables. RESULTS--Increased AER was present in 50% of our subjects; 31% had incipient and 19% had overt nephropathy. Hypertension was present in 72.4%; nephropathy, particularly overt nephropathy, was significantly more prevalent in the hypertensive group. Mean BP and diastolic blood pressure (dBP) were higher in the groups with incipient and overt nephropathy, and systolic blood pressure (sBP) was increased in overt nephropathy. Men with either form of nephropathy had higher sBP, dBP, and mean BP, whereas only women with overt nephropathy had increased sBP and mean BP. Subjects with incipient or overt nephropathy had a longer duration of diabetes, and those with overt nephropathy had a younger age at onset of diabetes. By multiple regression analysis, AER correlated with younger age at diabetes onset, but not with diabetes duration. No correlation with age, lipid levels, or GHb was noted. BMI correlated with AER. CONCLUSIONS--Incipient and overt nephropathy were observed frequently in these African-American subjects with NIDDM. Albuminuria correlated with BP, younger age at diabetes onset, and BMI. Association of albuminuria and increased cardiovascular mortality may place 50% of inner-city African-American patients with NIDDM at risk for developing cardiovascular complications.
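The three-group definition is a simple threshold rule on the albumin excretion rate, which reads directly as code (a sketch of the stated cut-offs only):

```python
def nephropathy_group(aer_ug_min):
    """Classify by albumin excretion rate using the study's cut-offs
    (micrograms of albumin per minute)."""
    if aer_ug_min < 20:
        return "no nephropathy"
    if aer_ug_min <= 200:
        return "incipient nephropathy"
    return "overt nephropathy"

print(nephropathy_group(35))   # -> incipient nephropathy
```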
The Roots of Individuality: Brain Waves and Perception.
ERIC Educational Resources Information Center
Rosenfeld, Anne H.; Rosenfeld, Sam A.
Described is research using computer techniques to study the brain's perceptual systems in both normal and pathological groups, including hyperactive children (6-12 years old). Reviewed are the early studies of A. Petrie, M. Buchsbaum, and J. Silverman using the electroencephalograph to obtain AER (average evoked response) records of…
Ekinci, Elif I; Thomas, Georgina; Thomas, David; Johnson, Cameron; Macisaac, Richard J; Houlihan, Christine A; Finch, Sue; Panagiotopoulos, Sianna; O'Callaghan, Chris; Jerums, George
2009-08-01
OBJECTIVE This prospective randomized double-blind placebo-controlled crossover study examined the effects of sodium chloride (NaCl) supplementation on the antialbuminuric action of telmisartan with or without hydrochlorothiazide (HCT) in hypertensive patients with type 2 diabetes, increased albumin excretion rate (AER), and habitual low dietary salt intake (LDS; <100 mmol sodium/24 h on two of three consecutive occasions) or high dietary salt intake (HDS; >200 mmol sodium/24 h on two of three consecutive occasions). RESEARCH DESIGN AND METHODS Following a washout period, subjects (n = 32) received 40 mg/day telmisartan for 4 weeks followed by 40 mg telmisartan plus 12.5 mg/day HCT for 4 weeks. For the last 2 weeks of each treatment period, patients received either 100 mmol/day NaCl or placebo capsules. After a second washout, the regimen was repeated with supplements in reverse order. AER and ambulatory blood pressure were measured at weeks 0, 4, 8, 14, 18, and 22. RESULTS In LDS, NaCl supplementation reduced the anti-albuminuric effect of telmisartan with or without HCT from 42.3% (placebo) to 9.5% (P = 0.004). By contrast, in HDS, NaCl supplementation did not reduce the AER response to telmisartan with or without HCT (placebo 30.9%, NaCl 28.1%, P = 0.7). Changes in AER were independent of changes in blood pressure. CONCLUSIONS The AER response to telmisartan with or without HCT under habitual low salt intake can be blunted by NaCl supplementation. By contrast, when there is already a suppressed renin angiotensin aldosterone system under habitual high dietary salt intake, the additional NaCl does not alter the AER response.
Darby, Stephen E; Leyland, Julian; Kummu, Matti; Räsänen, Timo A; Lauri, Hannu
2013-04-01
We evaluate links between climate and simulated river bank erosion for one of the world's largest rivers, the Mekong. We employ a process-based model to reconstruct multidecadal time series of bank erosion at study sites within the Mekong's two main hydrological response zones, defining a new parameter, accumulated excess runoff (AER), pertinent to bank erosion. We employ a hydrological model to isolate how snowmelt, tropical storms and monsoon precipitation each contribute to AER and thus modeled bank erosion. Our results show that melt (23.9% at the upstream study site, declining to 11.1% downstream) and tropical cyclones (17.5% and 26.4% at the upstream and downstream sites, respectively) both force significant fractions of bank erosion on the Mekong. We also show (i) small, but significant, declines in AER and hence assumed bank erosion during the 20th century; and (ii) that significant correlations exist between AER and the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). Of these modes of climate variability, we find that IOD events exert a greater control on simulated bank erosion than ENSO events; but the influences of both ENSO and IOD when averaged over several decades are found to be relatively weak. However, importantly, relationships between ENSO, IOD, and AER and hence inferred river bank erosion are not time invariant. Specifically, we show that there is an intense and prolonged epoch of strong coherence between ENSO and AER from the early 1980s to present, such that in recent decades derived Mekong River bank erosion has been more strongly affected by ENSO. PMID:23926362
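The abstract does not spell out how AER is computed; one plausible reading is runoff accumulated above an erosion-threshold discharge. A hedged Python sketch under that assumption (the threshold and the daily series are synthetic):

    import numpy as np

    def accumulated_excess_runoff(daily_runoff, threshold):
        # Sum of daily runoff in excess of an assumed erosion threshold.
        # This definition is an assumption made for illustration, not the
        # authors' published formulation of AER.
        excess = np.clip(np.asarray(daily_runoff) - threshold, 0.0, None)
        return float(excess.sum())

    rng = np.random.default_rng(0)
    runoff = rng.gamma(shape=2.0, scale=5.0, size=365)  # synthetic daily series
    print(accumulated_excess_runoff(runoff, threshold=15.0))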
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
Res-E Support Policies in the Baltic States: Electricity Price Aspect (Part II)
NASA Astrophysics Data System (ADS)
Bobinaite, V.; Priedite, I.
2015-04-01
Increasing volumes of electricity derived from renewable energy sources (RES-E) affect the electricity market prices and the prices for final electricity consumers in the Baltic States. The results of a multivariate regression analysis show that in 2013 the RES-E contributed to decreasing the electricity market prices in the Baltic States. However, the final electricity consumers pay for the promotion of RES-E through the approved RES-E component, which has a tendency to increase. It is estimated that in 2013 the net benefits from the wind electricity promotion were achieved in Lithuania and Latvia, while in Estonia a net cost was incurred. This suggests that the economic efficiency of the wind electricity support scheme based on the application of feed-in tariffs was higher than that based on the feed-in premium. [Translated from the Latvian abstract:] The paper analyses how increasing electricity generation from renewable energy sources (RES-E) affects the electricity market price and the final price paid by electricity consumers in the Baltic States. The results of a multivariate regression analysis revealed that in 2013 RES-E may have reduced electricity market prices in the Baltic States. It should be noted, however, that the final consumer price includes the RES-E support component, which tends to increase. It is estimated that wind electricity generation yielded a net profit in Latvia and Lithuania, whereas in Estonia it only covered its costs. This indicates that the wind electricity support scheme based on the mandatory procurement (feed-in tariff) principle has a higher economic efficiency than the support scheme based on a premium, paid within the mandatory procurement framework, for electricity generated from RES.
NASA Astrophysics Data System (ADS)
Meng, Qing Yu; Spector, Dalia; Colome, Steven; Turpin, Barbara
2009-12-01
Effects of physical/environmental factors on fine particle (PM2.5) exposure, outdoor-to-indoor transport and air exchange rate (AER) were examined. The fraction of ambient PM2.5 found indoors (F_INF) and the fraction to which people are exposed (α) modify personal exposure to ambient PM2.5. Because F_INF, α, and AER are infrequently measured, some have used air conditioning (AC) as a modifier of ambient PM2.5 exposure. We found no single variable that was a good predictor of AER. About 50% and 40% of the variation in F_INF and α, respectively, was explained by AER and other activity variables. AER alone explained 36% and 24% of the variations in F_INF and α, respectively. Each other predictor, including Central AC Operation, accounted for less than 4% of the variation. This highlights the importance of AER measurements to predict F_INF and α. Evidence presented suggests that outdoor temperature and home ventilation features affect particle losses as well as AER, and the effects differ. Total personal exposures to PM2.5 mass/species were reconstructed using personal activity and microenvironmental methods, and compared to direct personal measurement. Outdoor concentration was the dominant predictor of (partial R² = 30-70%) and the largest contributor to (20-90%) indoor and personal exposures for PM2.5 mass and most species. Several activities had a dramatic impact on personal PM2.5 mass/species exposures for the few study participants exposed to or engaged in them, including smoking and woodworking. Incorporating personal activities (in addition to outdoor PM2.5) improved the predictive power of the personal activity model for PM2.5 mass/species; more detailed information about personal activities and indoor sources is needed for further improvement (especially for Ca, K, OC). Adequate accounting for particle penetration and persistence indoors and for exposure to non-ambient sources could potentially increase the power of epidemiological analyses linking health effects to particulate exposures.
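The strong dependence of F_INF on AER reported above follows from the standard single-compartment mass balance, F_INF = P·AER/(AER + k). A short Python illustration (the penetration and deposition values are generic assumptions, not the study's estimates):

    def infiltration_factor(aer, penetration=0.8, deposition=0.4):
        # Steady-state fraction of ambient PM2.5 found indoors:
        # F_INF = P * AER / (AER + k), with AER and deposition rate k in 1/h.
        return penetration * aer / (aer + deposition)

    for aer in (0.25, 0.5, 1.0, 2.0):
        print(f"AER = {aer:.2f} 1/h -> F_INF = {infiltration_factor(aer):.2f}")

Doubling the AER from 0.5 to 1.0 1/h raises F_INF from about 0.44 to 0.57 under these assumptions, which is why AER alone carries so much of the explained variance.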
Tao, Ye; Zhang, Yuan Ming
2012-05-01
Leaf hair points (LHPs) are important morphological structures in many desiccation-tolerant mosses, but study of their functions has been limited. A desert moss, Syntrichia caninervis, was chosen for examination of the ecological effects of LHPs on water retention and dew formation at individual and population (patch) levels. Although LHPs were only 4.77% of shoot weight, they were able to increase absolute water content (AWC) by 24.87%. The AWC of samples with LHPs was always greater than for those without LHPs during dehydration. The accumulative evaporation ratio (AER) showed an opposite trend. AWC, evaporation ratio and AER of shoots with LHPs took 20 min longer to reach a completely dehydrated state than shoots without LHPs. At the population level, dew formation on moss crusts with LHPs was faster than on crusts without LHPs, and the former had higher daily and total dew amounts. LHPs were able to improve dew amounts on crusts by 10.26%. Following three simulated rainfall events (1, 3 and 6 mm), AERs from crusts with LHPs were always lower than from crusts without LHPs. LHPs can therefore significantly delay and reduce evaporation. We confirm that LHPs are important desiccation-tolerant features of S. caninervis at both individual and population levels. LHPs greatly aid moss crusts in adapting to arid conditions.
Shirazi, Elham; Pennell, Kelly G
2017-12-13
Vapor intrusion (VI) exposure risks are difficult to characterize due to the role of atmospheric, building and subsurface processes. This study presents a three-dimensional VI model that extends the common subsurface fate and transport equations to incorporate wind and stack effects on indoor air pressure, building air exchange rate (AER) and indoor contaminant concentration to improve VI exposure risk estimates. The model incorporates three modeling programs: (1) COMSOL Multiphysics to model subsurface fate and transport processes, (2) CFD0 to model atmospheric air flow around the building, and (3) CONTAM to model indoor air quality. The combined VI model predicts AER values, zonal indoor air pressures and zonal indoor air contaminant concentrations as a function of wind speed, wind direction and outdoor and indoor temperature. Steady state modeling results for a single-story building with a basement demonstrate that wind speed, wind direction and opening locations in a building play important roles in changing the AER, indoor air pressure, and indoor air contaminant concentration. Calculated indoor air pressures ranged from approximately -10 Pa to +4 Pa depending on weather conditions and building characteristics. AER values, mass entry rates and indoor air concentrations vary depending on weather conditions and building characteristics. The presented modeling approach can be used to investigate the relationship between building features, AER, building pressures, soil gas concentrations, indoor air concentrations and VI exposure risks.
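The wind and stack effects that the model couples to AER can be approximated by textbook driving-pressure formulas. A back-of-envelope Python sketch (the density, heights, temperatures, and pressure coefficient are assumed values; the actual model resolves these within CFD0/CONTAM):

    RHO_AIR = 1.2   # kg/m3, approximate air density
    G = 9.81        # m/s2

    def stack_pressure(height_m, t_in_k, t_out_k):
        # Stack-effect pressure difference over a building height (Pa):
        # dP = rho * g * h * (T_in - T_out) / T_in
        return RHO_AIR * G * height_m * (t_in_k - t_out_k) / t_in_k

    def wind_pressure(wind_speed_ms, cp=0.6):
        # Wind pressure on a facade (Pa), with pressure coefficient cp
        return 0.5 * cp * RHO_AIR * wind_speed_ms ** 2

    print(stack_pressure(6.0, 293.15, 263.15))  # ~7 Pa on a cold day
    print(wind_pressure(5.0))                   # ~9 Pa on the windward side

Both terms are of the same order as the -10 to +4 Pa indoor pressures reported above, which is why wind speed and direction matter so much for VI risk.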
Verification of MCNP6.2 for Nuclear Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-05-10
Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety applications (NCS). MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.
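Case by case, such comparisons reduce to checking that two Monte Carlo eigenvalue estimates agree within their combined statistical uncertainty. A minimal Python sketch of that acceptance check (the sample values and the 2-3 sigma band are illustrative conventions, not the report's stated criteria):

    def keff_z_score(k1, sigma1, k2, sigma2):
        # Difference between two k-eff results in combined standard deviations;
        # |z| below roughly 2-3 is a common verification acceptance band.
        return (k1 - k2) / (sigma1 ** 2 + sigma2 ** 2) ** 0.5

    # Hypothetical results for one benchmark case from two code versions
    z = keff_z_score(1.00021, 0.00008, 1.00035, 0.00009)
    print(f"z = {z:.2f}")  # about -1.2, i.e., statistically consistent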
NASA Technical Reports Server (NTRS)
Barnett, Henry C; Hibbard, Robert R
1953-01-01
Since the release of the first NACA publication on fuel characteristics pertinent to the design of aircraft fuel systems (NACA-RM-E53A21), additional information has become available on MIL-F7914(AER) grade JP-5 fuel and several of the current grades of fuel oils. In order to make this information available to fuel-system designers as quickly as possible, the present report has been prepared as a supplement to NACA-RM-E53A21. Although JP-5 fuel is of greater interest in current fuel-system problems than the fuel oils, the available data are not as extensive. It is believed, however, that the limited data on JP-5 are sufficient to indicate the variations in stocks that the designer must consider under a given fuel specification. The methods used in the preparation and extrapolation of data presented in the tables and figures of this supplement are the same as those used in NACA-RM-E53A21.
Review of Air Exchange Rate Models for Air Pollution Exposure Assessments
A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings, where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pol...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-19
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] AER Energy Resources, Inc.; Alto Group Holdings, Inc.; Bizrocket.Com Inc.; Fox Petroleum, Inc.; Geopulse Explorations Inc.; Global Technologies... accuracy of press releases concerning the company's revenues. 4. Fox Petroleum, Inc. is a Nevada...
Webb, D J; Newman, D J; Chaturvedi, N; Fuller, J H
1996-03-01
In IDDM, microalbuminuria (urinary albumin excretion rate (AER) of 20-200 micrograms/min) is a predictor of persistent proteinuria and diabetic nephropathy. Early intervention may prevent or reduce the rate of progression of renal complications. The Micral-Test strip can be used to establish a semi-quantitative estimate of AER. We assessed the field performance of the Micral-Test strip in detecting microalbuminuria in the EUCLID study, a Europe-wide, 18-centre study of 530 IDDM participants aged 20 to 59 years. People with macroalbuminuria were excluded. On entry, all participants had albumin concentrations from two overnight urine collections measured by a central laboratory, and the corresponding Micral-Test performed on the two collections locally. A cut-off of ≥ 20 mg/l albumin from the first Micral-Test, to detect a centrally measured albumin concentration ≥ 20 mg/l, yielded 29 (5.8%) false negative results and 58 (11.6%) false positive results (sensitivity 70%, specificity 87%). The mean AER, from two collections, was compared with the corresponding 'pooled' Micral-Test results (mean of the two readings). Receiver Operating Characteristic (ROC) curves were used to assess whether there was a suitable 'pooled' Micral-Test result for screening for microalbuminuria. A 'pooled' Micral-Test result (≥ 15 mg/l) was used to detect mean AER ≥ 20 micrograms/min (sensitivity 78%, specificity 77%). This 'pooled' cut-off had already been used for screening onto the study and led to an over-estimate (154 vs. 77) of the true number of microalbuminuric participants on the study. In conclusion, our findings suggest that the Micral-Test strip is not an effective screening tool for microalbuminuria; using the 'pooled' result from two measurements did not improve the sensitivity of the test.
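For reference, the headline sensitivity and specificity reduce to simple ratios of the 2x2 screening table. A Python sketch with counts reconstructed approximately from the abstract's percentages (treat them as illustrative, not exact):

    def screening_performance(tp, fp, fn, tn):
        # Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
        return tp / (tp + fn), tn / (tn + fp)

    # 29 false negatives and 58 false positives were reported, with
    # sensitivity ~70% and specificity ~87%; tp and tn below are
    # back-calculated guesses consistent with those figures.
    sens, spec = screening_performance(tp=68, fp=58, fn=29, tn=388)
    print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")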
Norsker, Filippa Nyboe; Rechnitzer, Catherine; Cederkvist, Luise; Tryggvadottir, Laufey; Madanat-Harjuoja, Laura-Maria; Øra, Ingrid; Thorarinsdottir, Halldora K; Vettenranta, Kim; Bautz, Andrea; Schrøder, Henrik; Hasle, Henrik; Winther, Jeanette Falck
2018-06-21
Because of the rarity of neuroblastoma and poor survival until the 1990s, information on late effects in neuroblastoma survivors is sparse. We comprehensively reviewed the long-term risk for somatic disease in neuroblastoma survivors. We identified 721 5-year survivors of neuroblastoma in Nordic population-based cancer registries and identified late effects in national hospital registries covering the period 1977-2012. Detailed treatment information was available for 46% of the survivors. The disease-specific rates of hospitalization of survivors and of 152,231 randomly selected population comparisons were used to calculate standardized hospitalization rate ratios (SHRRs) and absolute excess risks (AERs). During 5,500 person-years of follow-up, 501 5-year survivors had a first hospital contact yielding a SHRR of 2.3 (95% CI 2.1-2.6) and a corresponding AER of 52 (95% CI 44-60) per 1,000 person-years. The highest relative risks were for diseases of blood and blood-forming organs (SHRR 3.8; 95% CI 2.7-5.4), endocrine diseases (3.6 [3.1-4.2]), circulatory system diseases (3.1 [2.5-3.8]), and diseases of the nervous system (3.0 [2.6-3.3]). Approximately 60% of the excess new hospitalizations of survivors were for diseases of the nervous system, urinary system, endocrine system, and bone and soft tissue. The relative risks and AERs were highest for the survivors most intensively treated. Survivors of neuroblastoma have a highly increased long-term risk for somatic late effects in all the main disease groups as compared with background levels. Our results are useful for counseling survivors and should contribute to improving health care planning in post-therapy clinics.
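The two summary measures used here have direct arithmetic definitions: SHRR is observed over expected hospital contacts, and AER is the excess per 1,000 person-years. A small Python sketch (the expected count below is back-calculated from the reported SHRR and is only approximate):

    def shrr_and_aer(observed, expected, person_years):
        # SHRR = observed / expected first hospital contacts;
        # AER = (observed - expected) / person-years * 1000
        shrr = observed / expected
        aer = (observed - expected) / person_years * 1000
        return shrr, aer

    # Roughly matching the abstract: 501 observed contacts over 5,500
    # person-years with SHRR ~2.3 implies ~218 expected contacts.
    shrr, aer = shrr_and_aer(observed=501, expected=218, person_years=5500)
    print(f"SHRR = {shrr:.1f}, AER = {aer:.0f} per 1,000 person-years")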
Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene
2011-01-01
State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a 25-50 times higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720
75 FR 66195 - Schedules of Controlled Substances: Placement of Propofol Into Schedule IV
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... published abuse liability studies of propofol in humans in which the reinforcement and reward effects have... reporting by the subject feeling ``high,'' relative to the placebo. The motivation for abuse of propofol is... Reporting System (AERS) DataMart database). In the AERS database, there are reports of propofol diversion...
Willemse, Elias J; Joubert, Johan W
2016-09-01
In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP), and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, and which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].
Linking In-Vehicle Ultrafine Particle Exposures to On-Road Concentrations
Hudda, Neelakshi; Eckel, Sandrah P.; Knibbs, Luke D.; Sioutas, Constantinos; Delfino, Ralph J.; Fruin, Scott A.
2013-01-01
For traffic-related pollutants like ultrafine particles (UFP, Dp < 100 nm), a significant fraction of overall exposure occurs within or close to the transit microenvironment. Therefore, understanding exposure to these pollutants in such microenvironments is crucial to accurately assessing overall UFP exposure. The aim of this study was to develop models for predicting in-cabin UFP concentrations if roadway concentrations are known, taking into account vehicle characteristics, ventilation settings, driving conditions and air exchange rates (AER). Particle concentrations and AER were measured in 43 and 73 vehicles, respectively, under various ventilation settings and driving speeds. Multiple linear regression (MLR) and generalized estimating equation (GEE) regression models were used to identify and quantify the factors that determine inside-to-outside (I/O) UFP ratios and AERs across a full range of vehicle types and ages. AER was the most significant determinant of UFP I/O ratios, and was strongly influenced by ventilation setting (recirculation or outside air intake). Inclusion of ventilation fan speed, vehicle age or mileage, and driving speed explained greater than 79% of the variability in measured UFP I/O ratios. PMID:23888122
Yang, Na; Ren, Yueping; Li, Xiufen; Wang, Xinhua
2017-06-01
Anolyte acidification is a drawback restricting the electricity generation performance of buffer-free microbial fuel cells (MFC). In this paper, a small amount of alkali-treated anion exchange resin (AER) was placed in front of the anode in the KCl-mediated single-chamber MFC to slowly release hydroxyl ions (OH⁻) and neutralize the H⁺ ions that are generated by the anodic reaction in two running cycles. This short-term alkaline intervention to the KCl anolyte promoted the proliferation of electroactive Geobacter sp. and enhanced the self-buffering capacity of the KCl-AER-MFC. The pH of the KCl anolyte in the KCl-AER-MFC increased and became more stable in each running cycle compared with that of the KCl-MFC after the short-term alkaline intervention. The maximum power density (P_max) of the KCl-AER-MFC increased from 307.5 mW·m⁻² to 542.8 mW·m⁻², slightly lower than that of the PBS-MFC (640.7 mW·m⁻²). The coulombic efficiency (CE) of the KCl-AER-MFC increased from 54.1% to 61.2%, which is already very close to that of the PBS-MFC (61.9%). The results in this paper indicate that short-term alkaline intervention to the anolyte is an effective strategy to further promote the performance of buffer-free MFCs.
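The two performance figures reported here, maximum power density and coulombic efficiency, follow standard MFC definitions. A hedged Python sketch (parameter values are hypothetical; the paper's exact accounting may differ):

    F = 96485.0  # Faraday constant, C per mol of electrons

    def power_density(voltage_v, r_ext_ohm, anode_area_m2):
        # Areal power density P = V^2 / (R_ext * A), in W/m2
        return voltage_v ** 2 / (r_ext_ohm * anode_area_m2)

    def coulombic_efficiency(charge_c, delta_cod_g_per_l, anolyte_volume_l):
        # Textbook definition for a COD-fed MFC: CE = 8*Q / (F * V_an * dCOD),
        # where 8 g O2 per mol e- comes from 32 g/mol O2 over 4 electrons.
        return 8.0 * charge_c / (F * anolyte_volume_l * delta_cod_g_per_l)

    # Hypothetical cycle: 0.45 V across a 100-ohm load, 25 cm2 anode
    print(power_density(0.45, 100.0, 25e-4))        # 0.81 W/m2 = 810 mW/m2
    print(coulombic_efficiency(120.0, 0.4, 0.028))  # ~0.89 (dimensionless)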
Abu-Elala, N; Abdelsalam, M; Marouf, Sh; Setta, A
2015-11-01
The nucleotide sequence analysis of the gyrB gene indicated that the fish Aeromonas spp. isolates could be identified as Aeromonas hydrophila and Aeromonas veronii biovar sobria, whereas the chicken Aeromonas spp. isolates were identified as Aeromonas caviae. PCR data revealed the presence of Lip, Ser, Aer, ACT and CAI genes in fish Aer. hydrophila isolates, ACT, CAI and Aer genes in fish Aer. veronii bv sobria isolates, and Ser and CAI genes in chicken Aer. caviae isolates. All chicken isolates showed variable resistance against all 12 tested antibiotic discs except for cefotaxime, nitrofurantoin, chloramphenicol and ciprofloxacin; only one isolate showed resistance to chloramphenicol and ciprofloxacin. Fish Aeromonads were sensitive to all tested antibiotic discs except amoxicillin, ampicillin-sulbactam and streptomycin. Many integrated fish farms depend on the application of poultry droppings/litter, which serves as direct feed for the fish and also acts as a pond fertilizer. The application of untreated poultry manure exerts additional pressure on the microbial world of the fish's environment. Aeromonas species are among the common bacteria that infect both fish and chicken. The aim of this study was to compare the phenotypic traits and genetic relatedness of aeromonads isolated from two diverse hosts (terrestrial and aquatic), and to investigate whether untreated manure possibly enhances Aeromonas dissemination among cohabitant fish, with special reference to virulence genes and antibiotic resistance traits.
Johnson, Ted; Myers, Jeffrey; Kelly, Thomas; Wisbith, Anthony; Ollison, Will
2004-01-01
A pilot study was conducted using an occupied, single-family test house in Columbus, OH, to determine whether a script-based protocol could be used to obtain data useful in identifying the key factors affecting air-exchange rate (AER) and the relationship between indoor and outdoor concentrations of selected traffic-related air pollutants. The test script called for hourly changes to elements of the test house considered likely to influence air flow and AER, including the position (open or closed) of each window and door and the operation (on/off) of the furnace, air conditioner, and ceiling fans. The script was implemented over a 3-day period (January 30-February 1, 2002) during which technicians collected hourly-average data for AER, indoor, and outdoor air concentrations for six pollutants (benzene, formaldehyde (HCHO), polycyclic aromatic hydrocarbons (PAH), carbon monoxide (CO), nitric oxide (NO), and nitrogen oxides (NOx)), and selected meteorological variables. Consistent with expectations, AER tended to increase with the number of open exterior windows and doors. The 39 AER values measured during the study when all exterior doors and windows were closed varied from 0.36 to 2.29 h⁻¹ with a geometric mean (GM) of 0.77 h⁻¹ and a geometric standard deviation (GSD) of 1.435. The 27 AER values measured when at least one exterior door or window was opened varied from 0.50 to 15.8 h⁻¹ with a GM of 1.98 h⁻¹ and a GSD of 1.902. AER was also affected by temperature and wind speed, most noticeably when exterior windows and doors were closed. Results of a series of stepwise linear regression analyses suggest that (1) outdoor pollutant concentration and (2) indoor pollutant concentration during the preceding hour were the "variables of choice" for predicting indoor pollutant concentration in the test house under the conditions of this study. Depending on the pollutant and ventilation conditions, one or more of the following variables produced a small, but significant increase in the explained variance (R²-value) of the regression equations: AER, number and location of apertures, wind speed, air-conditioning operation, indoor temperature, outdoor temperature, and relative humidity. The indoor concentrations of CO, PAH, NO, and NOx were highly correlated with the corresponding outdoor concentrations. The indoor benzene concentrations showed only moderate correlation with outdoor benzene levels, possibly due to a weak indoor source. Indoor formaldehyde concentrations always exceeded outdoor levels, and the correlation between indoor and outdoor concentrations was not statistically significant, indicating the presence of a strong indoor source.
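The "variables of choice" finding corresponds to a lag-1 regression of indoor concentration on outdoor concentration plus the previous hour's indoor value. A Python sketch of that model form on synthetic data (not the study's stepwise procedure or data):

    import numpy as np

    def fit_indoor_model(indoor, outdoor):
        # Regress hourly indoor concentration on outdoor concentration and
        # the previous hour's indoor concentration; returns (coeffs, R^2).
        y = indoor[1:]
        X = np.column_stack([np.ones(len(y)), outdoor[1:], indoor[:-1]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return beta, 1 - resid.var() / y.var()

    rng = np.random.default_rng(1)
    out = rng.gamma(2.0, 1.0, size=72)            # synthetic outdoor series
    ind = np.empty_like(out)
    ind[0] = out[0]
    for t in range(1, len(out)):                  # simple indoor persistence
        ind[t] = 0.5 * ind[t - 1] + 0.4 * out[t] + rng.normal(0, 0.05)
    beta, r2 = fit_indoor_model(ind, out)
    print(np.round(beta, 2), round(r2, 2))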
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
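Weak-scaling results of the kind summarized above are usually reported as an efficiency relative to the smallest run. A trivial Python helper (the timings are made up for illustration):

    def weak_scaling_efficiency(timings):
        # E(N) = T(base) / T(N) for a problem whose size grows with the
        # core count; E near 1 indicates ideal weak scaling.
        t_base = timings[min(timings)]
        return {n: t_base / t for n, t in sorted(timings.items())}

    timings = {1: 100.0, 8: 104.0, 64: 112.0, 512: 131.0, 4096: 210.0}
    for n, e in weak_scaling_efficiency(timings).items():
        print(f"{n:>5} cores: efficiency {e:.2f}")

Under these invented timings, efficiency holds near 0.9 up to about 10³ cores and then degrades, mirroring the behavior described in the abstract.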
ERIC Educational Resources Information Center
Vicknair, David; Wright, Jeffrey
2015-01-01
Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable, and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…
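The distinction the article draws is captured by one conversion formula: a nominal APR compounded m times per year implies AER = (1 + APR/m)^m - 1. In Python:

    def annual_effective_rate(apr, periods_per_year):
        # AER = (1 + APR/m)**m - 1 for nominal rate APR compounded m times/yr
        return (1 + apr / periods_per_year) ** periods_per_year - 1

    # A 12% APR note compounded monthly actually costs 12.68% per year,
    # which is precisely the distinction the textbooks often blur.
    print(f"{annual_effective_rate(0.12, 12):.4%}")  # -> 12.6825%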
Zhang, Jing; Lipp, Ottmar V; Hu, Ping
2017-01-01
The current study investigated the interactive effects of individual differences in automatic emotion regulation (AER) and primed emotion regulation strategy on skin conductance level (SCL) and heart rate during provoked anger. The study was a 2 × 2 [AER tendency (expression vs. control) × priming (expression vs. control)] between-subjects design. Participants were assigned to two groups according to their performance on an emotion regulation IAT (differentiating automatic emotion control tendency and automatic emotion expression tendency). Then participants of the two groups were randomly assigned to two emotion regulation priming conditions (emotion control priming or emotion expression priming). Anger was provoked by blaming participants for slow performance during a subsequent backward subtraction task. During anger provocation, SCL of individuals with automatic emotion control tendencies in the control priming condition was lower than of those with automatic emotion control tendencies in the expression priming condition. However, SCL of individuals with automatic emotion expression tendencies did not differ between the automatic emotion control priming and automatic emotion expression priming conditions. Heart rate during anger provocation was higher in individuals with automatic emotion expression tendencies than in individuals with automatic emotion control tendencies regardless of priming condition. This pattern indicates an interactive effect of individual differences in AER and emotion regulation priming on SCL, which is an index of emotional arousal. Heart rate was only sensitive to the individual differences in AER and did not reflect this interaction. This finding has implications for clinical studies of the use of emotion regulation strategy training, suggesting that different practices are optimal for individuals who differ in AER tendencies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
...; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land Resources, LLC; Notice... 24, 2012, AER NY-Gen, LLC (transferor), Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources.... Cherry, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... 9690-106] AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek... Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC (transferees) filed an.... Paul Ho, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources...
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults
ERIC Educational Resources Information Center
Rubin, Allen; Yu, Miao
2017-01-01
This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…
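One common convention for the within-group (pre-post) effect size is the standardized mean change, d = (M_pre - M_post) / SD_pre. A Python sketch with hypothetical depression scores (benchmark papers differ on the choice of standardizer, so treat this as one convention among several):

    from statistics import mean, stdev

    def within_group_effect_size(pre, post):
        # Positive d = symptom reduction when lower scores are better
        return (mean(pre) - mean(post)) / stdev(pre)

    pre = [28, 31, 25, 34, 29, 27, 33, 30]    # hypothetical scale scores
    post = [19, 24, 18, 26, 20, 17, 25, 21]
    print(round(within_group_effect_size(pre, post), 2))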
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra; Shia, Run-Lie; Sze, N. D.
1998-01-01
The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER 2-dimensional chemistry-transport model, the AER 2-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. We will continue developing our three-wave model so that we can help NASA determine the strength and weakness of the next generation assessment models.
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
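The sensitivity to time integrators that the benchmark probes can be felt even in a toy phase field model. A minimal explicit-Euler step for the 1D Allen-Cahn equation in Python (a sketch for intuition only, not the CHiMaD/NIST problem specification):

    import numpy as np

    def allen_cahn_step(phi, dx, dt, kappa=0.01):
        # Forward-Euler step of d(phi)/dt = -(phi^3 - phi) + kappa*phi_xx
        # on a periodic domain; stable only for dt < dx^2 / (2*kappa).
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx ** 2
        return phi + dt * (-(phi ** 3 - phi) + kappa * lap)

    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    phi = 0.1 * np.sin(x)          # small initial perturbation
    dx = x[1] - x[0]
    for _ in range(1000):
        phi = allen_cahn_step(phi, dx, dt=0.01)
    print(phi.min(), phi.max())    # saturates toward the phi = -1/+1 wells

Swapping the explicit step for an implicit or adaptive integrator changes both the admissible time step and the accumulated error, which is exactly what the dendritic-growth comparison is designed to expose.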
Benchmarking--Measuring and Comparing for Continuous Improvement.
ERIC Educational Resources Information Center
Henczel, Sue
2002-01-01
Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
Lutale, Janet Joy Kachuchuru; Thordarson, Hrafnkell; Abbas, Zulfiqarali Gulam; Vetvik, Kåre
2007-01-01
Background The prevalences and risk factors of microalbuminuria are not fully described among black African diabetic patients. This study aimed at determining the prevalence of microalbuminuria among African diabetes patients in Dar es Salaam, Tanzania, and relating it to socio-demographic features as well as clinical parameters. Methods Cross-sectional study of 91 Type 1 and 153 Type 2 diabetic patients. Two overnight urine samples per patient were analysed. Albumin concentration was measured by an automated immunoturbidity assay. The average albumin excretion rate (AER) was used, and patients were categorised as normoalbuminuria (AER < 20 μg/min), microalbuminuria (AER 20-200 μg/min), and macroalbuminuria (AER > 200 μg/min). Information obtained also included age, diabetes duration, sex, body mass index, blood pressure, serum total cholesterol, high-density and low-density lipoprotein cholesterol, triglycerides, serum creatinine, and glycated hemoglobin A1c. Results Overall prevalence of microalbuminuria was 10.7% and macroalbuminuria 4.9%. In Type 1 patients microalbuminuria was 12% and macroalbuminuria 1%. Among Type 2 patients, 9.8% had microalbuminuria, and 7.2% had macroalbuminuria. Type 2 patients with abnormal albumin excretion rate had significantly longer diabetes duration, 7.5 (0.2-24 yrs), than those with normal albumin excretion rate, 3 (0-25 yrs), p < 0.001. Systolic and diastolic blood pressure among Type 2 patients with abnormal albumin excretion rate were significantly higher than in those with normal albumin excretion rate (p < 0.001). No significant differences in body mass index, glycaemic control, and cholesterol levels were found among patients with normal compared with those with elevated albumin excretion rate in either Type 1 or Type 2 patients. A stepwise multiple linear regression analysis among Type 2 patients revealed AER (natural log AER) as the dependent variable to be predicted by [regression coefficient (95% confidence interval)] diabetes duration 0.090 (0.049, 0.131), p < 0.0001, systolic blood pressure 0.012 (0.003-0.021), p < 0.010, and serum creatinine 0.021 (0.012, 0.030). Conclusion The prevalence of micro- and macroalbuminuria is higher among African Type 1 patients with relatively short diabetes duration compared with prevalences among Caucasians. In Type 2 patients, the prevalence is in accordance with findings in Caucasians. The present study detects, however, a much lower prevalence than previously demonstrated in studies from sub-Saharan Africa. Abnormal AER was significantly related to diabetes duration and systolic blood pressure. PMID:17224056
Moskvin, Oleg V; Gilles-Gonzalez, Marie-Alda; Gomelsky, Mark
2010-10-01
The SCHIC domain of the B12-binding domain family present in the Rhodobacter sphaeroides AppA protein binds heme and senses oxygen. Here we show that the predicted SCHIC domain PpaA/AerR regulators also bind heme and respond to oxygen in vitro, despite their low sequence identity with AppA.
Benchmark Problems for Spacecraft Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.
2003-01-01
To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.
Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, C. K. W. (Editor); Hardin, J. C. (Editor)
1997-01-01
The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models, with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients.
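The piecewise linear regression used above can be implemented with a hinge basis and a grid search over the breakpoint. A compact Python sketch on synthetic data (one simple implementation; the authors' exact method may differ):

    import numpy as np

    def fit_piecewise(x, y, candidate_breaks):
        # Two-segment continuous fit: y ~ a + b*x + c*max(x - bp, 0)
        best = None
        for bp in candidate_breaks:
            X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = float(((y - X @ beta) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, bp, beta)
        return best  # (sse, breakpoint, coefficients)

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, 200)
    y = np.where(x < 6, 1 + 0.2 * x, 2.2 + 2.0 * (x - 6))   # true kink at 6
    y = y + rng.normal(0, 0.3, x.size)
    sse, bp, beta = fit_piecewise(x, y, np.linspace(1, 9, 81))
    print(f"estimated breakpoint: {bp:.1f}")   # close to 6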
Restelli, Michela; Lopardo, Teresa; Lo Iacono, Nadia; Garaffo, Giulia; Conte, Daniele; Rustighi, Alessandra; Napoli, Marco; Del Sal, Giannino; Perez-Morga, David; Costanzo, Antonio; Merlo, Giorgio Roberto; Guerrini, Luisa
2014-01-01
Ectrodactyly, or Split-Hand/Foot Malformation (SHFM), is a congenital condition characterized by the loss of central rays of hands and feet. The p63 and the DLX5;DLX6 transcription factors, expressed in the embryonic limb buds and ectoderm, are disease genes for these conditions. Mutations of p63 also cause the ectodermal dysplasia–ectrodactyly–cleft lip/palate (EEC) syndrome, comprising SHFM. Ectrodactyly is linked to defects of the apical ectodermal ridge (AER) of the developing limb buds. FGF8 is the key signaling molecule in this process, able to direct proximo-distal growth and patterning of the skeletal primordia of the limbs. In the limb buds of both p63 and Dlx5;Dlx6 murine models of SHFM, the AER is poorly stratified and FGF8 expression is severely reduced. We show here that the FGF8 locus is a downstream target of DLX5 and that FGF8 counteracts Pin1–ΔNp63α interaction. In vivo, lack of Pin1 leads to accumulation of the p63 protein in the embryonic limbs and ectoderm. We show also that ΔNp63α protein stability is negatively regulated by the interaction with the prolyl-isomerase Pin1, via proteasome-mediated degradation; p63 mutant proteins associated with SHFM or EEC syndromes are resistant to Pin1 action. Thus, DLX5, p63, Pin1 and FGF8 participate in the same time- and location-restricted regulatory loop essential for AER stratification, hence for normal patterning and skeletal morphogenesis of the limb buds. These results shed new light on the molecular mechanisms at the basis of the SHFM and EEC limb malformations. PMID:24569166
Tay, Jeannie; Thompson, Campbell H; Luscombe-Marsh, Natalie D; Noakes, Manny; Buckley, Jonathan D; Wittert, Gary A; Brinkworth, Grant D
2015-11-01
To compare the long-term effects of a very low carbohydrate, high-protein, low saturated fat (LC) diet with a traditional high unrefined carbohydrate, low-fat (HC) diet on markers of renal function in obese adults with type 2 diabetes (T2DM), but without overt kidney disease. One hundred fifteen adults (BMI 34.6 ± 4.3 kg/m², age 58 ± 7 years, HbA1c 7.3 ± 1.1%, 56 ± 12 mmol/mol, serum creatinine (SCr) 69 ± 15 μmol/L, glomerular filtration rate estimated by the Chronic Kidney Disease Epidemiology Collaboration formula (eGFR) 94 ± 12 mL/min/1.73 m²) were randomized to consume either an LC (14% energy as carbohydrate [CHO < 50 g/day], 28% protein [PRO], 58% fat [<10% saturated fat]) or an HC (53% CHO, 17% PRO, 30% fat [<10% saturated fat]) energy-matched, weight-loss diet combined with supervised exercise training (60 min, 3 day/wk) for 12 months. Body weight, blood pressure, and renal function, assessed by eGFR, estimated creatinine clearance (Cockcroft-Gault, Salazar-Corcoran) and albumin excretion rate (AER), were measured pre- and post-intervention. Both groups achieved similar completion rates (LC 71%, HC 65%) and reductions in weight (mean [95% CI]; -9.3 [-10.6, -8.0] kg) and blood pressure (-6 [-9, -4]/-6 [-8, -5] mmHg), P ≥ 0.18. Protein intake calculated from 24-h urinary urea was higher in the LC than the HC group (LC 120.1 ± 38.2 g/day, 1.3 g/kg/day; HC 95.8 ± 27.8 g/day, 1 g/kg/day), P < 0.001 diet effect. Changes in SCr (LC 3 [1, 5], HC 1 [-1, 3] μmol/L) and eGFR (LC -4 [-6, -2], HC -2 [-3, 0] mL/min/1.73 m²) did not differ between diets (P = 0.25). AER decreased independent of diet composition (LC -2.4 [-6, 1.2], HC -1.8 [-5.4, 1.8] mg/24 h, P = 0.24); 6 participants (LC 3, HC 3) had moderately elevated AER at baseline (30-300 mg/24 h), which normalized in 4 participants (LC 2, HC 2) after 52 weeks. Compared with a traditional HC weight-loss diet, consumption of an LC high-protein diet does not adversely affect clinical markers of renal function in obese adults with T2DM and no preexisting kidney disease.
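The Cockcroft-Gault estimate referred to above is a closed-form calculation. A Python sketch, including the mg/dL conversion needed because SCr here is reported in μmol/L (the Salazar-Corcoran variant used for obese patients is not shown):

    def cockcroft_gault(age_y, weight_kg, scr_umol_per_l, female):
        # Classic formula with SCr in mg/dL (1 mg/dL = 88.4 umol/L):
        # CrCl = (140 - age) * weight / (72 * SCr), * 0.85 if female
        scr_mg_dl = scr_umol_per_l / 88.4
        crcl = (140 - age_y) * weight_kg / (72 * scr_mg_dl)
        return 0.85 * crcl if female else crcl

    # A participant resembling the cohort means: 58 y, ~100 kg, SCr 69 umol/L
    print(f"{cockcroft_gault(58, 100, 69, female=False):.0f} mL/min")  # ~146

In obese cohorts Cockcroft-Gault tends to overestimate clearance, which is presumably why the study also reports the Salazar-Corcoran estimate.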
A proposed benchmark problem for cargo nuclear threat monitoring
NASA Astrophysics Data System (ADS)
Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.
2011-10-01
There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. × 4 in. × 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. × 16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
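A useful sanity check on any Monte Carlo result for this geometry is the narrow-beam uncollided flux through the three shields, phi = S0·exp(-Σ μ_i·t_i)/(4πr²). A Python sketch with approximate, assumed 662 keV attenuation coefficients (buildup and scatter, which the full MCNP benchmark captures, are ignored here):

    import math

    def uncollided_flux(s0, distance_cm, layers):
        # Narrow-beam estimate: attenuate through (mu [1/cm], thickness [cm])
        # layers, then spread over a sphere of radius r.
        tau = sum(mu * t for mu, t in layers)
        return s0 * math.exp(-tau) / (4 * math.pi * distance_cm ** 2)

    layers = [(1.2, 2.0),     # lead, ~1.2 1/cm at 662 keV (assumed)
              (0.20, 3.0),    # aluminum, ~0.20 1/cm (assumed)
              (0.05, 10.0)]   # plywood, ~0.05 1/cm (assumed)
    print(uncollided_flux(1e6, 100.0, layers), "photons/cm2/s")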
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.
New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE [2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.
Using AER to Improve Teacher Education
NASA Astrophysics Data System (ADS)
Ludwig, Randi R.
2013-06-01
In many ways, the astronomy education community is uniquely poised to influence pre-service and in-service teacher preparation. Astro101 courses are among those most commonly taken to satisfy general education requirements for non-science majors, of whom 9-25% are education majors (Deming & Hufnagel, 2001; Rudolph et al. 2010). In addition, the astronomy community's numerous observatories and NASA centers engage in many efforts to satisfy demand for in-service teacher professional development (PD). These efforts represent a great laboratory in which we can apply conclusions from astronomy education research (AER) studies in particular and science education research (SER) in general. Foremost, we can work to align typical Astro101 and teacher PD content coverage with the most heavily emphasized topics in the Next Generation Science Standards (http://www.nextgenscience.org/) and utilize methods of teaching those topics that have been identified as successful in AER studies. Additionally, we can work to present teacher education using methodology that has been identified by the SER community as effective for lasting learning. In this presentation, I will highlight some of the big ideas from AER and SER that may be most useful in teacher education, many of which we implement at UT Austin in the Hands-on-Science program for pre-service teacher education and in-service teacher PD.
Novel Aeruginosin-865 from Nostoc sp. as a potent anti-inflammatory agent.
Kapuścik, Aleksandra; Hrouzek, Pavel; Kuzma, Marek; Bártová, Simona; Novák, Petr; Jokela, Jouni; Pflüger, Maren; Eger, Andreas; Hundsberger, Harald; Kopecký, Jiří
2013-11-25
Aeruginosin-865 (Aer-865), isolated from the terrestrial cyanobacterium Nostoc sp. Lukešová 30/93, is the first aeruginosin-type peptide containing both a fatty acid and a carbohydrate moiety, and the first aeruginosin to be found in the genus Nostoc. Mass spectrometry, chemical and spectroscopic analysis, as well as one- and two-dimensional NMR and chiral HPLC analysis of Marfey derivatives, were applied to determine the peptidic sequence: D-Hpla, D-Leu, 5-OH-Choi, Agma, with hexanoic and mannopyranosyl uronic acid moieties linked to Choi. We used an AlphaLISA assay to measure the levels of the proinflammatory mediators IL-8 and ICAM-1 in hTNF-α-stimulated HLMVECs. Aer-865 significantly reduced both, with EC50 values of (3.5±1.5) μg mL(-1) ((4.0±1.7) μM) and (50.0±13.4) μg mL(-1) ((57.8±15.5) μM), respectively. Confocal laser scanning microscopy revealed that the anti-inflammatory effect of Aer-865 was directly associated with inhibition of NF-κB translocation to the nucleus. Moreover, Aer-865 did not show any cytotoxic effect. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Vilar, Santiago; Harpaz, Rave; Chase, Herbert S; Costanzi, Stefano; Rabadan, Raul
2011-01-01
Background: Adverse drug events (ADE) cause considerable harm to patients, and consequently their detection is critical for patient safety. The US Food and Drug Administration maintains an adverse event reporting system (AERS) to facilitate the detection of ADE in drugs. Various data mining approaches have been developed that use AERS to detect signals identifying associations between drugs and ADE. The signals must then be monitored further by domain experts, which is a time-consuming task. Objective: To develop a new methodology that combines existing data mining algorithms with chemical information by analysis of molecular fingerprints to enhance initial ADE signals generated from AERS, and to provide a decision support mechanism to facilitate the identification of novel adverse events. Results: The method achieved a significant improvement in precision in identifying known ADE, and a more than twofold signal enhancement when applied to the ADE rhabdomyolysis. The simplicity of the method assists in highlighting the etiology of the ADE by identifying structurally similar drugs. A set of drugs with strong evidence from both AERS and molecular fingerprint-based modeling is constructed for further analysis. Conclusion: The results demonstrate that the proposed methodology could be used as a pharmacovigilance decision support tool to facilitate ADE detection. PMID:21946238
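The fingerprint step above rests on a standard structural similarity measure (Tanimoto). A minimal sketch, with hypothetical on-bit sets standing in for real 2-D fingerprints; neither the bit assignments nor the drug pairing reproduce the paper's data:

    def tanimoto(fp_a, fp_b):
        """Tanimoto similarity between fingerprints given as sets of on-bits."""
        shared = len(fp_a & fp_b)
        return shared / (len(fp_a) + len(fp_b) - shared)

    # Hypothetical fingerprints: a candidate drug vs a known rhabdomyolysis drug
    candidate = {3, 17, 42, 101, 255}
    known_ade = {3, 17, 99, 101, 300}
    print(round(tanimoto(candidate, known_ade), 2))  # 0.43: structural support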
Gong, Yu-Xin; Zhu, Bin; Liu, Guang-Lu; Liu, Lei; Ling, Fei; Wang, Gao-Xue; Xu, Xin-Gang
2015-01-01
To reduce the economic losses caused by diseases in the aquaculture industry, more efficient and economical prophylactic measures are urgently needed. In this research, the effects of novel functionalized single-walled carbon nanotubes (SWCNTs) applied as a delivery vehicle for recombinant Aeromonas hydrophila vaccine administration via bath or injection in juvenile grass carp were studied. The results showed that SWCNTs as a vector for the recombinant protein aerA augmented the production of specific antibodies, markedly stimulated the induction of immune-related genes, and induced a higher survival rate compared with the free aerA subunit vaccine. Furthermore, we compared bath and intramuscular injection immunization routes for the SWCNTs-aerA vaccine, and found that similar antibody levels were induced by SWCNTs-aerA in both immunization routes. Meanwhile, a similar relative percent survival (approximately 80%) was found in both a 40 mg/L bath immunization group and a 20 μg injection group. The results indicate that functionalized SWCNTs could be a promising delivery vehicle to potentiate the immune response to recombinant vaccines, and might be used to vaccinate juvenile fish by the bath administration method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Vlaeminck, Siegfried E; Dierick, Katleen; Boon, Nico; Verstraete, Willy
2007-07-01
Ammonium can be removed as dinitrogen gas by cooperating aerobic and anaerobic ammonium-oxidizing bacteria (AerAOB and AnAOB). The goal of this study was to verify putative mutual benefits for aggregated AerAOB and AnAOB in a stagnant freshwater environment. In an ammonium-fed water column, the biological oxygen consumption rate was, on average, 76 kg O(2) ha(-1) day(-1). As the oxygen transfer rate of an abiotic control column was only 17 kg O(2) ha(-1) day(-1), biomass activity enhanced the oxygen transfer. Increasing the AnAOB gas production increased the oxygen consumption rate by more than 50% as a result of enhanced vertical movement of the biomass. The coupled decrease in dissolved oxygen concentration increased the diffusional oxygen transfer from the atmosphere into the water. Physically preventing the biomass from rising to the upper water layer instantaneously decreased oxygen and ammonium consumption and even led to the occurrence of some sulfate reduction. Floating of the biomass was further confirmed to be beneficial, as it allowed for the development of higher AerAOB and AnAOB activity compared to settled biomass. Overall, the results support mutual benefits for aggregated AerAOB and AnAOB, derived from the biomass-uplifting effect of AnAOB gas production.
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. In this paper we seek to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to continuously solving the one-period Markowitz mean-variance problem under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
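Schematically, the criterion described above can be written as maximizing expected exponential utility of terminal wealth in excess of the benchmark; the notation below is generic rather than the paper's:

    \max_{\pi} \; \mathbb{E}\left[ -\exp\!\left( -\gamma \, \bigl( X_T^{\pi} - B_T \bigr) \right) \right], \qquad \gamma > 0,

where X_T^{\pi} is terminal wealth under an admissible strategy \pi and B_T is the terminal value of the benchmark portfolio.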
Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2000-01-01
The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry related problems and what issues need to be addressed to make CAA a robust design tool.
Coupling Processes between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Weisenstein, Debra; Shia, Run-Lie; Sze, N. D.
1998-01-01
This is the third semi-annual report for NAS5-97039, covering January through June 1998. The overall objective of this project is to improve the understanding of coupling processes between atmospheric chemistry and climate. Model predictions of the future distributions of trace gases in the atmosphere constitute an important component of the input necessary for quantitative assessments of global change. We will concentrate on the changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The key modeling tools for this work are the AER 2-dimensional chemistry-transport model, the AER 2-dimensional stratospheric sulfate model, and the AER three-wave interactive model with full chemistry. We will continue developing our three-wave model so that we can help NASA determine the strengths and weaknesses of the next generation of assessment models.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
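PMLB ships with a small Python interface; a minimal usage sketch, assuming the pmlb package is installed and that the fetch_data/return_X_y API matches recent releases:

    from pmlb import fetch_data, classification_dataset_names

    # Pull one curated benchmark dataset as a feature matrix and label vector
    X, y = fetch_data('mushroom', return_X_y=True)
    print(len(classification_dataset_names), X.shape)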
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).
Falconar, Andrew K. I.; Martinez, Fernando
2011-01-01
Antibody-enhanced replication (AER) of dengue type-2 virus (DENV-2) strains and production of antibody-enhanced disease (AED) was tested in out-bred mice. Polyclonal antibodies (PAbs) generated against the nonstructural-1 (NS1) glycoprotein candidate vaccine of the New Guinea-C (NG-C) or NSx strains reacted strongly and weakly with these antigens, respectively. These PAbs contained the IgG2a subclass, which cross-reacted with the virion-associated envelope (E) glycoprotein of the DENV-2 NSx strain, suggesting that they could generate its AER via all mouse Fcγ-receptor classes. Indeed, when these mice were challenged with a low dose (<0.5 LD50) of the DENV-2 NSx strain, but not the NG-C strain, they all generated dramatic and lethal DENV-2 AER/AED. These AER/AED mice developed life-threatening acute respiratory distress syndrome (ARDS), manifested as diffuse alveolar damage (DAD) resulting from i) dramatic interstitial alveolar septa-thickening with mononuclear cells, ii) some hyperplasia of alveolar type-II pneumocytes, iii) copious intra-alveolar protein secretion, iv) some hyaline membrane-covered alveolar walls, and v) DENV-2 antigen-positive alveolar macrophages. These mice also developed meningo-encephalitis, with greater than 90,000-fold DENV-2 AER titers in microglial cells located throughout their brain parenchyma, some of which formed nodules around dead neurons. Their spleens contained infiltrated megakaryocytes with DENV-2 antigen-positive red-pulp macrophages, while their livers displayed extensive necrosis, apoptosis and macro- and micro-steatosis, with DENV-2 antigen-positive Kupffer cells and hepatocytes. Their infections were confirmed by DENV-2 isolations from their lungs, spleens and livers. These findings accord with those reported in fatal human “severe dengue” cases. This DENV-2 AER/AED was blocked by high concentrations of only the NG-C NS1 glycoprotein. These results imply a potential hazard of DENV NS1 glycoprotein-based vaccines, particularly against DENV strains that contain multiple mutations or genetic recombination within or between their DENV E and NS1 glycoprotein-encoding genes. The model provides potential for assessing DENV strain pathogenicity and anti-DENV therapies in normal mice. PMID:21731643
A suite of benchmark and challenge problems for enhanced geothermal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark; Fu, Pengcheng; McClure, Mark
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective of the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research (stimulation, development, and circulation) in two separate reservoirs. The challenge problems posed specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective of the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.
Air exchange rates and migration of VOCs in basements and residences
Du, Liuliu; Batterman, Stuart; Godwin, Christopher; Rowe, Zachary; Chin, Jo-Yu
2015-01-01
Basements can influence indoor air quality by affecting air exchange rates (AERs) and by the presence of emission sources of volatile organic compounds (VOCs) and other pollutants. We characterized VOC levels, AERs and interzonal flows between basements and occupied spaces in 74 residences in Detroit, Michigan. Flows were measured using a steady-state multi-tracer system, and 7-day VOC measurements were collected using passive samplers in both living areas and basements. A walkthrough survey/inspection was conducted in each residence. AERs in residences and basements averaged 0.51 and 1.52 h−1, respectively, and had strong and opposite seasonal trends, e.g., AERs were highest in residences during the summer, and highest in basements during the winter. Air flows from basements to occupied spaces also varied seasonally. VOC concentration distributions were right-skewed, e.g., 90th percentile benzene, toluene, naphthalene and limonene concentrations were 4.0, 19.1, 20.3 and 51.0 μg m−3, respectively; maximum concentrations were 54, 888, 1117 and 134 μg m−3. Identified VOC sources in basements included solvents, household cleaners, air fresheners, smoking, and gasoline-powered equipment. The number and type of potential VOC sources found in basements are significant and problematic, and may warrant advisories regarding the storage and use of potentially strong VOCs sources in basements. PMID:25601281
Funk, S E; Reaven, N L
2014-04-01
The use of flexible endoscopes is growing rapidly around the world. Dominant approaches to high-level disinfection among resource-constrained countries include fully manual cleaning and disinfection and the use of automated endoscope reprocessors (AERs). Suboptimal reprocessing at any step can potentially lead to contamination, with consequences for patients and healthcare systems. The aim was to compare the potential results of guideline-recommended AERs to manual disinfection along three dimensions (productivity, need for endoscope repair, and infection transmission risk) in India, China, and Russia, using financial modelling with data from peer-reviewed published literature and country-specific market research. In countries where revenue can be gained through productivity improvements, conversion to automated reprocessing has a positive direct impact on financial performance, paying back the capital investment within 14 months in China and seven months in Russia. In India, AER-generated savings and revenue offset nearly all of the additional operating costs needed to support automated reprocessing. Among endoscopy facilities in India and China, current survey-reported practices in endoscope reprocessing using manual soaking may place patients at risk of exposure to pathogens leading to infections. Conversion from manual soak to use of AERs, as recommended by the World Gastroenterology Organization, may generate cost and revenue offsets that could produce direct financial gains for some endoscopy units in Russia and China. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
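The payback arithmetic behind such findings reduces to capital cost divided by net monthly benefit. The figures below are hypothetical, chosen only to illustrate paybacks of the reported magnitude; they are not the paper's country-specific inputs.

    def payback_months(capital_cost, monthly_net_benefit):
        """Months to recover an AER capital investment under a flat monthly gain."""
        return capital_cost / monthly_net_benefit

    # Hypothetical, currency-neutral figures for two facilities
    print(payback_months(35_000, 5_000))  # 7.0 months
    print(payback_months(70_000, 5_000))  # 14.0 months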
A comparative study of upwind and MacCormack schemes for CAA benchmark problems
NASA Technical Reports Server (NTRS)
Viswanathan, K.; Sankar, L. N.
1995-01-01
In this study, upwind schemes and MacCormack schemes are evaluated for their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are then presented.
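For reference, the MacCormack predictor-corrector can be written in a few lines for the 1-D linear advection model problem u_t + c u_x = 0. This Python sketch is the standard second-order variant on a periodic domain, not the paper's fourth-order curvilinear finite-volume implementation.

    import numpy as np

    c, nx, cfl = 1.0, 200, 0.8
    dx = 1.0 / nx
    dt = cfl * dx / c
    x = np.arange(nx) * dx
    u = np.exp(-200.0 * (x - 0.5) ** 2)  # Gaussian acoustic-like pulse

    for _ in range(100):
        # predictor: forward difference
        up = u - c * dt / dx * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted field
        u = 0.5 * (u + up) - 0.5 * c * dt / dx * (up - np.roll(up, 1))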
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of the code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
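The sensitivity measures named above are standard; the Python sketch below uses synthetic stand-in data (not the BISON/Dakota samples) to show how the linear and rank correlation coefficients are computed.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(0)
    x = rng.normal(size=300)                                  # stand-in input parameter
    y = 1200.0 + 40.0 * x + rng.normal(scale=15.0, size=300)  # stand-in response

    print(pearsonr(x, y)[0], spearmanr(x, y)[0])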
Analyzing the BBOB results by means of benchmarking concepts.
Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C
2015-01-01
We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
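One simple consensus construction of the kind discussed above is aggregation by mean rank (a Borda-like rule); the paper examines when such aggregation is and is not appropriate. A toy Python sketch with made-up rankings, not the BBOB results:

    import numpy as np

    # ranks[i, j] = rank of algorithm j on problem i (1 = best); toy data
    ranks = np.array([[1, 2, 3],
                      [2, 1, 3],
                      [1, 3, 2],
                      [2, 1, 3]])
    mean_rank = ranks.mean(axis=0)     # Borda-like score per algorithm
    consensus = np.argsort(mean_rank)  # aggregate ordering (algorithm indices)
    print(mean_rank, consensus)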
Batterman, Stuart; Jia, Chunrong; Hatzivasilis, Gina; Godwin, Chris
2006-02-01
Air exchange rates and interzonal flows are critical ventilation parameters that affect thermal comfort, air migration, and contaminant exposure in buildings and other environments. This paper presents the development of an updated approach to measure these parameters using perfluorocarbon tracer (PFT) gases, the constant injection rate method, and adsorbent-based sampling of PFT concentrations. The design of miniature PFT sources using hexafluorotoluene and octafluorobenzene tracers, and the development and validation of an analytical GC/MS method for these tracers, are described. We show that simultaneous deployment of sources and passive samplers, which is logistically advantageous, will not cause significant errors over multiday measurement periods in buildings, or over shorter periods in rapidly ventilated spaces like vehicle cabins. Measurement of the tracers over periods of hours to a week may be accomplished using active or passive samplers, and low method detection limits (<0.025 microg m(-3)) and high precision (<10%) are easily achieved. The method obtains the effective air exchange rate (AER), which is relevant to characterizing long-term exposures, especially when ventilation rates are time-varying. In addition to measuring the PFT tracers, concentrations of other volatile organic compounds (VOCs) are simultaneously determined. Pilot tests in three environments (residence, garage, and vehicle cabin) demonstrate the utility of the method. The 4 day effective AER in the house was 0.20 h(-1), the 4 day AER in the attached garage was 0.80 h(-1), and 16% of the ventilation in the house migrated from the garage. The 5 h AER in a vehicle traveling at 100 km h(-1) under a low-to-medium vent condition was 92 h(-1), which represents the highest speed test found in the literature. The method is attractive in that it simultaneously determines AERs, interzonal flows, and VOC concentrations over long and representative test periods. These measurements are practical, cost-effective, and helpful in indoor air quality and other investigations.
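The effective AER follows directly from the constant-injection mass balance: at steady state C = E / (AER * V), so AER = E / (C * V). A minimal Python sketch with hypothetical inputs sized to give a value like the 0.20 h(-1) house result above:

    def effective_aer(emission_ug_per_h, conc_ug_per_m3, volume_m3):
        """Effective air exchange rate (1/h), constant-injection tracer method."""
        return emission_ug_per_h / (conc_ug_per_m3 * volume_m3)

    # Hypothetical source strength, average tracer level, and house volume
    print(round(effective_aer(50.0, 0.6, 420.0), 2))  # ~0.20 1/h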
Plath, Johannes E; Seiberl, Wolfgang; Beitzel, Knut; Minzlaff, Philipp; Schwirtz, Ansgar; Imhoff, Andreas B; Buchmann, Stefan
2014-08-01
The purpose of this study was to investigate coactivation (CoA) testing as a clinical tool to monitor motor learning after latissimus dorsi tendon transfer. We evaluated 20 patients clinically with the American Shoulder and Elbow Surgeons (ASES) and University of California-Los Angeles (UCLA) outcomes scores, visual analog scale, active external rotation (aER), and isometric strength testing in abduction and external rotation. Measurements of aER were performed while the latissimus dorsi was activated in its new function of external rotation with concomitant activation (coactivation) of its native functions (adduction and extension). Bilateral surface electromyographic (EMG) activity was recorded during aER measurements and the strength testing procedure (EMG activity ratio: with/without CoA). Patients were divided into two groups (excellent/good vs fair/poor) according to the results of the ASES and UCLA scores. The mean follow-up was 57.8 ± 25.2 months. Subdivided by clinical scores, the superior outcome group lost aER with CoA, whereas the inferior outcome group gained aER (UCLA score: -2.2° ± 7.4° vs +4.3° ± 4.1°; P = .031). Patients with inferior outcomes in the ASES score showed higher latissimus dorsi EMG activity ratios (P = .027), suggesting an inadequate motor learning process. Isometric strength testing revealed that the latissimus dorsi transfer had significantly greater activity compared with the contralateral side (external rotation, P = .008; abduction, P = .006) but did not have comparable strength (external rotation, P = .017; abduction, P = .009). Patients with inferior clinical results were more likely to be dependent on CoA to gain external rotation. Therefore, CoA testing may be used as a tool to evaluate the status of postoperative motor learning after latissimus dorsi transfer. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems
NASA Technical Reports Server (NTRS)
Lin, Wen H.; Chan, Daniel C.
1997-01-01
This paper presents computed results for some of the CAA benchmark problems obtained via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations serve as benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of solving the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of this solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1 for the propagation of a linear wave and the first problem of Category 4 for an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim in including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions, and the agreement is excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems, and makes the method quite attractive for developing a computational acoustic solver for calculating aero/hydrodynamic noise in a violent flow environment.
Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.
Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian
2017-03-01
One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmark criteria for staff training in responsible gambling.
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
The Annual Energy Review (AER) presents the Energy Information Administration’s historical energy statistics. For many series, statistics are given for every year from 1949 through 2000. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is required under Public Law 95–91 (Department of Energy Organization Act), Section 205(c), and is in keeping with responsibilities given to the Energy Information Administration under Section 205(a)(2), which states: “The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information....” The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiferlein, Katherine E.
1998-07-01
The Annual Energy Review (AER) presents the Energy Information Administration’s historical energy statistics. For many series, statistics are given for every year from 1949 through 1997. The statistics, expressed in either physical units or British thermal units, cover all major energy activities, including consumption, production, trade, stocks, and prices, for all major energy commodities, including fossil fuels, electricity, and renewable energy sources. Publication of this report is in keeping with responsibilities given to the Energy Information Administration (EIA) in Public Law 95–91 (Department of Energy Organization Act), which states, in part, in Section 205(a)(2) that: “The Administrator shall be responsible for carrying out a central, comprehensive, and unified energy data and information program which will collect, evaluate, assemble, analyze, and disseminate data and information....” The AER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. EIA welcomes suggestions from readers regarding data series in the AER and in other EIA publications.
Tobe, Russell H; Corcoran, Cheryl M; Breland, Melissa; MacKay-Brandt, Anna; Klim, Casimir; Colcombe, Stanley J; Leventhal, Bennett L; Javitt, Daniel C
2016-08-01
Impairment in social cognition, including emotion recognition, has been extensively studied in both Autism Spectrum Disorders (ASD) and Schizophrenia (SZ). However, the relative patterns of deficit between the disorders have been studied to a lesser degree. Here, we applied a social cognition battery incorporating both auditory (AER) and visual (VER) emotion recognition measures to a group of 19 high-functioning individuals with ASD, 92 individuals with SZ, and 73 healthy adult control participants. We examined group differences and correlates of basic auditory processing and processing speed. Individuals with SZ were impaired in both AER and VER, while ASD individuals were impaired in VER only. In contrast to SZ participants, those with ASD showed intact basic auditory function. Our finding of a dissociation between AER and VER deficits in ASD relative to SZ supports modality-specific theories of emotion recognition dysfunction. Future studies should focus on visual system-specific contributions to social cognitive impairment in ASD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
Autoimmune diseases in Adult Life after Childhood Cancer in Scandinavia (ALiCCS).
Holmqvist, Anna Sällfors; Olsen, Jørgen H; Mellemkjaer, Lene; Garwicz, Stanislaw; Hjorth, Lars; Moëll, Christian; Månsson, Bengt; Tryggvadottir, Laufey; Hasle, Henrik; Winther, Jeanette Falck
2016-09-01
The pattern of autoimmune diseases in childhood cancer survivors has not been investigated previously. We estimated the risk for an autoimmune disease after childhood cancer in a large, population-based setting with outcome measures from comprehensive, nationwide health registries. From the national cancer registries of Denmark, Iceland and Sweden, we identified 20 361 1-year survivors of cancer diagnosed before the age of 20 between the start of cancer registration in the 1940s and 1950s through 2008; 125 794 comparison subjects, matched by age, gender and country, were selected from national population registers. Study subjects were linked to the national hospital registers. Standardised hospitalisation rate ratios (SHRRs) and absolute excess risks (AERs) were calculated. Childhood cancer survivors had a significantly increased SHRR of 1.4 (95% CI 1.3 to 1.5) of all autoimmune diseases combined, corresponding to an AER of 67 per 100 000 person-years. The SHRRs were significantly increased for autoimmune haemolytic anaemia (16.3), Addison's disease (13.9), polyarteritis nodosa (5.8), chronic rheumatic heart disease (4.5), localised scleroderma (3.6), idiopathic thrombocytopenic purpura (3.4), Hashimoto's thyroiditis (3.1), pernicious anaemia (2.7), sarcoidosis (2.2), Sjögren's syndrome (2.0) and insulin-dependent diabetes mellitus (1.6). The SHRRs for any autoimmune disease were significantly increased after leukaemia (SHRR 1.6), Hodgkin's lymphoma (1.6), renal tumours (1.6) and central nervous system neoplasms (1.4). Childhood cancer survivors are at increased risk for certain types of autoimmune diseases. These findings underscore the need for prolonged follow-up of these survivors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
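Both outcome measures are simple functions of observed and expected case counts: SHRR = O/E, and AER = (O - E) / person-years, scaled per 100 000. The counts in the Python sketch below are illustrative only, chosen to land near the reported overall SHRR of 1.4 and AER of 67; they are not the study's tallies.

    def shrr_and_aer(observed, expected, person_years):
        """Standardised hospitalisation rate ratio and absolute excess risk
        per 100 000 person-years."""
        shrr = observed / expected
        aer = (observed - expected) / person_years * 1e5
        return shrr, aer

    print(shrr_and_aer(280, 200, 120_000))  # (1.4, ~66.7)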
Sawhney, Vinit; Domenichini, Giulia; Gamble, James; Furniss, Guy; Panagopoulos, Dimitrios; Lambiase, Pier; Rajappan, Kim; Chow, Anthony; Lowe, Martin; Sporton, Simon; Earley, Mark J; Dhinoja, Mehul; Campbell, Niall; Hunter, Ross J; Haywood, Guy; Betts, Tim R; Schilling, Richard J
2018-06-01
Endocardial left ventricular (LV) pacing is a viable alternative in patients with failed coronary sinus (CS) lead implantation. However, the long-term thrombo-embolic risk remains unknown, and much of the data have come from a small number of centres. We examined the safety and efficacy of endocardial LV pacing to determine the long-term thrombo-embolic risk. Registries from four UK centres were combined to include 68 patients with endocardial leads and a mean follow-up of 20 months. These were compared to a matched 1:2 control group with conventional CS leads. Medical records were reviewed, and patients contacted for follow-up. Ischaemic stroke occurred in four patients (6%) in the endocardial arm, providing an annual event rate (AER) of 3.6% over a 20-month follow-up, compared to 9 patients (6.6%) amongst controls, with an AER of 3.4% over a 23-month follow-up. Regression analyses showed a significant association between sub-therapeutic international normalized ratio and stroke (P = 0.0001) in the endocardial arm. There was no association between lead material or mode of delivery (transatrial/transventricular) and stroke. The mortality rate was 12 and 15 per 100 patient-years in the endocardial and control arms, respectively, with end-stage heart failure being the commonest cause. Endocardial LV leads in heart failure patients have a good success rate at 1.6-year follow-up. However, they are associated with a thrombo-embolic risk (not different from that of conventional CS leads) attributable to sub-therapeutic anticoagulation. Randomized controlled trials and studies on non-vitamin K antagonist oral anticoagulants are required to ascertain the potential for widespread clinical application of this therapeutic modality.
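The annual event rate quoted above is events per 100 patient-years. Reproducing the endocardial-arm figure from the reported counts (the small difference from the published 3.6% presumably reflects rounding of the follow-up duration):

    def annual_event_rate(events, n_patients, mean_follow_up_months):
        """Events per 100 patient-years, assuming uniform follow-up."""
        patient_years = n_patients * mean_follow_up_months / 12.0
        return 100.0 * events / patient_years

    print(round(annual_event_rate(4, 68, 20), 1))  # ~3.5% per year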
An Intercomparison of 2-D Models Within a Common Framework
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.; Scott, Courtney J.; Jackman, Charles H.; Fleming, Eric L.; Considine, David B.; Kinnison, Douglas E.; Connell, Peter S.; Rotman, Douglas A.; Bhartia, P. K. (Technical Monitor)
2002-01-01
A model intercomparison among the Atmospheric and Environmental Research (AER) 2-D model, the Goddard Space Flight Center (GSFC) 2-D model, and the Lawrence Livermore National Laboratory (LLNL) 2-D model allows us to separate differences due to model transport from those due to the models' chemical formulations. This is accomplished by constructing two hybrid models incorporating the transport parameters of the GSFC and LLNL models within the AER model framework. By comparing the results from the native models (e.g., AER and GSFC) with those from the hybrid model (e.g., AER chemistry with GSFC transport), differences due to chemistry and transport can be identified. For the analysis, we examined an inert tracer whose emission pattern is based on emissions from a High Speed Civil Transport (HSCT) fleet; distributions of trace species in the 2015 atmosphere; and the response of stratospheric ozone to an HSCT fleet. Differences in NO(y) in the upper stratosphere are found between models with identical transport, implying different model representations of atmospheric chemical processes. The response of O3 concentration to HSCT aircraft emissions differs among the models, arising both from transport-dominated differences in the HSCT-induced perturbations of H2O and NO(y) and from differences in the model representations of O3 chemical processes. The model formulations of cold polar processes are found to be the most significant factor in creating large differences in the calculated ozone perturbations.
Clemente-Suárez, Vicente Javier; Dalamitros, Athanasios; Ribeiro, João; Sousa, Ana; Fernandes, Ricardo J; Vilas-Boas, J Paulo
2017-05-01
This study analysed the effects of two different periodization strategies on physiological parameters at various exercise intensities in competitive swimmers. Seventeen athletes of both sexes were divided into two groups, the traditional periodization group (TPG, n = 7) and the reverse periodization group (RPG, n = 10). Each group followed a 10-week training period based on one of the two periodization strategies. Before and after training, swimming velocity (SV), energy expenditure (EE), energy cost (EC) and the percentage of aerobic (%Aer) and anaerobic (%An) energy contribution were measured at the swimming intensities corresponding to the aerobic threshold (AerT), the anaerobic threshold (AnT) and the velocity at maximal oxygen uptake (vVO2max). Both groups increased the %An at the AerT and AnT intensities (P ≤ .05). In contrast, at the AnT intensity, EE and EC increased only in the TPG. Complementarily, %Aer, %An, EE and EC at vVO2max did not change in either group (P > .05); no changes were observed in SV in the TPG or RPG at any of the three intensities. These results indicate that both periodization schemes confer almost analogous adaptations in specific physiological parameters in competitive swimmers. However, given the large difference in total training volume between the two groups, it is suggested that the implementation of the reverse periodization model is an effective and time-efficient strategy to improve performance, mainly for swimming events where the AnT is an important performance indicator.
Reyes, M; Borrás, L; Seco, A; Ferrer, J
2015-01-01
Eight different phenotypes were studied in an activated sludge process (AeR) and an anaerobic digester (AnD) in a full-scale wastewater treatment plant by means of fluorescent in situ hybridization (FISH) and automated FISH quantification software. The phenotypes were ammonia-oxidizing bacteria, nitrite-oxidizing bacteria, denitrifying bacteria, phosphate-accumulating organisms (PAO), glycogen-accumulating organisms (GAO), sulphate-reducing bacteria (SRB), methanotrophic bacteria and methanogenic archaea. Some findings were unexpected: (a) presence of PAO, GAO and denitrifiers in the AeR, possibly due to unexpected environmental conditions caused by oxygen deficiencies or their ability to survive aerobically; (b) presence of SRB in the AeR, due to the high sulphate content of the wastewater intake and possibly also to digested sludge being recycled back into the primary clarifier; (c) presence of methanogenic archaea in the AeR, which can be explained by the recirculation of digested sludge and their ability to survive periods of high oxygen levels; (d) presence of denitrifying bacteria in the AnD, which cannot be fully explained because the nitrate level in the AnD was not measured; however, other authors have reported the existence of denitrifiers in environments where nitrate or oxygen was not present, suggesting that denitrifiers can survive in nitrate-free anaerobic environments by carrying out low-level fermentation; (e) the results of this paper are relevant because of the focus on the identification of nearly all the significant bacterial and archaeal groups of microorganisms with a known phenotype involved in biological wastewater treatment.
Radiation Detection Computational Benchmark Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This report describes the details of the selected benchmarks and results from various transport codes.
Model Prediction Results for 2007 Ultrasonic Benchmark Problems
NASA Astrophysics Data System (ADS)
Kim, Hak-Joon; Song, Sung-Jin
2008-02-01
The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and the effects of surface curvature on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for the calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for the calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions to experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses for different sizes and interface curvatures.
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnis Judzis
2002-10-01
This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter starting July 2002 through September 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to 'optimize' their much delayed tool to 2003 and to allow Smith International to add 'benchmarking' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to complete the optimizations.
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; • the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; • the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve a modeling effort by the benchmark participants following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated; AGR-2 input data added.
ERIC Educational Resources Information Center
Herman, Joan L.; Baker, Eva L.
2005-01-01
Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
The growing access to and reduced cost of computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of these reliability-driven actions (e.g., restrictions, water transfers, and infrastructure pathways) against their inherent financial risks. Several traits make this problem ideal as a benchmark, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
Benchmarking: A Process for Improvement.
ERIC Educational Resources Information Center
Peischl, Thomas M.
One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, partially addresses that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
…user benchmarks is to compare the multiple users to the best-case performance… The data for each query classification coll… and the performance… called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure… formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey…
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and the results are compared with the benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
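To make the broadening step concrete, here is a minimal numerical sketch of temperature broadening with a Gaussian kernel whose width follows the Doppler width. It is an illustration only, not the finite-difference scheme implemented in KENO, and all parameter values are assumed for the example.

```python
import numpy as np

def doppler_broaden(E, sigma, T, A, kB=8.617333e-5):
    """Broaden a pointwise cross section sigma(E) to temperature T (K) by
    averaging with a Gaussian kernel of width Gamma_D = sqrt(4 E kB T / A),
    the standard Doppler width (E in eV, A = target-to-neutron mass ratio).
    Illustrative only; production codes use exact broadening kernels."""
    sigma_b = np.empty_like(sigma)
    for i, E0 in enumerate(E):
        gamma = np.sqrt(4.0 * E0 * kB * T / A)   # Doppler width in eV
        w = np.exp(-((E - E0) / gamma) ** 2)     # Gaussian weights on the grid
        sigma_b[i] = (w @ sigma) / w.sum()       # kernel-averaged value
    return sigma_b

# Toy resonance at 6.67 eV: the peak drops and the wings widen at 900 K
E = np.linspace(1.0, 20.0, 4000)
sigma_0K = 1.0 + 50.0 / (1.0 + ((E - 6.67) / 0.05) ** 2)
sigma_900K = doppler_broaden(E, sigma_0K, T=900.0, A=238.0)
print(f"peak: {sigma_0K.max():.1f} at 0 K -> {sigma_900K.max():.1f} at 900 K")
```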
Peng, Hui; Ma, Guofu; Sun, Kanjun; Mu, Jingjing; Zhang, Zhe; Lei, Ziqiang
2014-12-10
Two-dimensional mesoporous carbon nanosheets (CNSs) have been prepared via a simultaneous activation and catalytic carbonization route using macroporous anion-exchange resin (AER) as the carbon precursor and ZnCl2 and FeCl3 as the activating agent and catalyst, respectively. The iron catalyst in the skeleton of the AER may lead to carburization, forming a sheetlike structure during the carbonization process. The obtained CNSs have a large number of mesopores, a maximum specific surface area of 1764.9 m(2) g(-1), and a large pore volume of 1.38 cm(3) g(-1). As an electrode material for supercapacitor applications, the CNS electrode possesses a large specific capacitance of 283 F g(-1) at 0.5 A g(-1) and excellent rate capability (64% retention even at 50 A g(-1)) in 6 mol L(-1) KOH. Furthermore, a CNS symmetric supercapacitor exhibits a specific energy of 17.2 W h kg(-1) at a power density of 224 W kg(-1) operated in the voltage range of 0-1.8 V in 0.5 mol L(-1) Na2SO4 aqueous electrolyte, and outstanding cyclability (retaining about 96% of its initial capacitance after 5000 cycles).
Wan, J; Wilcock, A; Coventry, M J
1998-02-01
Basil essential oils, including basil sweet linalool (BSL) and basil methyl chavicol (BMC), were screened for antimicrobial activity against a range of Gram-positive and Gram-negative bacteria, yeasts and moulds using an agar well diffusion method. Both essential oils showed antimicrobial activity against most of the micro-organisms examined except Clostridium sporogenes, Flavimonas oryzihabitans, and three species of Pseudomonas. The minimum inhibitory concentration (MIC) of BMC against Aeromonas hydrophila and Pseudomonas fluorescens in TSYE broth (as determined using an indirect impedance method) was 0.125 and 2% (v/v), respectively; the former was not greatly affected by the increase of challenge inoculum from 10(3) to 10(6) cfu ml-1. Results with resting cells demonstrated that BMC was bactericidal to both Aer. hydrophila and Ps. fluorescens. The growth of Aer. hydrophila in filter-sterilized lettuce extract was completely inhibited by 0.1% (v/v) BMC whereas that of Ps. fluorescens was not significantly affected by 1% (v/v) BMC. In addition, the effectiveness of washing fresh lettuce with 0.1 or 1% (v/v) BMC on survival of natural microbial flora was comparable with that effected by 125 ppm chlorine.
Improving the toughness of ultrahigh strength steel
NASA Astrophysics Data System (ADS)
Sato, Koji
2002-01-01
The ideal structural steel combines high strength with high fracture toughness. This dissertation discusses the toughening mechanism of the Fe/Co/Ni/Cr/Mo/C steel AerMet 100, which has the highest toughness/strength combination among all commercial ultrahigh-strength steels. The possibility of improving the toughness of this steel was examined by considering several relevant factors. Chapter 1 reviews the mechanical properties of ultrahigh-strength steels and the physical metallurgy of AerMet 100. It also describes the fracture mechanisms of steel, i.e. ductile microvoid coalescence, brittle transgranular cleavage, and intergranular separation. Chapter 2 examines the strength-toughness relationship for three heats of AerMet 100. A wide variation of toughness is obtained at the same strength level, despite the fact that all heats fracture in the ductile mode. The difference originates from the inclusion content. A lower inclusion volume fraction and larger inclusion spacing give rise to a greater void growth factor and consequently a higher fracture toughness. The fracture toughness value, JIc, is proportional to the particle spacing of the large non-metallic inclusions. Chapter 3 examines the ductile-brittle transition of AerMet 100 and the effect of a higher austenitization temperature, using the Charpy V-notch test. The standard heat treatment condition of AerMet 100 shows a gradual ductile-brittle transition due to its fine effective grain size. Austenitization at higher temperature increases the prior austenite grain size and packet size, leading to a steeper transition at a higher temperature. Both transgranular cleavage and intergranular separation are observed in the brittle fracture mode. Chapter 4 examines the effect of inclusion content, prior austenite grain size, and the amount of austenite on the strength-toughness relationship. The highest toughness is achieved by low inclusion content, small prior austenite grain size, and a small content of stable austenite. The low inclusion content increases the strain at fracture. The reduction in prior austenite grain size prevents fast unstable crack propagation by cleavage. The stable austenite decreases the strength of the intergranular separation at the prior austenite grain boundary, providing stress relief at the crack tip.
Tribology of Langmuir-Blodgett Films
1992-03-01
[OCR residue from a DTIC report documentation page; the recoverable content indicates an interim technical report on polymeric systems and the use of Langmuir-Blodgett films as lubricants. Keywords: Tribology, Langmuir-Blodgett Films.]
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
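As an illustration of what such a one-dimensional reactive transport benchmark computes, the sketch below solves the advection-dispersion equation with first-order decay by simple explicit finite differences. It is a toy analogue written for this summary, not the RT1D/VBA code itself, and the column parameters are assumed.

```python
import numpy as np

def advect_disperse_react(c0, v, D, k, dx, dt, steps):
    """Explicit solver for dc/dt = -v dc/dx + D d2c/dx2 - k c:
    upwind advection, central-difference dispersion, first-order decay.
    A toy analogue of a 1-D reactive transport column, not the RT1D code."""
    c = c0.copy()
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx            # upwind difference
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + dif - k * c)
        c[0] = 1.0                                      # constant-concentration inlet
        c[-1] = c[-2]                                   # zero-gradient outlet
    return c

# Column: 1 m, v = 1 m/d, D = 0.01 m^2/d, first-order decay k = 0.5 /d
n, dx = 200, 1.0 / 200
dt = 0.2 * dx / 1.0                                     # CFL- and diffusion-safe step
c = advect_disperse_react(np.zeros(n), v=1.0, D=0.01, k=0.5,
                          dx=dx, dt=dt, steps=int(0.5 / dt))
print(f"concentration at the outlet after 0.5 d: {c[-1]:.3f}")
```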
Damorim, Igor Rodrigues; Santos, Tony Meireles; Barros, Gustavo Willames Pimentel; Carvalho, Paulo Roberto Cavalcanti
2017-04-01
Resistance and aerobic training are recommended as an adjunctive treatment for hypertension. However, the number of sessions required until the hypotensive effect of the exercise has stabilized has not been clearly established. To establish the adaptive kinetics of the blood pressure (BP) responses as a function of time and type of training in hypertensive patients. We recruited 69 patients with a mean age of 63.4 ± 2.1 years, randomized into one group of resistance training (n = 32) and another of aerobic training (n = 32). Anthropometric measurements were obtained, and one repetition maximum (1RM) testing was performed. BP was measured before each training session with a digital BP arm monitor. The 50 training sessions were categorized into quintiles. To compare the effect of BP reduction between both training methods, we used two-way analysis of covariance (ANCOVA) adjusted for the BP values obtained before the interventions. The differences between the moments were established by one-way analysis of variance (ANOVA). The reductions in systolic (SBP) and diastolic BP (DBP) were 6.9 mmHg and 5.3 mmHg, respectively, with resistance training and 16.5 mmHg and 11.6 mmHg, respectively, with aerobic training. The kinetics of the hypotensive response of the SBP showed significant reductions until the 20th session in both groups. Stabilization of the DBP occurred in the 20th session of resistance training and in the 10th session of aerobic training. A total of 20 sessions of resistance or aerobic training are required to achieve the maximum benefits of BP reduction. The methods investigated yielded distinct adaptive kinetic patterns along the 50 sessions.
Harpaz, Rave; Vilar, Santiago; DuMouchel, William; Salmasian, Hojjat; Haerian, Krystl; Shah, Nigam H; Chase, Herbert S; Friedman, Carol
2013-01-01
Objective: Data-mining algorithms that can produce accurate signals of potentially novel adverse drug reactions (ADRs) are a central component of pharmacovigilance. We propose a signal-detection strategy that combines the adverse event reporting system (AERS) of the Food and Drug Administration and electronic health records (EHRs) by requiring signaling in both sources. We claim that this approach leads to improved accuracy of signal detection when the goal is to produce a highly selective ranked set of candidate ADRs. Materials and methods: Our investigation was based on over 4 million AERS reports and information extracted from 1.2 million EHR narratives. Well-established methodologies were used to generate signals from each source. The study focused on ADRs related to three high-profile serious adverse reactions. A reference standard of over 600 established and plausible ADRs was created and used to evaluate the proposed approach against a comparator. Results: The combined signaling system achieved a statistically significant large improvement over AERS (baseline) in the precision of top ranked signals. The average improvement ranged from 31% to almost threefold for different evaluation categories. Using this system, we identified a new association between the agent, rasburicase, and the adverse event, acute pancreatitis, which was supported by clinical review. Conclusions: The results provide promising initial evidence that combining AERS with EHRs via the framework of replicated signaling can improve the accuracy of signal detection for certain operating scenarios. The use of additional EHR data is required to further evaluate the capacity and limits of this system and to extend the generalizability of these results. PMID:23118093
NASA Astrophysics Data System (ADS)
Youssefi, Somayeh; Waring, Michael S.
2015-07-01
The ozonolysis of reactive organic gases (ROG), e.g. terpenes, generates secondary organic aerosol (SOA) indoors. The SOA formation strength of such reactions is parameterized by the aerosol mass fraction (AMF), a.k.a. SOA yield, which is the mass ratio of generated SOA to oxidized ROG. AMFs vary in magnitude both among and for individual ROGs. Here, we quantified dynamic SOA formation from the ozonolysis of α-pinene with 'transient AMFs,' which describe SOA formation due to pulse emission of a ROG in an indoor space with air exchange, as is common when consumer products are intermittently used in ventilated buildings. We performed 19 experiments at low, moderate, and high (0.30, 0.52, and 0.94 h-1, respectively) air exchange rates (AER) at varying concentrations of initial reactants. Transient AMFs as a function of peak SOA concentrations ranged from 0.071 to 0.25, and they tended to increase as the AER and product of the initial reactant concentrations increased. Compared to our similar research on limonene ozonolysis (Youssefi and Waring, 2014), for which formation strength was driven by secondary ozone reactions, the AER impact for α-pinene was opposite in direction and weaker, while the initial reactant product impact was in the same direction but stronger for α-pinene than for limonene. Linear fits of AMFs for α-pinene ozonolysis as a function of the AER and initial reactant concentrations are provided so that future indoor models can predict SOA formation strength.
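For reference, the defining relation behind the AMF as stated above, written in generic notation rather than the paper's own symbols:

```latex
% Aerosol mass fraction (SOA yield): mass of SOA formed per mass of ROG oxidized,
\mathrm{AMF} \;=\; \frac{C_{\mathrm{SOA}}}{\Delta C_{\mathrm{ROG}}} .
% The transient AMF applies this ratio to a pulse ROG emission in a space with
% air exchange and is reported as a function of the peak SOA concentration, so
% the air exchange rate and the initial reactant product both shift its value.
```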
Benchmarking comparison and validation of MCNP photon interaction data
NASA Astrophysics Data System (ADS)
Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.
2017-09-01
The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
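For orientation, the pseudo-analytical treatment available for a problem like BM1 has a closed form in the first-order-kinetics limit; a sketch in generic notation (not the task group's exact formulation):

```latex
% Steady diffusion-reaction in a flat biofilm of thickness L_f with
% first-order kinetics:  D \, \frac{d^2 S}{dz^2} = k_1 S,
% with S(L_f) = S_s at the biofilm surface and dS/dz = 0 at the substratum.
% The substrate flux into the biofilm is then
J \;=\; \sqrt{k_1 D}\; S_s \,\tanh\!\Bigl(L_f \sqrt{k_1/D}\Bigr),
% with limits J \to k_1 L_f S_s (thin, fully penetrated film) and
% J \to \sqrt{k_1 D}\, S_s (deep film).
```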
Benchmark Problems for Space Mission Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard
2003-01-01
To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit, high altitude, highly elliptical Earth orbits, and large amplitude lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.
Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Gratkowski, S.
2005-04-09
In this paper a third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designed for testing tubes made of Inconel. This can be achieved by maximizing the change in impedance of the coil due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used in order to reduce the number of required calculations, resulting in a significant speed-up of the optimization process. An optimal testing frequency and probe size were obtained as the final result of the calculation.
NASA Astrophysics Data System (ADS)
Tokura, Norihito; Yamamoto, Takao; Kato, Hisato; Nakagawa, Akio
We have studied the dynamic avalanche phenomenon in an SOI lateral diode during reverse recovery by using a mixed-mode device simulation. In the study, it has been found that local impact ionization occurs near an anode-side field oxide edge, where a high-density hole current flows and a high electric field appears simultaneously. We propose that a p-type anode extension region (AER) along a trench side wall effectively sweeps out stored carriers beneath an anode p-diffusion layer during reverse recovery, resulting in a reduction of the electric field and a remarkable suppression of the dynamic avalanche. The AER reduces the total recovery charge and does not cause any increase in the total stored charge under forward bias operation. This effect was verified experimentally with a fabricated device incorporating the AER. Thus, the developed SOI lateral diode is promising as a high-speed and highly rugged free-wheeling diode, which can be integrated into next-generation SOI microinverters.
Asarnow, R F; Cromwell, R L; Rennick, P M
1978-10-01
Twenty-four male schizophrenics, 12 (SFH) with schizophrenia in the immediate family and 12 (SNFH) with no evidence of schizophrenia in the family background, and 24 male control subjects, 12 highly educated (HEC), and 12 minimally educated (MEC), were assessed for premorbid social adjustment and were administered the Digit Symbol Substitution Test, a size estimation task, and the EEG average evoked response (AER) at different levels of stimulus intensity. As predicted from the stimulus redundancy formulation, the SFH patients were poorer in premorbid adjustment, were less often paranoid, functioned at a lower level of cognitive efficiency (poor digit symbol and greater absolute error on size estimation), were more chronic, and, in some respects, had size estimation indices of minimal scanning. Contrary to prediction, the SFH group had the strongest and most sustained augmenting response on AER, while the SNFH group shifted from an augmenting to a reducing pattern of response. The relationship between an absence of AER reducing and the presence of cognitive impairment in the SFH group was a major focus of discussion.
Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G
2013-01-01
A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic web based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into the CTCAE in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3.
Benchmarking Gas Path Diagnostic Methods: A Public Approach
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
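A minimal sketch of the detection step such a benchmark exercises is shown below: residuals between snapshot measurements and a nominal engine model are normalized by sensor noise and thresholded. All names and numbers are illustrative; the TTCP problem's MATLAB interfaces are not reproduced here.

```python
import numpy as np

def detect_faults(snapshots, nominal, sigma, threshold=3.0):
    """Flag snapshots whose normalized residuals exceed a z-score threshold.
    A bare-bones stand-in for the detection stage of a gas path diagnostic."""
    residuals = (snapshots - nominal) / sigma        # per-sensor z-scores
    flagged = np.abs(residuals) > threshold
    return residuals, flagged.any(axis=1)            # per-snapshot fault flag

# Three normalized sensors (e.g. spool speed, temperature, pressure), 4 snapshots
nominal = np.array([1.00, 1.00, 1.00])
sigma = np.array([0.01, 0.02, 0.015])               # assumed noise levels
snapshots = np.array([[1.005, 0.99, 1.01],
                      [1.002, 1.01, 1.00],
                      [1.001, 1.09, 1.00],           # biased second sensor
                      [0.998, 1.00, 0.99]])
res, fault = detect_faults(snapshots, nominal, sigma)
print(fault)   # -> [False False  True False]
```

Real solutions must also isolate which component or sensor caused the residual pattern, which is where the benchmark's scoring metrics come in.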
Tay, Jeannie; Thompson, Campbell H.; Luscombe-Marsh, Natalie D.; Noakes, Manny; Buckley, Jonathan D.; Wittert, Gary A.; Brinkworth, Grant D.
2015-01-01
To compare the long-term effects of a very low carbohydrate, high-protein, low saturated fat (LC) diet with a traditional high unrefined carbohydrate, low-fat (HC) diet on markers of renal function in obese adults with type 2 diabetes (T2DM), but without overt kidney disease. One hundred fifteen adults (BMI 34.6 ± 4.3 kg/m2, age 58 ± 7 years, HbA1c 7.3 ± 1.1%, 56 ± 12 mmol/mol, serum creatinine (SCr) 69 ± 15 μmol/L, glomerular filtration rate estimated by the Chronic Kidney Disease Epidemiology Collaboration formula (eGFR 94 ± 12 mL/min/1.73 m2)) were randomized to consume either an LC (14% energy as carbohydrate [CHO < 50 g/day], 28% protein [PRO], 58% fat [<10% saturated fat]) or an HC (53% CHO, 17% PRO, 30% fat [<10% saturated fat]) energy-matched, weight-loss diet combined with supervised exercise training (60 min, 3 day/wk) for 12 months. Body weight, blood pressure, and renal function, assessed by eGFR, estimated creatinine clearance (Cockcroft–Gault, Salazar–Corcoran) and albumin excretion rate (AER), were measured pre- and post-intervention. Both groups achieved similar completion rates (LC 71%, HC 65%) and reductions in weight (mean [95% CI]; −9.3 [−10.6, −8.0] kg) and blood pressure (−6 [−9, −4]/−6 [−8, −5] mmHg), P ≥ 0.18. Protein intake calculated from 24-hour urinary urea was higher in the LC than the HC group (LC 120.1 ± 38.2 g/day, 1.3 g/kg/day; HC 95.8 ± 27.8 g/day, 1 g/kg/day), P < 0.001 diet effect. Changes in SCr (LC 3 [1, 5], HC 1 [−1, 3] μmol/L) and eGFR (LC −4 [−6, −2], HC −2 [−3, 0] mL/min/1.73 m2) did not differ between diets (P = 0.25). AER decreased independently of diet composition (LC −2.4 [−6, 1.2], HC −1.8 [−5.4, 1.8] mg/24 h, P = 0.24); 6 participants (LC 3, HC 3) had moderately elevated AER at baseline (30–300 mg/24 h), which normalized in 4 participants (LC 2, HC 2) after 52 weeks. Compared with a traditional HC weight-loss diet, consumption of an LC high-protein diet does not adversely affect clinical markers of renal function in obese adults with T2DM and no preexisting kidney disease. PMID:26632754
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
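The modal idea that gives the PAC-MAN model its near-1D cost rests on the separable solutions of the 2D Helmholtz equation in polar coordinates; a generic sketch, not the paper's exact notation:

```latex
% Separable exterior solutions of the 2D Helmholtz equation in polar
% coordinates (r, \varphi), with J_n Bessel and H_n^{(1)} outgoing Hankel
% functions:
p(r,\varphi) \;=\; \sum_{n}\bigl[A_n J_n(kr) + B_n H_n^{(1)}(kr)\bigr]\,\Phi_n(\varphi).
% The boundary conditions on the cut-out sector fix the admissible angular
% eigenfunctions \Phi_n, leaving only the radial coefficients A_n, B_n to
% determine; this is why the computational cost is comparable to a 1D problem.
```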
Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2004-01-01
This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved, ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges of accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with comparisons to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.
NASA Astrophysics Data System (ADS)
Yin, Xiang-Chu; Yu, Huai-Zhong; Kukshenko, Victor; Xu, Zhao-Yong; Wu, Zhishen; Li, Min; Peng, Keyin; Elizarov, Surgey; Li, Qi
2004-12-01
In order to verify precursors such as the LURR (Load/Unload Response Ratio) and AER (Accelerating Energy Release) before large earthquakes or macro-fracture in heterogeneous brittle media, four acoustic emission experiments involving large rock specimens under tri-axial stress have been conducted. The specimens were loaded in two ways: monotonically or cyclically. The experimental results confirm that LURR and AER are precursors of macro-fracture in brittle media. A new measure called the state vector has been proposed to describe the damage evolution of loaded rock specimens.
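As a reminder of the quantity being tested, the LURR is commonly defined as a ratio of loading to unloading response rates; a sketch in generic symbols:

```latex
% Load/Unload Response Ratio: with X = \Delta R / \Delta P the response rate
% (change in some response quantity R per change in load P), measured
% separately during loading (+) and unloading (-):
Y \;=\; \frac{X_{+}}{X_{-}} .
% For intact elastic media Y \approx 1; Y rising significantly above 1
% indicates damage accumulation approaching macro-fracture.
```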
Neuron array with plastic synapses and programmable dendrites.
Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma
2013-10-01
We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.
REFSIM Handbook of Variable Names.
1982-11-04
[OCR residue from the handbook's variable-name table; recoverable entries include D2THET, 'missile pitch acceleration in degrees/second**2' (common block /AERO/), and DELA, 'peak magnitude difference at port and starboard (db/m**2)' (common block /CSAS/).]
Planning and Execution for an Autonomous Aerobot
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara A.; Schaffer, Steven R.; Chouinard, Caroline M.
2010-01-01
The Aerial Onboard Autonomous Science Investigation System (AerOASIS) system provides autonomous planning and execution capabilities for aerial vehicles (see figure). The system is capable of generating high-quality operations plans that integrate observation requests from ground planning teams, as well as opportunistic science events detected onboard the vehicle while respecting mission and resource constraints. AerOASIS allows an airborne planetary exploration vehicle to summarize and prioritize the most scientifically relevant data; identify and select high-value science sites for additional investigation; and dynamically plan, schedule, and monitor the various science activities being performed, even during extended communications blackout periods with Earth.
Event generators for address event representation transmitters
NASA Astrophysics Data System (ADS)
Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Linares Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high speed digital communication circuits (with nano-second timings), synaptic neural connections can be time multiplexed, while neural activity signals (with milli-second timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. In a typical AER transmitter chip, there is an array of neurons that generate events. They send events to a peripheral circuit (call it the "AER Generator") that transforms those events into neuron coordinates (addresses), which are put sequentially on an interchip high speed digital bus. This bus includes a parallel multi-bit address word plus Rqst (request) and Ack (acknowledge) handshaking signals for asynchronous data exchange. Two main approaches have been published in the literature for implementing such "AER Generator" circuits, differing in the way they handle event collisions coming from the array of neurons. One approach is based on detecting and discarding collisions, while the other incorporates arbitration for sequencing colliding events. The first approach is simpler and faster, while the second is able to handle much higher event traffic. In this article we concentrate on the second, arbiter-based approach. Boahen has published several techniques for implementing and improving the arbiter-based approach. Originally, he proposed an arbitration scheme by rows, followed by a column arbitration. In this scheme, while one neuron was selected by the arbiters to transmit its event off chip, the rest of the neurons in the array were frozen and could not transmit any further events during this time window. This limited the maximum transmission speed. In order to improve this speed, Boahen proposed an improved 'burst mode' scheme. In this scheme, after the row arbitration, a complete row of events is pipelined out of the array and arbitrated out of the chip at higher speed. During this single-row event arbitration, the array is free to generate new events and communicate them to the row arbiter in a pipelined mode. This scheme significantly improves the maximum event transmission speed, especially for high-traffic situations where speed is most critical. We have analyzed and studied this approach and have detected some shortcomings in the circuits reported by Boahen, which may produce erroneous behavior under certain statistical conditions. The present paper proposes improvements to overcome such situations. The improved "AER Generator" has been implemented in an AER transmitter system.
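To make the throughput argument concrete, the toy simulation below models burst-mode readout: a row arbiter grants the fullest row, then all pending events in that row are pipelined off chip at a fast per-event rate while the array keeps spiking. It is an idealized sketch with assumed timings, not Boahen's circuit nor the improved generator proposed in the paper.

```python
import heapq, random

def burst_mode_throughput(rows=64, cols=64, rate=1e6, t_row=40e-9,
                          t_event=10e-9, horizon=1e-3, seed=1):
    """Idealized burst-mode AER readout: row arbitration costs t_row, then
    each pending event in the granted row leaves in t_event. Poisson spike
    trains per neuron; all timings and sizes are assumptions for the toy."""
    rng = random.Random(seed)
    per_neuron = rate / (rows * cols)
    events = []                                  # (arrival time, row) heap
    for r in range(rows):
        for _ in range(cols):
            t = rng.expovariate(per_neuron)
            while t < horizon:
                heapq.heappush(events, (t, r))
                t += rng.expovariate(per_neuron)
    pending = [0] * rows                         # events latched per row
    sent, t_now = 0, 0.0
    while events or any(pending):
        while events and events[0][0] <= t_now:  # latch everything arrived
            _, r = heapq.heappop(events)
            pending[r] += 1
        if not any(pending):
            t_now = events[0][0]                 # idle until the next spike
            continue
        r = max(range(rows), key=lambda i: pending[i])
        t_now += t_row + pending[r] * t_event    # arbitrate row, burst it out
        sent += pending[r]
        pending[r] = 0
    return sent / t_now

print(f"~{burst_mode_throughput():.2e} events/s sustained")
```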
Implementing Cognitive Strategy Instruction across the School: The Benchmark Manual for Teachers.
ERIC Educational Resources Information Center
Gaskins, Irene; Elliot, Thorne
Improving reading instruction has been the primary focus at the Benchmark School in Media, Pennsylvania. This book describes the various phases of Benchmark's development of a program to create strategic learners, thinkers, and problem solvers across the curriculum. The goal is to provide teachers and administrators with a handbook that can be…
Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.
Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan
2017-09-01
In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.
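In generic notation, the unified continuum idea described above can be sketched as a single set of balance laws whose constitutive law is switched by the phase marker (a schematic, not Unicorn/FEniCS's exact discretized form):

```latex
% One set of balance laws on the whole domain, with the constitutive law
% selected by a phase marker \theta (\theta = 1 in the fluid, 0 in the solid):
\rho\bigl(\partial_t u + (u\cdot\nabla)u\bigr) = \nabla\cdot\sigma + f,
\qquad \nabla\cdot u = 0,
\qquad \partial_t\theta + u\cdot\nabla\theta = 0,
% with \sigma = \theta\,\sigma_{\mathrm{fluid}} + (1-\theta)\,\sigma_{\mathrm{solid}};
% the mesh moves with the structure so the discontinuity in \theta tracks
% the fluid-structure interface.
```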
A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems
NASA Astrophysics Data System (ADS)
Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.
2012-04-01
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
Effects of High-Intensity Interval Exercise Training on Skeletal Myopathy of Chronic Heart Failure.
Tzanis, Georgios; Philippou, Anastassios; Karatzanos, Eleftherios; Dimopoulos, Stavros; Kaldara, Elisavet; Nana, Emmeleia; Pitsolis, Theodoros; Rontogianni, Dimitra; Koutsilieris, Michael; Nanas, Serafim
2017-01-01
It remains controversial which type of exercise elicits optimum adaptations on skeletal myopathy of heart failure (HF). Our aim was to evaluate the effect of high-intensity interval training (HIIT), with or without the addition of strength training, on skeletal muscle of HF patients. Thirteen male HF patients (age 51 ± 13 years, body mass index 27 ± 4 kg/m 2 ) participated in either an HIIT (AER) or an HIIT combined with strength training (COM) 3-month program. Biopsy samples were obtained from the vastus lateralis. Analyses were performed on muscle fiber type, cross-section area (CSA), capillary density, and mRNA expression of insulin-like growth factor (IGF) 1 isoforms (ie, IGF-1Ea, IGF-1Eb, IGF-1Ec), type-1 receptor (IGF-1R), and binding protein 3 (IGFBP-3). Increased expression of IGF-1Ea, IGF-1Eb, IGF-1Ec, and IGFBP-3 transcripts was found (1.7 ± 0.8, 1.5 ± 0.8, 2.0 ± 1.32.4 ± 1.4 fold changes, respectively; P < .05). Type I fibers increased by 21% (42 ± 10% to 51 ± 7%; P < .001) and capillary/fiber ratio increased by 24% (1.27 ± 0.22 to 1.57 ± 0.41; P = .005) in both groups as a whole. Fibers' mean CSA increased by 10% in total, but the increase in type I fibers' CSA was greater after AER than COM (15% vs 6%; P < .05). The increased CSA correlated with the increased expression of IGF-1Ea and IGF-1Εb. HIIT reverses skeletal myopathy of HF patients, with the adaptive responses of the IGF-1 bioregulation system possibly contributing to these effects. AER program seemed to be superior to COM to induce muscle hypertrophy. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
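For readers unfamiliar with the benchmark, the sketch below simulates Carpenter-style shallow-lake dynamics with a stochastic inflow and counts years spent above the irreversibility threshold. The recurrence is the commonly used form of the lake model, and all parameter values are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import brentq

def simulate_lake(a, b=0.42, q=2.0, mu=0.03, sigma=0.3, seed=0):
    """Shallow-lake phosphorus recurrence commonly used in this benchmark:
       X[t+1] = X[t] + a[t] + X[t]^q/(1 + X[t]^q) - b*X[t] + eps[t],
    with a[t] the town's controlled loading, eps[t] a lognormal natural
    (non-point source) inflow, and b the phosphorus removal rate.
    Parameter values here are illustrative."""
    rng = np.random.default_rng(seed)
    eps = rng.lognormal(np.log(mu), sigma, len(a))
    X = np.zeros(len(a) + 1)
    for t, load in enumerate(a):
        X[t + 1] = X[t] + load + X[t]**q / (1 + X[t]**q) - b * X[t] + eps[t]
    return X

b, q = 0.42, 2.0
# Eutrophication threshold: the unstable root of b*x = x^q / (1 + x^q)
x_crit = brentq(lambda x: x**q / (1 + x**q) - b * x, 0.05, 1.0)
X = simulate_lake(a=np.full(100, 0.02), b=b, q=q)
print(f"threshold {x_crit:.3f}; years above it: {(X > x_crit).sum()} / 100")
```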
Dynamic vehicle routing with time windows in theory and practice.
Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael
2017-01-01
The vehicle routing problem is a classical combinatorial optimization problem. This work concerns a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications the demands often change during operation time: new orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled on the daily routing procedure of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm obtains much better solutions on the academic benchmark problems and can also be integrated into a real-world environment.
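The core online operation in such a solver, repairing the current plan when a new order is revealed, can be sketched as cheapest feasible insertion under time windows. This is a bare-bones illustration; the MACS algorithm in the paper couples many such moves with pheromone-guided construction and local search, and all coordinates and windows below are made up.

```python
from math import hypot

def route_feasible(route, coords, windows, service=0.0, speed=1.0):
    """Check time-window feasibility of a route starting at the depot (index 0)."""
    t, prev = 0.0, 0
    for stop in route:
        t += hypot(coords[stop][0] - coords[prev][0],
                   coords[stop][1] - coords[prev][1]) / speed
        early, late = windows[stop]
        if t > late:
            return False
        t = max(t, early) + service      # wait for the window, then serve
        prev = stop
    return True

def insert_dynamic_order(routes, new, coords, windows):
    """Cheapest feasible insertion of a dynamically revealed order `new`."""
    def length(route):
        pts = [0] + route + [0]
        return sum(hypot(coords[a][0] - coords[b][0],
                         coords[a][1] - coords[b][1])
                   for a, b in zip(pts, pts[1:]))
    best = None
    for ri, route in enumerate(routes):
        for pos in range(len(route) + 1):
            cand = route[:pos] + [new] + route[pos:]
            if route_feasible(cand, coords, windows):
                delta = length(cand) - length(route)
                if best is None or delta < best[0]:
                    best = (delta, ri, cand)
    if best is None:
        return False                      # no feasible slot in any route
    routes[best[1]] = best[2]
    return True

coords = {0: (0, 0), 1: (1, 0), 2: (2, 1), 3: (1, 2)}   # 0 = depot
windows = {0: (0, 99), 1: (0, 5), 2: (0, 8), 3: (2, 9)}
routes = [[1, 2]]                                        # current plan
print(insert_dynamic_order(routes, 3, coords, windows), routes)
```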
Issues in Benchmark Metric Selection
NASA Astrophysics Data System (ADS)
Crolotte, Alain
It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
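The debate is easy to reproduce: the geometric mean rewards ratio-balanced behavior, while the arithmetic mean rewards total work. A tiny comparison (query times invented for the illustration):

```python
import math

# Two systems run the same three queries (seconds). System B tanks on one
# query but shines on another.
a = [10.0, 10.0, 10.0]
b = [1.0, 10.0, 100.0]

arith = lambda xs: sum(xs) / len(xs)
geom = lambda xs: math.exp(sum(map(math.log, xs)) / len(xs))

print(arith(a), arith(b))   # -> 10.0 vs 37.0: A is far better on total work
print(geom(a), geom(b))     # -> both ~10.0: the geometric mean calls it a tie
```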
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
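In generic notation, the Response Spectrum Method computations these benchmarks verify take the following form (a sketch; the report's own symbols may differ):

```latex
% For mode i with natural frequency f_i, damping \zeta_i, participation
% factor \Gamma_i and mode shape \phi_i, the peak modal displacement response
% read from the design spectrum S_a is
R_i = \Gamma_i \,\phi_i\, \frac{S_a(f_i,\zeta_i)}{(2\pi f_i)^2},
% and for well-separated modes the modal peaks are combined by the
% square root of the sum of squares (SRSS):
R = \sqrt{\textstyle\sum_i R_i^{\,2}} .
```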
Bacchi, Elisabetta; Negri, Carlo; Targher, Giovanni; Faccioli, Niccolò; Lanza, Massimo; Zoppini, Giacomo; Zanolin, Elisabetta; Schena, Federico; Bonora, Enzo; Moghetti, Paolo
2013-10-01
Although lifestyle interventions are considered the first-line therapy for nonalcoholic fatty liver disease (NAFLD), which is extremely common in people with type 2 diabetes, no intervention studies have compared the effects of aerobic (AER) or resistance (RES) training on hepatic fat content in type 2 diabetic subjects with NAFLD. In this randomized controlled trial, we compared the 4-month effects of either AER or RES training on insulin sensitivity (by hyperinsulinemic euglycemic clamp), body composition (by dual-energy X-ray absorptiometry), as well as hepatic fat content and visceral (VAT), superficial (SSAT), and deep (DSAT) subcutaneous abdominal adipose tissue (all quantified by an in-opposed-phase magnetic resonance imaging technique) in 31 sedentary adults with type 2 diabetes and NAFLD. After training, hepatic fat content was markedly reduced (P < 0.001), to a similar extent, in both the AER and the RES training groups (mean relative reduction from baseline [95% confidence interval] -32.8% [-58.20 to -7.52] versus -25.9% [-50.92 to -0.94], respectively). Additionally, hepatic steatosis (defined as hepatic fat content >5.56%) disappeared in about one-quarter of the patients in each intervention group (23.1% in the AER group and 23.5% in the RES group). Insulin sensitivity during euglycemic clamp was increased, whereas total body fat mass, VAT, SSAT, and hemoglobin A1c were reduced comparably in both intervention groups. This is the first randomized controlled study to demonstrate that resistance training and aerobic training are equally effective in reducing hepatic fat content among type 2 diabetic patients with NAFLD. Copyright © 2013 by the American Association for the Study of Liver Diseases.
Sakaeda, Toshiyuki; Kadoyama, Kaori; Okuno, Yasushi
2011-01-01
Adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) were reviewed to assess the muscular and renal adverse events induced by the administration of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) and to attempt to determine the rank-order of the association. After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving pravastatin, simvastatin, atorvastatin, or rosuvastatin were analyzed. Authorized pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. The analysis focused on myalgia, rhabdomyolysis, and an increase in creatine phosphokinase level as the muscular adverse events, and on acute renal failure, non-acute renal failure, and an increase in blood creatinine level as the renal adverse events. Based on 1,644,220 AERs from 2004 to 2009, signals were detected for all 4 statins with respect to myalgia, rhabdomyolysis, and an increase in creatine phosphokinase level, but these signals were stronger for rosuvastatin than for pravastatin and atorvastatin. Signals were also detected for acute renal failure, though in the case of atorvastatin the association was marginal; furthermore, no signal was detected for non-acute renal failure or for an increase in blood creatinine level. Data mining of the FDA's adverse event reporting system, AERS, is useful for examining statin-associated muscular and renal adverse events. The data strongly suggest the necessity of well-organized clinical studies of statin-associated adverse events.
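Two of the disproportionality measures named above, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), can be computed from the standard 2x2 contingency table of spontaneous reports. The sketch below uses invented counts, and the screening thresholds in the final comment are common conventions rather than this study's criteria.

```python
# 2x2 table:              event of interest   all other events
#   drug of interest             a                   b
#   all other drugs              c                   d
import math

def prr(a, b, c, d):
    """Proportional reporting ratio."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d, z=1.96):
    """Reporting odds ratio with ~95% CI (normal approximation on log scale)."""
    est = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return est, est * math.exp(-z * se), est * math.exp(z * se)

a, b, c, d = 120, 4880, 900, 394100          # invented report counts
print(f"PRR = {prr(a, b, c, d):.2f}")
est, lo, hi = ror(a, b, c, d)
print(f"ROR = {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# A common screening rule flags a signal when, e.g., PRR >= 2 with at least
# 3 cases, or when the lower CI bound of the ROR exceeds 1.
```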
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivatives of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, etc. There has also been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
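A behavioural sketch of such a convolution pixel array is given below: each incoming address event adds the corresponding kernel weight to a per-pixel integrator, and a pixel emits an output event when a fixed threshold is reached. The kernel, threshold, and array size are illustrative assumptions, not the chip's actual parameters.

```python
import numpy as np

class AERConvolutionArray:
    def __init__(self, shape, kernel, threshold=8):
        self.acc = np.zeros(shape, dtype=int)   # per-pixel integrators
        self.kernel = kernel
        self.threshold = threshold

    def on_event(self, x, y):
        """Process one input event at address (x, y); return output events."""
        kh, kw = self.kernel.shape
        out = []
        for dy in range(kh):
            for dx in range(kw):
                px, py = x + dx - kw // 2, y + dy - kh // 2
                if 0 <= px < self.acc.shape[1] and 0 <= py < self.acc.shape[0]:
                    self.acc[py, px] += self.kernel[dy, dx]
                    if self.acc[py, px] >= self.threshold:
                        self.acc[py, px] = 0      # reset and fire
                        out.append((px, py))
        return out

arr = AERConvolutionArray((16, 16), kernel=np.ones((3, 3), dtype=int))
for _ in range(10):                               # a burst of events at (8, 8)
    fired = arr.on_event(8, 8)
    if fired:
        print("output events:", fired)
```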
Robertson, Boakai K; Harden, Carol; Selvaraju, Suresh B; Pradhan, Suman; Yadav, Jagjit S
2014-01-01
Aeromonas is ubiquitous in aquatic environments and has been associated with a number of extra-gastrointestinal and gastrointestinal illnesses. This warrants monitoring of raw and processed water sources for pathogenic and toxigenic species of this human pathogen. In this study, a total of 17 different water samples [9 raw and 8 treated samples, including 4 basin water (partial sand filtration) and 4 finished water samples] were screened for Aeromonas using selective culturing and a genus-specific real-time quantitative PCR assay. The selective culturing yielded Aeromonas counts ranging from 0 to 2 × 10³ CFU/ml and 15 Aeromonas isolates from both raw and treated water samples. The qPCR analysis indicated the presence of a considerable nonculturable population (3.4 × 10¹ to 2.4 × 10⁴ cells/ml) of Aeromonas in drinking water samples. Virulence potential of the Aeromonas isolates was assessed by multiplex/singleplex PCR-based profiling of the hemolysin and enterotoxin genes, viz. the cytotoxic heat-labile enterotoxin (act), heat-labile cytotonic enterotoxin (alt), heat-stable cytotonic enterotoxin (ast), and aerolysin (aerA) genes. The water isolates yielded five distinct toxigenicity profiles, viz. act, alt, act+alt, aerA+alt, and aerA+alt+act. The alt gene showed the highest frequency of occurrence (40%), followed by the aerA (20%), act (13%), and ast (0%) genes. Taken together, the study demonstrated the occurrence of a considerable population of nonculturable Aeromonads in water and the prevalence of toxigenic Aeromonas spp. potentially pathogenic to humans. This emphasizes the importance of routine monitoring of both source and drinking water for this human pathogen and the role of the developed molecular approaches in improving the Aeromonas monitoring scheme for water. PMID:24949108
Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking
ERIC Educational Resources Information Center
Proulx, Roland
2007-01-01
The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…
Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A
2015-12-08
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences of approximately 0.5% in higher-resolution solutions, and consistent results between linear, quadratic and cubic finite elements as well as between different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.
1993-09-01
and (6) assistance with special problems by the purchasing department (Cavinato, 1987:10). Szilagyi and Wallace state that an effective performance... Naval Research, November 1975. Szilagyi, Andrew D. Jr. and Marc J. Wallace, Jr. Organizational Behavior and Performance, Santa Monica CA, Goodyear... evaluation systems will the manager be able to achieve the bottom line--organizational effectiveness (Szilagyi 1980:457). Benefits of Evaluating Performance
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches, which are designed to provide basic decision support for a broad range of planning problems but are inadequate for optimization under specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emissions and the social dimension of varying operating rates and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emissions and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for future research perspectives arising from it.
High-Accuracy Finite Element Method: Benchmark Calculations
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new high-accuracy finite element scheme with simplex elements for solving elliptic boundary-value problems and show its efficiency on benchmark solutions of the Helmholtz equation for a triangular membrane and a hypercube.
Real-time classification and sensor fusion with a spiking deep belief network.
O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael
2013-01-01
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.
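The following toy sketch conveys the general idea of reusing offline-trained weights in an event-driven network of integrate-and-fire units. The Siegert-based rate calibration that the paper relies on is omitted here, and the weights and layer sizes are random placeholders rather than a trained DBN.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.3, (100, 784))     # "trained" weights, here random stand-ins
W2 = rng.normal(0, 0.3, (10, 100))

def run_spiking(pixel_rates, T=200, thresh=1.0):
    """Event-driven forward pass: Poisson-like input spikes, IF units."""
    v1, v2 = np.zeros(100), np.zeros(10)
    counts = np.zeros(10)
    for _ in range(T):                            # discrete 1 ms ticks
        s_in = rng.random(784) < pixel_rates      # input spikes this tick
        v1 += W1 @ s_in
        s1 = v1 >= thresh; v1[s1] = 0.0           # fire and reset
        v2 += W2 @ s1
        s2 = v2 >= thresh; v2[s2] = 0.0
        counts += s2                              # output spike counts
    return counts.argmax(), counts

digit, counts = run_spiking(rng.random(784) * 0.1)
print("predicted class:", digit)
```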
Chaturvedi, N; Fuller, J H; Pokras, F; Rottiers, R; Papazoglou, N; Aiello, L P
2001-04-01
To determine whether circulating plasma vascular endothelial growth factor (VEGF) is elevated in the presence of diabetic microvascular complications, and whether the impact of angiotensin-converting enzyme (ACE) inhibitors on these complications can be accounted for by changes in circulating VEGF. Samples (299/354 of those with retinal photographs) from the EUCLID placebo-controlled clinical trial of the ACE inhibitor lisinopril in mainly normoalbuminuric non-hypertensive Type 1 diabetic patients were used. Albumin excretion rate (AER) was measured 6 monthly. Geometric mean VEGF levels by baseline retinopathy status, change in retinopathy over 2 years, and by treatment with lisinopril were calculated. No significant correlation was observed between VEGF at baseline and age, diabetes duration, glycaemic control, blood pressure, smoking, fibrinogen and von Willebrand factor. Mean VEGF concentration at baseline was 11.5 (95% confidence interval 6.0--27.9) pg/ml in those without retinopathy, 12.9 (6.0--38.9) pg/ml in those with non-proliferative retinopathy, and 16.1 (8.1--33.5) pg/ml in those with proliferative retinopathy (P = 0.06 for trend). Baseline VEGF was 15.2 pg/ml in those who progressed by at least one level of retinopathy by 2 years compared to 11.8 pg/ml in those who did not (P = 0.3). VEGF levels were not altered by lisinopril treatment. Results were similar for AER. Circulating plasma VEGF concentration is not strongly correlated with risk factor status or microvascular disease in Type 1 diabetes, nor is it affected by ACE inhibition. Changes in circulating VEGF cannot account for the beneficial effect of ACE inhibition on retinopathy.
de Campos Mello, Fernando; de Moraes Bertuzzi, Rômulo Cássio; Grangeiro, Patricia Moreno; Franchini, Emerson
2009-11-01
This study investigated the energy system contributions of rowers in three different conditions: rowing on an ergometer without and with the slide, and rowing in the water. For this purpose, eight rowers performed 2,000 m race simulations in each of the situations defined above. The fractions of the aerobic (W(AER)), anaerobic alactic (W(PCr)) and anaerobic lactic (W([La-])) systems were calculated based on the oxygen uptake, the fast component of excess post-exercise oxygen uptake, and changes in net blood lactate, respectively. In the water, the metabolic work was significantly higher [851 (82) kJ] than during both ergometer [674 (60) kJ] and ergometer-with-slide [663 (65) kJ] rowing (P ≤ 0.05). The time in the water [515 (11) s] was higher (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: in the water, 99 (9) kJ·min⁻¹; ergometer without the slide, 99.6 (9) kJ·min⁻¹; and ergometer with the slide, 100.2 (9.6) kJ·min⁻¹. The respective contributions of the W(AER), W(PCr) and W([La-]) systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; and ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. VO2, HR and lactate were not different among conditions. These results seem to indicate that the ergometer braking system simulates conditions of a bigger and faster boat and not a single scull. Probably, a 2,500 m test should be used to properly simulate an in-the-water single-scull race.
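A hedged sketch of how such energy-system fractions are commonly estimated from these three measurements follows. The constants (20.9 kJ per litre of O2, 3 ml O2 per kg per mM of net lactate) are conventional values from the literature, and the sample inputs are invented, not the rowers' data.

```python
O2_ENERGY_KJ_PER_L = 20.9      # energy equivalent of oxygen (conventional)
O2_EQ_LACTATE = 3.0            # ml O2 per kg per mM net lactate (conventional)

def energy_fractions(vo2_exercise_l, epoc_fast_l, delta_lactate_mm, mass_kg):
    """Aerobic, alactic and lactic contributions in kJ and as percentages."""
    w_aer = vo2_exercise_l * O2_ENERGY_KJ_PER_L
    w_pcr = epoc_fast_l * O2_ENERGY_KJ_PER_L
    w_la = (delta_lactate_mm * O2_EQ_LACTATE * mass_kg / 1000.0) * O2_ENERGY_KJ_PER_L
    total = w_aer + w_pcr + w_la
    return {k: (v, 100 * v / total) for k, v in
            (("W(AER)", w_aer), ("W(PCr)", w_pcr), ("W([La-])", w_la))}

# invented example: 33 L O2 during exercise, 2.8 L fast EPOC, 8 mM, 80 kg
for name, (kj, pct) in energy_fractions(33.0, 2.8, 8.0, 80.0).items():
    print(f"{name}: {kj:6.1f} kJ ({pct:4.1f}%)")
```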
Pramanik, Biplob Kumar; Pramanik, Sagor Kumar; Sarker, Dipok Chandra; Suja, Fatihah
2017-06-01
The effects of ozonation, anion exchange resin (AER) and UV/H₂O₂ treatment were investigated as pre-treatments to control organic fouling (OF) of an ultrafiltration membrane in the treatment of drinking water. It was found that high molecular weight (MW) organics such as protein and polysaccharide substances were chiefly responsible for reversible fouling, which contributed 90% of total fouling. The flux decline rate increased with successive filtration cycles due to deposition of protein content over time. All pre-treatments reduced the foulants of the ultrafiltration membrane and thereby improved flux, with a greater improvement by UV/H₂O₂ (61%) than by ozonation (43%), which in turn was greater than AER (23%) treatment. This was likely due to the effective removal/breakdown of high MW organic content. AER gave greater removal of biofouling potential components (such as biodegradable dissolved organic carbon and assimilable organic carbon) than UV/H₂O₂ and ozonation. Overall, this study demonstrated the potential of pre-treatments for reducing OF of ultrafiltration in the treatment of drinking water. Copyright © 2017 Elsevier Ltd. All rights reserved.
Embryology meets molecular biology: Deciphering the apical ectodermal ridge.
Verheyden, Jamie M; Sun, Xin
2017-09-15
More than sixty years ago, while studying feather tracts on the shoulder of the chick embryo, Dr. John Saunders used Nile Blue dye to stain the tissue. There, he noticed a darkly stained line of cells that neatly rims the tip of the growing limb bud. Rather than ignoring this observation, he followed it up by removing this tissue and found that doing so led to a striking truncation of the limb skeleton. This landmark experiment marks the serendipitous discovery of the apical ectodermal ridge (AER), the quintessential embryonic structure that drives the outgrowth of the limb. Dr. Saunders continued to lead the limb field for the next fifty years, not just through his own work, but also by inspiring the next generation of researchers through his infectious love of science. Together, he and those who followed ushered in the discovery of fibroblast growth factor (FGF) as the AER molecule. The seamless marriage of embryology and molecular biology that led to the decoding of the AER serves as a shining example of how discoveries are made for the rest of the developmental biology field. Copyright © 2017 Elsevier Inc. All rights reserved.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.
2001-01-01
We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.
1983-10-01
a departure which will invariably be late, sometimes 4 to 6 hours late. Other, more minor difficulties were encountered due to a lack of facility in... Lake and then through the connecting canals and streams to these other sites. Hence, the releases at Lake Alice probably played a minor role as a... philoxeroides (Mart.) Griseb. Cabomba Cabomba caroliniana Gray Chara Chara spp. Duckweed Lemna spp. Hydrilla Hydrilla verticillata Royle Hygrophila Hygrophila
Performance of Multi-chaotic PSO on a shifted benchmark functions set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper, the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the Multi-chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
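The "shifted" construction is simple to state in code: the optimum of a standard test function is moved to a hidden nonzero location, so algorithms biased toward the origin are exposed. The sketch below uses the sphere function; the dimension and shift range are illustrative, not the paper's exact settings.

```python
import numpy as np

def shifted_sphere(x, o):
    """f(x) = sum((x - o)^2); global minimum f(o) = 0."""
    x, o = np.asarray(x), np.asarray(o)
    return float(np.sum((x - o) ** 2))

rng = np.random.default_rng(1)
dim = 10
shift = rng.uniform(-80, 80, dim)             # hidden optimum location
print(shifted_sphere(np.zeros(dim), shift))   # the origin is no longer optimal
print(shifted_sphere(shift, shift))           # 0.0 at the shifted optimum
# A time-variant version re-draws or drifts `shift` every few generations.
```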
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
Bin packing problem solution through a deterministic weighted finite automaton
NASA Astrophysics Data System (ADS)
Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.
2016-06-01
In this article, a solution of the one-dimensional Bin Packing problem through a deterministic weighted finite automaton is presented. The construction of the automaton and its application to three different instances are presented: one synthetic instance and two benchmarks, N1C1W1_A.BPP from data set Set_1 and BPP13.BPP from hard28. The optimal solution is obtained for the synthetic instance. On the first benchmark, the solution obtained uses one container more than the ideal number of containers, and on the second benchmark the solution uses two containers more than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
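For orientation on what "one or two containers more than the ideal number" means, the sketch below computes the continuous lower bound and a classic first-fit-decreasing baseline on an invented instance. This is not the paper's weighted-automaton method, only a familiar reference point.

```python
import math

def first_fit_decreasing(items, capacity):
    """Place each item (largest first) into the first bin where it fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [42, 63, 67, 57, 5, 52, 48, 35, 28, 60, 38, 5]   # invented instance
capacity = 100
ideal = math.ceil(sum(items) / capacity)     # continuous lower bound
bins = first_fit_decreasing(items, capacity)
print(f"ideal >= {ideal} bins; FFD used {len(bins)} bins")
```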
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation
NASA Astrophysics Data System (ADS)
MacNish, Cara
2007-12-01
Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
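One simple way to build a recursively self-similar landscape of the kind described above is to sum rescaled copies of a base function with random phases, so that structure repeats at every zoom level and the optimum is not anchored to the origin or the axes. The construction below is illustrative, not the paper's exact generator.

```python
import numpy as np

def fractal_landscape(x, depth=6, gain=0.5, freq=3.0, seed=7):
    """f(x) = sum_k gain^k * cos(freq^k * x + phi_k); the random phases phi_k
    keep the optimum away from origin-biased guesses."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, depth)
    x = np.asarray(x, dtype=float)
    f = np.zeros_like(x)
    for k in range(depth):
        f += gain**k * np.cos(freq**k * x + phases[k])
    return f

x = np.linspace(-2.0, 2.0, 9)
print(np.round(fractal_landscape(x), 3))
# Each level adds finer, lower-amplitude structure, so the landscape looks
# statistically similar at every zoom level, avoiding biases such as optima
# placed at the origin or aligned with coordinate axes.
```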
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhabela, P.; Han, J.; Tyobeka, B.
2006-07-01
The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity, both of the tool itself and of a specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks proposed by the workshop participants had to be relevant for environmental or geo-engineering applications (the latter mostly related to radioactive waste disposal issues); benchmarks defined for purely mathematical reasons were excluded. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The subproblems typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, at least three codes had to be involved in each benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of this type of model for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
ogs6 - a new concept for porous-fractured media simulations
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf
2015-04-01
OpenGeoSys (OGS) is a scientific open-source initiative for the numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS targets mainly applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has also recently been applied successfully to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious: computational efficiency was limited, and the code structure had become too complicated for further efficient development. OGS-5 was designed for object-oriented FEM applications. However, in many multi-field problems a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome existing bottlenecks. The paradigms for ogs6 are: flexibility of numerical schemes (FEM/FVM/FDM), computational efficiency (PetaScale ready), and developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Use of a linear algebra library (Eigen3) for the mathematical operations, together with the ISO C++11 standard, increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template metaprogramming code used for compile-time optimizations more compact. We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review options, improves code quality and the development process in general. The continuous testing procedure for the benchmarks as established for OGS-5 is maintained. Additionally, unit testing, automatically triggered by any code change, is executed by two continuous integration frameworks (Jenkins CI, Travis CI) which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To further improve testing, XML-based file input formats are introduced, helping with automatic validation of user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. Next steps extend the framework to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp.
http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.
Benchmark problems in computational aeroacoustics
NASA Technical Reports Server (NTRS)
Porter-Locklear, Freda
1994-01-01
A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code on these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. With freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with the inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).
Revisiting Yasinsky and Henry's benchmark using modern nodal codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Becker, M.W.
1995-12-31
The numerical experiments analyzed by Yasinsky and Henry are quite trivial by today's standards because they used the finite difference code WIGLE for their benchmark. Also, this problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE.
Benchmarks for target tracking
NASA Astrophysics Data System (ADS)
Dunham, Darin T.; West, Philip D.
2011-09-01
The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
Multi-Constituent Simulation of Thrombus Deposition
NASA Astrophysics Data System (ADS)
Wu, Wei-Tao; Jamiolkowski, Megan A.; Wagner, William R.; Aubry, Nadine; Massoudi, Mehrdad; Antaki, James F.
2017-02-01
In this paper, we present a spatio-temporal mathematical model for simulating the formation and growth of a thrombus. Blood is treated as a multi-constituent mixture composed of a linear fluid phase and a thrombus (solid) phase. The transport and reactions of 10 chemical and biological species are incorporated using a system of coupled convection-reaction-diffusion (CRD) equations to represent three processes in thrombus formation: initiation, propagation and stabilization. Computational fluid dynamics (CFD) simulations using the libraries of OpenFOAM were performed for two illustrative benchmark problems: in vivo thrombus growth in an injured blood vessel and in vitro thrombus deposition in micro-channels (1.5 mm × 1.6 mm × 0.1 mm) with small crevices (125 μm × 75 μm and 125 μm × 137 μm). For both problems, the simulated thrombus deposition agreed very well with experimental observations, both spatially and temporally. Based on the success with these two benchmark problems, which have very different flow conditions and biological environments, we believe that the current model will provide useful insight into the genesis of thrombosis in blood-wetted devices, and provide a tool for the design of less thrombogenic devices.
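As a minimal illustration of one of the coupled CRD equations the model solves (here in 1-D with an explicit finite-difference update rather than the paper's OpenFOAM discretization), consider a single species undergoing upwind convection, diffusion, and first-order consumption; all parameter values are invented.

```python
import numpy as np

def crd_step(c, u, D, k, dx, dt):
    """One explicit step of dc/dt + u dc/dx = D d2c/dx2 - k c.
    Periodic boundaries via np.roll keep the sketch short."""
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind (u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # diffusion
    return c + dt * (adv + dif - k * c)

nx, dx = 200, 1e-5                         # 2 mm domain, 10 um cells (invented)
c = np.zeros(nx); c[:20] = 1.0             # inlet bolus of a reacting species
u, D, k = 1e-3, 1e-9, 0.5                  # m/s, m^2/s, 1/s (invented)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))    # respect CFL and diffusion limits
for _ in range(500):
    c = crd_step(c, u, D, k, dx, dt)
print(f"peak concentration after {500 * dt:.3f} s: {c.max():.3f}")
```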
Yue, Zhihua; Shi, Jinhai; Jiang, Pengli; Sun, He
2014-11-01
Little is known about the effects of drug-drug interactions between valacyclovir and non-steroidal anti-inflammatory drugs (NSAIDs). In this study, we analysed the adverse event 'acute kidney injury (AKI)' resulting from a possible interaction between loxoprofen (a non-selective NSAID) and valacyclovir in reports received by the FDA Adverse Event Reporting System (AERS) database between January 2004 and June 2012. Adverse event reports of elderly patients aged ≥65 years were included in the study. Exposure categories were divided into three index groups (only valacyclovir used, only loxoprofen used, and both drugs used concomitantly) and a reference group (neither valacyclovir nor loxoprofen used). Case/non-case AKI reports associated with these drugs were recorded and analysed by the reporting odds ratio (ROR). In total, 447,002 reports were included in the study. The ROR for AKI, adjusted for year of reporting, age and sex, was 4.6 (95%CI: 4.1-5.2) for elderly patients who used only valacyclovir and 1.4 (95%CI: 1.2-1.6) for those who used only loxoprofen, compared with elderly patients who used neither drug, while the adjusted ROR was 26.0 (95%CI: 19.2-35.3) when both drugs were used concomitantly. Case reports in AERS suggest that interactions between valacyclovir and loxoprofen resulting in AKI may occur; this association needs to be analysed in more detail by other methods to determine the true strength of the relationship. Copyright © 2014 John Wiley & Sons, Ltd.
Chen, Shao-Yu; Dehart, Deborah B; Sulik, Kathleen K
2004-08-01
Based on previous in vitro studies that have illustrated prevention of ethanol-induced cell death by antioxidants, using an in vivo model, we have tested the anti-teratogenic potential of a potent synthetic superoxide dismutase plus catalase mimetic, EUK-134. The developing limb of C57BL/6J mice, which is sensitive to ethanol-induced reduction defects, served as the model system. On their ninth day of pregnancy, C57BL/6J mice were administered ethanol (two intraperitoneal doses of 2.9 g/kg given 4 h apart) alone or in combination with EUK-134 (two doses of 10 mg/kg). Pregnant control mice were similarly treated with either vehicle or EUK-134, alone. Within 15 h of the initial ethanol exposure, excessive apoptotic cell death was observed in the apical ectodermal ridge (AER) of the newly forming forelimb buds. Forelimb defects, including postaxial ectrodactyly, metacarpal, and ulnar deficiencies, occurred in 67.3% of the ethanol-exposed fetuses that were examined at 18 days of gestation. The right forelimbs were preferentially affected. No limb malformations were observed in control fetuses. Cell death in the AER of embryos concurrently exposed to ethanol and EUK-134 was notably reduced compared with that in embryos from ethanol-treated dams. Additionally, the antioxidant treatment reduced the incidence of forelimb malformations to 35.9%. This work illustrates that antioxidants can significantly improve the adverse developmental outcome that results from ethanol exposure in utero, diminishing the incidence and severity of major malformations that result from exposure to this important human teratogen.
Development of a sensor coordinated kinematic model for neural network controller training
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube, with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher-degree-of-freedom manipulators is outlined.
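Since the abstract does not fully specify the joint geometry, the sketch below assumes a cylindrical arrangement (revolute base angle, radial prismatic extension on the 0.6 link, vertical prismatic extension on the 0.4 link) to show the kind of forward/inverse mapping pair a neural network controller would be trained against.

```python
from math import atan2, cos, sin, hypot

L1, L2 = 0.6, 0.4                       # arm link lengths from the abstract

def forward(theta, d1, d2):
    """Joint commands -> end-effector position (assumed cylindrical geometry)."""
    r = L1 + d1
    return r * cos(theta), r * sin(theta), L2 + d2

def inverse(x, y, z):
    """End-effector position -> joint commands (closed form for this geometry)."""
    theta = atan2(y, x)
    d1 = hypot(x, y) - L1
    d2 = z - L2
    return theta, d1, d2

target = (0.7, 0.3, 0.7)                # a point inside the unit-cube workspace
joints = inverse(*target)
print("joint commands:", joints)
print("reached:", forward(*joints))     # round-trips to the target
```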
Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)
NASA Technical Reports Server (NTRS)
Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.
2017-01-01
This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
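The QA pattern described above, running a simulation against a closed-form solution and asserting an error tolerance, can be illustrated in a few lines. The sketch below verifies an explicit solver for 1-D transient heat conduction against its exact solution; the tolerance, grid, and problem are illustrative stand-ins for PFLOTRAN's richer benchmark set.

```python
import numpy as np

def solve_heat_explicit(nx=101, alpha=1.0, t_end=0.05):
    """FTCS solver for u_t = alpha u_xx on [0,1], u(0)=u(1)=0,
    u(x,0) = sin(pi x); exact solution u = exp(-pi^2 alpha t) sin(pi x)."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / alpha                       # stable explicit step
    u = np.sin(np.pi * x)
    steps = int(round(t_end / dt))
    for _ in range(steps):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return x, u, steps * dt                        # actual simulated time

x, u_num, t = solve_heat_explicit()
u_exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
err = np.max(np.abs(u_num - u_exact))
assert err < 1e-3, f"regression: max error {err:.2e} exceeds tolerance"
print(f"verification passed: max |error| = {err:.2e}")
```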
Delta-ray Production in MCNP 6.2.0
NASA Astrophysics Data System (ADS)
Anderson, C.; McKinney, G.; Tutt, J.; James, M.
Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy-ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has now been extended for all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.
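A useful sanity check for any such implementation is the kinematic ceiling on the energy a delta-ray can receive, the standard maximum energy transfer from a heavy charged particle to a free electron. The sketch below evaluates this textbook formula for a 100 MeV proton; whether MCNP6 samples up to exactly this bound is not stated in the abstract.

```python
# T_max = 2 m_e c^2 beta^2 gamma^2 / (1 + 2 gamma m_e/M + (m_e/M)^2)
import math

ME = 0.51099895   # electron mass, MeV/c^2
MP = 938.27209    # proton mass, MeV/c^2

def t_max(kinetic_mev, m_mev=MP):
    """Maximum kinetic energy transferable to an electron (MeV)."""
    gamma = 1.0 + kinetic_mev / m_mev
    beta2 = 1.0 - 1.0 / gamma**2
    r = ME / m_mev
    return 2.0 * ME * beta2 * gamma**2 / (1.0 + 2.0 * gamma * r + r * r)

print(f"T_max for a 100 MeV proton: {t_max(100.0):.3f} MeV")   # ~0.23 MeV
```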
Zheng, Shaokui; Li, Xiaofeng; Zhang, Xueyu; Wang, Wei; Yuan, Shengliu
2017-09-01
This study investigated the potential effect of four frequently used inorganic regenerant properties (i.e., ionic strength, cation type, anion type, and regeneration solution volume) on the desorption and adsorption performance of 14 pharmaceuticals, belonging to 12 therapeutic classes with different predominant chemical forms and hydrophobicities, using polymeric anion exchange resin (AER)-packed fixed-bed column tests. After preconditioning with NaCl, NaOH, or saline-alkaline (SA) solutions, all resulting mobile counterion types of AERs effectively adsorbed all 14 pharmaceuticals, with a preferential magnitude of OH⁻-type = Cl⁻+OH⁻-type > Cl⁻-type. During regeneration, ionic strength (1 M versus 3 M NaCl) had no significant influence on desorption performance for any of the 14 pharmaceuticals, and no regenerant cation type (HCl versus NaCl) or anion type (NaCl versus NaOH and SA) achieved higher desorption efficiencies for all pharmaceuticals. A volumetric increase in 1 M or 3 M NaCl solutions significantly improved the desorption efficiencies of most pharmaceuticals, irrespective of ionic strength. The results indicate that regeneration protocols, including regenerant cation type, anion type and volume, should be optimized to improve pharmaceutical removal by AERs. Copyright © 2017 Elsevier Ltd. All rights reserved.
2017-02-15
Maunz [2] Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone... information processors have been demonstrated experimentally using superconducting circuits [1-3], electrons in semiconductors [4-6], trapped atoms and... qubit quantum information processor has been realized [14], and single-qubit gates have demonstrated randomized benchmarking (RB) infidelities as low as 10
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory, in which the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Benchmarking FEniCS for mantle convection simulations
NASA Astrophysics Data System (ADS)
Vynnytska, L.; Rognes, M. E.; Clark, S. R.
2013-01-01
This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.
Edwards, Beatrice J.; Usmani, Sarah; Raisch, Dennis W.; McKoy, June M.; Samaras, Athena T.; Belknap, Steven M.; Trifilio, Steven M.; Hahr, Allison; Bunta, Andrew D.; Abu-Alfa, Ali; Langman, Craig B.; Rosen, Steve T.; West, Dennis P.
2013-01-01
Purpose: To determine whether acute kidney injury (AKI) is identified within the US Food and Drug Administration's Adverse Event Reporting System (FDA AERS) as an adverse event resulting from bisphosphonate (BP) use in cancer therapy. Methods: A search of the FDA AERS records from January 1998 through June 2009 was performed; search terms were "renal problems" and all drug names for BPs. The search resulted in 2,091 reports. We analyzed for signals of disproportional association by calculating the proportional reporting ratio for zoledronic acid (ZOL) and pamidronate. A literature review of BP-associated renal injury within the cancer setting was conducted. Results: Four hundred eighty cases of BP-associated acute kidney injury (AKI) were identified in patients with cancer. Two hundred ninety-eight patients (56%) were female; mean age was 66 ± 10 years. Multiple myeloma (n = 220, 46%), breast cancer (n = 98, 20%), and prostate cancer (n = 24, 5%) were identified. Agents included ZOL (n = 411, 87.5%), pamidronate (n = 8, 17%), and alendronate (n = 36, 2%). Outcomes included hospitalization (n = 304, 63.3%) and death (n = 68, 14%). The proportional reporting ratio for ZOL was 1.22 (95% CI, 1.13 to 1.32) and for pamidronate was 1.55 (95% CI, 1.25 to 1.65), reflecting a nonsignificant safety signal for both drugs. Conclusion: AKI was identified in BP cancer clinical trials, although a safety signal for BPs and AKI within the FDA AERS was not detected. Our findings may be attributed, in part, to clinicians who believe that AKI occurs infrequently; ascribe the AKI to underlying premorbid disease, therapy, or cancer progression; or consider that AKI is a known adverse drug reaction of BPs and thus under-report AKI to the AERS. PMID:23814519
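For readers unfamiliar with the metric, the proportional reporting ratio and its 95% confidence interval can be computed from a standard 2x2 contingency table of spontaneous reports; the sketch below uses hypothetical counts, not the study's data.

```python
import math

def prr_with_ci(a, b, c, d, z=1.96):
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.

    a: reports with the drug of interest and the event of interest
    b: reports with the drug of interest and any other event
    c: reports with other drugs and the event of interest
    d: reports with other drugs and any other event
    """
    prr = (a / (a + b)) / (c / (c + d))
    se_ln = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(prr) - z * se_ln)
    hi = math.exp(math.log(prr) + z * se_ln)
    return prr, lo, hi

# Hypothetical counts, for illustration only
print(prr_with_ci(411, 5000, 2500, 80000))
```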
Bellavere, F; Cacciatori, V; Bacchi, E; Gemma, M L; Raimondo, D; Negri, C; Thomaseth, K; Muggeo, M; Bonora, E; Moghetti, P
2018-03-01
Both aerobic (AER) and resistance (RES) training improve metabolic control in patients with type 2 diabetes (T2DM). However, information on the effects of these training modalities on cardiovascular autonomic control is limited. Our aim was to compare the effects of AER and RES training on cardiovascular autonomic function in these subjects. Cardiovascular autonomic control was assessed by power spectral analysis (PSA) of heart rate variability (HRV) and by baroreceptor function indexes in 30 subjects with T2DM, randomly assigned to aerobic or resistance training for 4 months. In particular, PSA of HRV measured the low-frequency (LF) and high-frequency (HF) bands of RR variation, reflecting predominantly sympathetic and parasympathetic drive, respectively. Furthermore, we measured the correlation between systolic blood pressure and heart rate during a standardized Valsalva maneuver using two indexes, b2 and b4, considered expressions of baroreceptor sensitivity and of peripheral vasoactive adaptations during predominantly sympathetic and parasympathetic drive, respectively. After training, the LF/HF ratio, which summarizes the sympatho-vagal balance in HRV control, was similarly decreased in the AER and RES groups. After AER, b2 and b4 significantly improved. After RES, changes in b2 were of borderline significance, whereas changes in b4 did not reach statistical significance. However, comparison of changes in baroreceptor sensitivity indexes between groups did not show statistically significant differences. Both aerobic and resistance training improve several indices of autonomic control of the cardiovascular system in patients with T2DM. Although these improvements seem to occur to a similar extent with both training modalities, some differences cannot be ruled out. NCT01182948, clinicaltrials.gov. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.
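The LF/HF ratio reported above is conventionally obtained from the RR-interval power spectrum. A rough sketch of that computation follows; the band limits are the standard ones, the RR series is synthetic, and the study's exact preprocessing is not reproduced.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs=4.0):
    """LF/HF ratio from a sequence of RR intervals (seconds).

    Resamples the RR tachogram to a uniform grid, estimates the power
    spectrum with Welch's method, and integrates the conventional LF
    (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands.
    """
    t = np.cumsum(rr_s)                          # beat times
    t_uniform = np.arange(t[0], t[-1], 1 / fs)
    rr_uniform = np.interp(t_uniform, t, rr_s)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf

# Synthetic RR series (~70 bpm with mild variability), for illustration
rng = np.random.default_rng(0)
rr = 0.86 + 0.03 * np.sin(2 * np.pi * 0.1 * np.arange(600)) \
    + 0.01 * rng.standard_normal(600)
print(lf_hf_ratio(rr))
```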
Gandhi, Pranav K; Gentry, William M; Bottorff, Michael B
2012-10-01
To investigate reports of thrombotic events associated with the use of C1 esterase inhibitor products in patients with hereditary angioedema in the United States. Retrospective data mining analysis. The United States Food and Drug Administration (FDA) adverse event reporting system (AERS) database. Case reports of C1 esterase inhibitor products, thrombotic events, and C1 esterase inhibitor product-associated thrombotic events (i.e., combination cases) were extracted from the AERS database, using the time frames of each respective product's FDA approval date through the second quarter of 2011. Bayesian statistical methodology within the neural network architecture was implemented to identify potential signals of a drug-associated adverse event. A potential signal is generated when the lower limit of the 95% 2-sided confidence interval of the information component, denoted by IC₀₂₅ , is greater than zero. This suggests that the particular drug-associated adverse event was reported to the database more often than statistically expected from reports available in the database. Ten combination cases of thrombotic events associated with the use of one C1 esterase inhibitor product (Cinryze) were identified in patients with hereditary angioedema. A potential signal demonstrated by an IC₀₂₅ value greater than zero (IC₀₂₅ = 2.91) was generated for these combination cases. The extracted cases from the AERS indicate continuing reports of thrombotic events associated with the use of one C1 esterase inhibitor product among patients with hereditary angioedema. The AERS is incapable of establishing a causal link and detecting the true frequency of an adverse event associated with a drug; however, potential signals of C1 esterase inhibitor product-associated thrombotic events among patients with hereditary angioedema were identified in the extracted combination cases. © 2012 Pharmacotherapy Publications, Inc.
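The information component behind the IC₀₂₅ criterion is an observed-to-expected reporting ratio on a log2 scale. The sketch below uses one widely cited shrinkage formulation and a closed-form credibility-interval approximation (Norén et al.); it may differ in detail from the implementation behind this study, and the counts are hypothetical.

```python
import math

def ic_with_lower_bound(n11, n1_, n_1, n):
    """Information component with a common shrinkage formulation.

    n11: reports with the drug-event pair; n1_: reports with the drug;
    n_1: reports with the event; n: all reports in the database.
    Returns (IC, IC025); a potential signal is flagged when IC025 > 0.
    """
    expected = n1_ * n_1 / n
    ic = math.log2((n11 + 0.5) / (expected + 0.5))
    # Closed-form lower credibility bound (approximation from the
    # pharmacovigilance literature; an assumption here, not the paper's code)
    ic025 = ic - 3.3 * (n11 + 0.5) ** -0.5 - 2.0 * (n11 + 0.5) ** -1.5
    return ic, ic025

# Hypothetical counts, for illustration only
print(ic_with_lower_bound(10, 120, 900, 4_000_000))
```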
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called the NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.
Real-time classification and sensor fusion with a spiking deep belief network
O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael
2013-01-01
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. PMID:24115919
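The Siegert mapping itself is beyond a short snippet, but the event-driven input side can be illustrated with a toy encoder that turns frame pixel intensities into Poisson spike trains, loosely mimicking the address-event streams such a system consumes; all names, rates, and shapes below are invented for illustration.

```python
import numpy as np

def frame_to_poisson_events(frame, duration_s=0.05, max_rate_hz=200, seed=0):
    """Toy event-based encoding: each pixel emits Poisson spikes at a rate
    proportional to its intensity, mimicking the event streams a DVS-style
    sensor or AER front end would deliver to a spiking network.

    frame: 2D array with values in [0, 1]. Returns (times, rows, cols).
    """
    rng = np.random.default_rng(seed)
    counts = rng.poisson(frame * max_rate_hz * duration_s)   # spikes per pixel
    times, rows, cols = [], [], []
    for y, x in zip(*np.nonzero(counts)):
        for _ in range(counts[y, x]):
            times.append(rng.uniform(0, duration_s))
            rows.append(y)
            cols.append(x)
    order = np.argsort(times)
    return (np.asarray(times)[order], np.asarray(rows)[order],
            np.asarray(cols)[order])

# A synthetic 28x28 "digit" frame, for illustration
frame = np.zeros((28, 28))
frame[10:18, 12:16] = 1.0
t, y, x = frame_to_poisson_events(frame)
print(len(t), "events in 50 ms")
```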
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.
2014-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marck, Steven C. van der, E-mail: vandermarck@nrg.eu
Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.
1989-01-01
Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
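For orientation, in the energy-independent case the straight-ahead transport equations take, schematically, a coupled one-dimensional form along the following lines (a sketch of the standard formulation, not a transcription of this paper's notation):

```latex
\frac{\partial \phi_j(x)}{\partial x} + \sigma_j\, \phi_j(x)
  = \sum_{k > j} \sigma_{jk}\, \phi_k(x), \qquad \phi_j(0) = f_j,
```

where φ_j is the flux of ion species j, σ_j its total macroscopic interaction cross section, σ_jk the cross section for producing species j by fragmentation of species k, and f_j the boundary source.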
A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan
NASA Astrophysics Data System (ADS)
Rameshkumar, K.; Rajendran, C.
2018-02-01
In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good quality solutions for the various benchmark job-shop scheduling problems.
Finite Element Modeling of the World Federation's Second MFL Benchmark Problem
NASA Astrophysics Data System (ADS)
Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita
2004-02-01
This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.
1980-02-01
shallow ground moraine over rock. The downstream channel is described as swamp. The rock is described on Geologic Overlay Sheet 22 as hornblende granite. [Geologic map legend, partially recoverable: PRECAMBRIAN; gh, mostly hornblende granite and gneiss; hqa, hypersthene-quartz-andesine gneiss. The remaining map annotations and tabulated depth data are not recoverable.]
Cardozo, Flávio Augusto; Gonzalez, Juan Miguel; Feitosa, Valker Araujo; Pessoa, Adalberto; Rivera, Irma Nelly Gutierrez
2017-10-27
N-Acetyl-D-glucosamine (GlcNAc) is a monosaccharide with great application potential in the food, cosmetic, pharmaceutical, and biomaterial areas. GlcNAc is currently produced by chemical hydrolysis of chitin, but the current processes are environmentally unfriendly and suffer from low yields and high costs. This study demonstrates the potential to produce GlcNAc from α-chitin using chitinases of ten marine-derived Aeromonas isolates as a sustainable alternative to the current chemical process. The isolates were characterized as Aeromonas caviae by multilocus sequence analysis (MLSA) using six housekeeping genes (gltA, groL, gyrB, metG, ppsA, and recA); they did not carry any of the virulence genes tested (alt, act, ast, ahh1, aer, aerA, hlyA, ascV and ascFG), but showed hemolytic activity on blood agar. GlcNAc was produced at 37 °C and pH 5.0 from 2% (w/v) colloidal chitin using crude chitinase extracts (0.5 U mL⁻¹) by all the isolates, with yields from 14 to 85% at 6 h, 17-89% at 12 h and 19-93% after 24 h. The highest yield of GlcNAc was observed for A. caviae CH129 (93%). This study demonstrates one of the most efficient enzymatic chitin hydrolysis procedures and identifies A. caviae isolates with great potential for chitinase expression and GlcNAc production.
SMART - Small Motor AeRospace Technology
NASA Astrophysics Data System (ADS)
Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.
2004-11-01
This paper presents the "SMART" (Small Motor AeRospace Technology) propulsion system, consisting of a microthruster array realised by semiconductor technology on silicon wafers. The SMART system is obtained by gluing together three main modules: combustion chambers, igniters and nozzles. The module was then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. The complete assembled module, composed of 25 micro-thrusters with a 3 x 5 nozzle, is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.
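As a quick consistency check on the quoted figures, the effective specific impulse implied by the 56.8 mNs impulse and roughly 70 mg of propellant can be computed directly (assuming all of the propellant is burned):

```python
g0 = 9.80665             # standard gravity, m/s^2
impulse = 56.8e-3        # total impulse, N*s (quoted for the nozzle variant)
m_prop = 70e-6           # propellant mass, kg (~70 mg, as quoted)

isp = impulse / (m_prop * g0)   # effective specific impulse, s
print(f"Isp ~ {isp:.0f} s")     # roughly 83 s
```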
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Schaefer, R. W.; McKnight, R. D.
Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.
Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
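The flavor of the parameter estimation task is easy to reproduce at toy scale: fit the parameters of a small dynamic model to noisy time-series data by least squares. The sketch below uses a logistic growth ODE and SciPy; it is far simpler than the suite's kinetic models but is structurally the same calibration problem.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy model-calibration problem in the spirit of the benchmark suite:
# estimate the parameters (r, K) of a logistic growth ODE from noisy data.
def simulate(params, t):
    r, K = params
    sol = solve_ivp(lambda s, y: r * y * (1 - y / K),
                    (t[0], t[-1]), [0.1], t_eval=t)
    return sol.y[0]

t = np.linspace(0, 10, 25)
true_params = (0.8, 5.0)
rng = np.random.default_rng(1)
data = simulate(true_params, t) + 0.1 * rng.standard_normal(t.size)

fit = least_squares(lambda p: simulate(p, t) - data, x0=(0.3, 2.0),
                    bounds=([0, 0], [5, 20]))
print(fit.x)   # should recover roughly (0.8, 5.0)
```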
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Building America Industrialized Housing Partnership (BAIHP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen
This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory built housing. In partnership with over 50 factory and site builders, work was performed in two main areas--research and technical assistance. In the research area--through site visits in over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders we documented the value of leak free duct design and construction which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes we documented the merits of 'cool roof' technologies and developed an innovative night sky radiative cooling concept currently being tested. We patented an energy efficient condenser fan design, documented energy efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement for future temporary housing, compared the Building America benchmark to HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area we provided systems engineering analysis, conducted training, testing and commissioning that have resulted in over 128,000 factory built and over 5,000 site built homes which are saving their owners over $17,000,000 annually in energy bills. These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes, Cavalier and the manufacturers participating in the Northwest Energy Efficient Manufactured Home program. We worked with over two dozen Habitat for Humanity affiliates and helped them build over 700 Energy Star or near Energy Star homes. We have provided technical assistance to several show homes constructed for the International builders show in Orlando, FL and assisted with other prototype homes in cold climates that save 40% over the benchmark reference. In the Gainesville Fl area we have several builders that are consistently producing 15 to 30 homes per month in several subdivisions that meet the 30% benchmark savings goal. We have contributed to the 2006 DOE Joule goals by providing two community case studies meeting the 30% benchmark goal in marine climates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gylenhaal, J.; Bronevetsky, G.
2007-05-25
CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (like NUMA memory layouts, memory contention, cache effects, etc.) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable, allowing a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
Building Bridges Between Geoscience and Data Science through Benchmark Data Sets
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.
2017-12-01
The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics for ideal benchmarks. High impact: A problem with high potential impact. Active research area: A group of geoscientists should be eager to continue working on the topic. Challenge: The problem should be challenging for data scientists. Data science generality and versatility: It should stimulate development of new general and versatile data science methods. Rich information content: Ideally the data set provides stimulus for analysis at many different levels. Hierarchical problem statement: A hierarchy of suggested analysis tasks, from relatively straightforward to open-ended tasks. Means for evaluating success: Data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve intended purpose. Quick start guide: Introduction for data scientists on how to easily read the data to enable rapid initial data exploration. Geoscience context: Summary for data scientists of the specific data collection process, instruments used, any pre-processing and the science questions to be answered. Citability: A suitable identifier to facilitate tracking the use of the benchmark later on, e.g. allowing search engines to find all research papers using it. A first sample benchmark developed in collaboration with the Jet Propulsion Laboratory (JPL) deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
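The paper's unifying step, mapping related problems onto maximum clique finding, rests on textbook reductions. A small sketch (using NetworkX's exponential-time clique enumeration, suitable only for toy graphs, and not the IEA-PTS algorithm itself) shows the complement-graph mapping between maximum independent set, minimum vertex cover, and maximum clique.

```python
import networkx as nx

# Textbook reductions: a maximum independent set of G is a maximum clique
# of the complement of G, and a minimum vertex cover is the complement of
# a maximum independent set. Illustrated on a 5-cycle.
G = nx.cycle_graph(5)
H = nx.complement(G)

cliques = list(nx.find_cliques(H))      # exponential time, fine for toys
max_is = max(cliques, key=len)          # maximum independent set of G
min_vc = set(G.nodes) - set(max_is)     # minimum vertex cover of G

print("max independent set:", sorted(max_is))
print("min vertex cover:", sorted(min_vc))
```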
Benchmarking in national health service procurement in Scotland.
Walker, Scott; Masson, Ron; Telford, Ronnie; White, David
2007-11-01
The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.
1976-03-01
[Tabulated aerodynamic wind-tunnel data sheets, not recoverable from OCR: Arnold Engineering Development Center (AEDC) Propulsion Wind Tunnel Facility test of missile tail effects, with columns for Mach number, alpha, and phi.]
Delta-ray Production in MCNP 6.2.0
Anderson, Casey Alan; McKinney, Gregg Walter; Tutt, James Robert; ...
2017-10-26
Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has been extended to all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.
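For orientation, the close-collision (knock-on) spectrum that such analytical formulations build on has, for a heavy spin-0 projectile of charge number z and speed β, the familiar schematic form found in standard references (the MCNP6 implementation follows Rossi and ICRU Report 37 and may differ in detail):

```latex
\frac{d^{2} N}{dT\,dx} = \frac{1}{2}\, K z^{2}\, \frac{Z}{A}\,
\frac{1}{\beta^{2}}\, \frac{1}{T^{2}}
\left( 1 - \beta^{2}\, \frac{T}{T_{\max}} \right),
```

where T is the delta-ray kinetic energy, T_max the maximum energy transferable in a single collision, Z/A the target charge-to-mass number ratio, and K ≈ 0.307 MeV mol⁻¹ cm².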
Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark
NASA Astrophysics Data System (ADS)
Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.
2014-12-01
Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate-reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CrunchTope, TOUGHREACT, MIN3P and The Geochemist's Workbench. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
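A useful point of reference for such benchmarks is the closed-system Rayleigh model, in which the residual sulfate grows isotopically heavier as reduction proceeds. The sketch below uses an illustrative fractionation factor; the benchmark's RTM formulations embed fractionation in rate laws rather than in this closed-form expression.

```python
import numpy as np

# Closed-system Rayleigh model: as SRB preferentially reduce 32S-sulfate,
# the remaining sulfate pool becomes enriched in 34S. The fractionation
# factor alpha below is an illustrative value, not a benchmark parameter.
def delta34S_residual(delta0, f, alpha=0.985):
    """delta34S (permil) of remaining sulfate when a fraction f is left."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

f = np.linspace(1.0, 0.05, 6)   # fraction of sulfate remaining
print([round(delta34S_residual(0.0, fi), 1) for fi in f])
```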
Holmqvist, Anna Sällfors; Olsen, Jørgen H; Andersen, Klaus Kaae; de Fine Licht, Sofie; Hjorth, Lars; Garwicz, Stanislaw; Moëll, Christian; Anderson, Harald; Wesenberg, Finn; Tryggvadottir, Laufey; Malila, Nea; Boice, John D; Hasle, Henrik; Winther, Jeanette Falck
2014-04-01
An increased risk for diabetes mellitus (DM) adds significantly to the burden of late complications in childhood cancer survivors. Complications of DM may be prevented by appropriate screening. It is, therefore, important to better characterise the reported increased risk for DM in a large population-based setting. From the national cancer registries of the five Nordic countries, a cohort of 32,903 1-year survivors of cancer diagnosed before the age of 20, between the start of cancer registration in the 1940s and 1950s and 2008, was identified; 212,393 comparison subjects of the same age, gender and country were selected from national population registers. Study subjects were linked to the national hospital registers. Absolute excess risks (AERs) and standardised hospitalisation rate ratios (SHRRs) were calculated. DM was diagnosed in 496 childhood cancer survivors, yielding an overall SHRR of 1.6 (95% confidence interval (CI), 1.5-1.8) and an AER of 43 per 100,000 person-years, increasing from approximately 20 extra cases of DM per 100,000 person-years in ages 0-19 to more than 100 extra cases per 100,000 person-years in ages ≥50. The relative risks for DM were significantly increased after Wilms' tumour (SHRR, 2.9), leukaemia (2.0), CNS neoplasms (1.8), germ-cell neoplasms (1.7), malignant bone tumours (1.7) and Hodgkin's lymphoma (1.6). The risk for DM type 2 was slightly higher than that for type 1. Childhood cancer survivors are at increased risk for DM, with absolute risks increasing throughout life. These findings underscore the need for preventive interventions and prolonged follow-up of childhood cancer survivors. Copyright © 2014 Elsevier Ltd. All rights reserved.
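The two summary measures used here are straightforward to compute once observed and expected case counts are in hand; the sketch below uses hypothetical inputs chosen only to echo the reported magnitudes.

```python
def shrr_and_aer(observed, expected, person_years, per=100_000):
    """Standardised hospitalisation rate ratio and absolute excess risk.

    observed: events in the survivor cohort; expected: events expected
    from the comparison-population rates; AER is reported per `per`
    person-years, matching the convention in the abstract.
    """
    shrr = observed / expected
    aer = (observed - expected) * per / person_years
    return shrr, aer

# Hypothetical inputs, for illustration only
print(shrr_and_aer(observed=496, expected=310, person_years=430_000))
```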
Virulence Genes and Antibiotic Susceptibilities of Uropathogenic E. coli Strains.
Uzun, Cengiz; Oncül, Oral; Gümüş, Defne; Alan, Servet; Dayioğlu, Nurten; Küçüker, Mine Anğ
2015-01-01
The aim of this study was to detect the presence of, and possible relations between, virulence genes and antibiotic resistance in E. coli strains isolated from patients with acute, uncomplicated urinary tract infections (UTI). 62 E. coli strains isolated from patients with acute, uncomplicated urinary tract infections (50 strains from acute uncomplicated cystitis cases (AUC); 12 strains from acute uncomplicated pyelonephritis cases (AUP)) were screened for virulence genes [pap (pyelonephritis-associated pili), sfa/foc (S and F1C fimbriae), afa (afimbrial adhesins), hly (hemolysin), cnf1 (cytotoxic necrotizing factor), aer (aerobactin), PAI (pathogenicity island marker), iroN (catecholate siderophore receptor), ompT (outer membrane protein T), usp (uropathogenic specific protein)] by PCR, and for antimicrobial resistance by the disk diffusion method according to CLSI criteria. It was found that 56 strains (90.3%) carried at least one virulence gene. The most common virulence genes were ompT (79%), aer (51.6%), PAI (51.6%) and usp (56.5%). 60% of the strains were resistant to at least one antibiotic. The highest resistance rates were against ampicillin (79%) and co-trimoxazole (41.9%). Fifty percent of the E. coli strains (31 strains) were found to be multidrug resistant. Eight (12.9%) of the 62 strains were found to be ESBL positive. Statistically significant relationships were found between the absence of usp and resistance to AMP and SXT, between the absence of iroN and resistance to OFX and CIP, between the absence of PAI and resistance to SXT, and between the absence of cnf1 and resistance to AMP; a significant relationship was also found between the presence of afa and resistance to OFX. No difference between the E. coli strains isolated from the two clinical presentations was found in terms of virulence genes and antibiotic susceptibility.
NAS Parallel Benchmark Results 11-96. 1.0
NASA Technical Reports Server (NTRS)
Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
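As background, the canonical global-best PSO that PSO-RDL builds on is compact. The sketch below runs it on the sphere function with standard constriction-style coefficients; it makes no attempt to reproduce the recombination or linkage-discovery machinery described above.

```python
import numpy as np

# Canonical global-best PSO on the sphere benchmark function. This is the
# plain optimizer underlying PSO-RDL, not the linkage-aware variant.
def pso(f, dim=10, n=30, iters=500, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)      # personal best values
    gbest = pbest[pbest_f.argmin()].copy()      # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sol, val = pso(lambda z: float(np.sum(z * z)))
print(val)   # should be near 0
```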
Donini, Lorenzo Maria
2015-01-01
In obese diabetic subjects, a correct lifestyle, including diet and physical activity, is part of an appropriate intervention protocol. Thus, the aim of this study was to evaluate the effects of an aerobic training intervention, based on heart rate at the aerobic gas exchange threshold (AerTge), on clinical and physiological parameters in obese elderly subjects with type 2 diabetes (OT2DM). Thirty OT2DM subjects were randomly assigned to an intervention (IG) or control group (CG). The IG performed supervised aerobic exercise training based on heart rate at the AerTge, whereas the CG maintained their usual lifestyle. Anthropometric measures, blood analysis, peak oxygen consumption (VO2peak), metabolic equivalent (METpeak), work rate (WRpeak), and WRAerTge were assessed at baseline and after the intervention. After training, patients enrolled in the IG had significantly higher (P < 0.001) VO2peak, METpeak, WRpeak, and WRAerTge and significantly lower (P < 0.005) weight, BMI, %FM, and waist circumference than before the intervention. Both IG and CG subjects had lower glycated haemoglobin levels after the intervention period. No significant differences were found for all the other parameters between pre- and post-training or between groups. Aerobic exercise prescription based upon HR at the AerTge could be a valuable physical intervention tool to improve the fitness level and metabolic equilibrium in OT2DM patients. PMID:26089890
Large-eddy simulation of a turbulent flow over the DrivAer fastback vehicle model
NASA Astrophysics Data System (ADS)
Ruettgers, Mario; Park, Junshin; You, Donghyun
2017-11-01
In 2012 the Technical University of Munich (TUM) made realistic generic car models, called DrivAer, available to the public. These detailed models allow a precise calculation of the flow around a lifelike car, which in the past was limited to simplified geometries. In the present study, the turbulent flow around one of the models, the DrivAer Fastback model, is simulated using large-eddy simulation (LES). The goal of the study is to give a deeper physical understanding of highly turbulent regions around the car, such as at the side mirror or at the rear end. For each region, the contribution to the total drag is worked out. The results show that almost 35% of the drag is generated by the car wheels, whereas the side mirror contributes only 4% of the total drag. Detailed frequency analyses of velocity signals in each wake region have also been conducted and identified three dominant frequencies, which correspond to the dominant frequency of the total drag. Furthermore, vortical structures are visualized and highly energetic points are identified. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning) (No. 2014R1A2A1A11049599, No. 2015R1A2A1A15056086, No. 2016R1E1A2A01939553).
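The kind of frequency analysis described can be illustrated generically: estimate the power spectrum of a force or velocity signal with Welch's method and read off the dominant peaks. The signal below is synthetic; the sampling rate, forcing frequencies, and amplitudes are invented for illustration.

```python
import numpy as np
from scipy.signal import welch

# Generic sketch of frequency analysis on a (synthetic) drag-force signal.
fs = 1000.0                          # sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
drag = (0.30
        + 0.02 * np.sin(2 * np.pi * 7.0 * t)     # dominant oscillation
        + 0.01 * np.sin(2 * np.pi * 19.0 * t)    # secondary oscillation
        + 0.005 * rng.standard_normal(t.size))   # broadband noise

f, pxx = welch(drag - drag.mean(), fs=fs, nperseg=4096)
dominant = f[np.argmax(pxx)]
print(f"dominant frequency ~ {dominant:.1f} Hz")  # ~7 Hz for this signal
```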
Campbell, Asharie J.; Watts, Kylie J.; Johnson, Mark S.; Taylor, Barry L.
2010-01-01
The Aer receptor monitors internal energy (redox) levels in Escherichia coli with an FAD-containing PAS domain. Here, we randomly mutagenized the region encoding residues 14 to 119 of the PAS domain and found 72 aerotaxis-defective mutants, 24 of which were gain-of-function, signal-on mutants. The mutations were mapped onto an Aer homology model based on the structure of the PAS-FAD domain in NifL from Azotobacter vinelandii. Signal-on lesions clustered in the FAD binding pocket, the β-scaffolding and the N-cap loop. We suggest that the signal-on lesions mimic the "signal-on" state of the PAS domain, and therefore may be markers for the signal-in and signal-out regions of this domain. We propose that the reduction of FAD rearranges the FAD binding pocket in a way that repositions the β-scaffolding and the N-cap loop. The resulting conformational changes are likely to be conveyed directly to the HAMP domain, and on to the kinase control module. In support of this hypothesis, we demonstrated disulfide bond formation between cysteines substituted at residues N98C or I114C in the PAS β-scaffold and residue Q248C in the HAMP AS-2 helix. PMID:20545849
NASA Astrophysics Data System (ADS)
Hummel, John R.; Bergenthal, Jeff J.; Seng, William F.; Moulton, Joseph R., Jr.; Prager, S. D.
2004-08-01
The Joint Synthetic Battlespace for the Air Force (JSB-AF) is being developed to provide realistic representations of friendly and threat capabilities and the natural environmental conditions to support a variety of Department of Defense missions including training, mission rehearsal, decision support, acquisition, deployment, employment, operations, and the development of Courses of Action. This paper addresses three critical JSB issues associated with providing environmental representations to Modeling and Simulation (M&S) applications. First, how should the requirements for environmental functionality in a JSB-AF application be collected, analyzed, and used to define an Authoritative Environmental Representation (AER)? Second, how can JSB-AF AERs be generated? Third, once an AER has been generated, how should it be "served up" to the JSB-AF components? Our analyses of these issues will be presented from a general M&S perspective, with examples given from a JSB-AF centered view. In the context of this effort, the term "representations" is meant to incorporate both basic environmental "data" (e.g., temperature, pressure, slope, elevation, etc.) and "effects", properties that can be derived from these data using physics-based models or empirical relationships from the fundamental data (e.g., extinction coefficients, radiance, soil moisture strength, etc.). We present a state-of-the-art review of the existing processes and technologies that address these questions.
Indoor environmental quality in French dwellings and building characteristics
NASA Astrophysics Data System (ADS)
Langer, Sarka; Ramalho, Olivier; Derbez, Mickaël; Ribéron, Jacques; Kirchner, Severine; Mandin, Corinne
2016-03-01
A national survey on indoor environmental quality covering 567 residences in mainland France was performed during 2003-2005. The measured parameters were temperature, relative humidity, CO2, and the indoor air pollutants: fourteen individual volatile organic compounds (VOC), four aldehydes and particulate matter PM10 and PM2.5. The measured indoor concentrations were analyzed for correlations with the building characteristics: type of dwelling, period of construction, dwelling location, type of ventilation system, building material, attached garage and retrofitting. The median night time air exchange rate (AER) for all dwellings was 0.44 h-1. The night time AER was higher in apartments (median = 0.49 h-1) than in single-family houses (median = 0.41 h-1). Concentration of formaldehyde was approximately 30% higher in dwellings built after 1990 compared with older ones; it was higher in dwellings with mechanical ventilation and in concrete buildings. The VOC concentrations depended on the building characteristics to various extents. The sampling season influenced the majority of the indoor climate parameters and the concentrations of the air pollutants to a higher degree than the building characteristics. Multivariate linear regression models revealed that the indoor-outdoor difference in specific humidity, a proxy for number of occupants and their indoor activities, remained a significant predictor for most gaseous and particulate air pollutants. The other strong predictors were outdoor concentration, smoking, attached garage and AER (in descending order).
Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.
ERIC Educational Resources Information Center
Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.
This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…
Improving Federal Education Programs through an Integrated Performance and Benchmarking System.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Office of the Under Secretary.
This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…
A Critical Thinking Benchmark for a Department of Agricultural Education and Studies
ERIC Educational Resources Information Center
Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.
2014-01-01
Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…
Benchmarking NNWSI flow and transport codes: COVE 1 results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, N.K.
1985-06-01
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem-structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems from the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality; the average gap between the obtained solution and the upper bound across all problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
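The abstract does not spell out the priority rule; the classic priority-rule baseline for BPP that approaches like AugNN build on is first-fit decreasing. A minimal sketch of that baseline (illustrative only, not the authors' implementation):

```python
def first_fit_decreasing(items, capacity):
    """Classic priority-rule heuristic for bin packing: sort items by
    non-increasing size, then place each item into the first bin that
    still has room, opening a new bin if none does."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Example: total size 33 with capacity 10, so 4 bins is optimal here.
print(len(first_fit_decreasing([7, 5, 4, 4, 3, 3, 2, 2, 2, 1], 10)))  # -> 4
```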
Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search
Huang, Xingwang; Zeng, Xuewen; Han, Rui
2017-01-01
Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases, this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numeric results obtained by benchmark functions experiment prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Compared with several other heuristic algorithms on zero-one knapsack problems, it also verifies that the proposed algorithm is more able to avoid local minima. PMID:28634487
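For orientation, the bit-space mechanics that BBA inherits from BA and BPSO look roughly like this sketch; the inertia weight w is the quantity IBBA varies dynamically, and the paper's neighborhood search around the best solution is omitted (all names are illustrative):

```python
import math, random

def bba_step(position, velocity, best, w=0.9, fmin=0.0, fmax=1.0):
    """One bat update: BA-style velocity update followed by a sigmoid
    transfer that maps real-valued velocity to a bit-flip probability.
    IBBA decreases the inertia weight w dynamically over iterations."""
    freq = fmin + (fmax - fmin) * random.random()    # random pulse frequency
    for i in range(len(position)):
        velocity[i] = w * velocity[i] + (position[i] - best[i]) * freq
        prob = 1.0 / (1.0 + math.exp(-velocity[i]))  # sigmoid transfer
        position[i] = 1 if random.random() < prob else 0
    return position, velocity
```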
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE, and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvements on the success plots of OPE, TRE, and SRE, respectively. It should be emphasized that, owing to the high computational efficiency of the color model and the object detection model, which use efficient data structures, and the speed advantage of correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
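At scoring time, the complementary-model idea amounts to fusing per-pixel response maps before locating the target; a minimal sketch (the weights and function name are illustrative, not the paper's values):

```python
import numpy as np

def fuse_responses(cf_response, color_response, contour_response,
                   weights=(0.5, 0.3, 0.2)):
    """Merge response maps from complementary models (correlation
    filter, color histogram, contour) into one score map; the target
    is located at the maximum of the fused map."""
    w1, w2, w3 = weights
    fused = w1 * cf_response + w2 * color_response + w3 * contour_response
    peak = np.unravel_index(np.argmax(fused), fused.shape)  # (row, col)
    return fused, peak
```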
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as interval estimates of the function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
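Generating a function value and gradient from one expression description is the job of forward-mode automatic differentiation; a minimal single-variable sketch of the idea (the actual library is C++ and additionally produces interval bounds, which this sketch omits):

```python
class Dual:
    """Forward-mode AD: carries value and derivative together, so one
    evaluation of an expression yields both f(x) and f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x + 1     # a single expression description

y = f(Dual(2.0, 1.0))            # seed derivative dx/dx = 1
print(y.val, y.dot)              # 11.0 and f'(2) = 2*2 + 3 = 7.0
```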
Benchmarking optimization software with COPS 3.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E. D.; More, J. J.; Munson, T. S.
2004-05-24
The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
Simon, T-P; Schuerholz, T; Haugvik, S P; Forberger, C; Burmeister, M-A; Marx, G
2013-01-01
There is evidence suggesting that early fluid resuscitation is beneficial in the treatment of sepsis. We previously demonstrated that hydroxyethyl starch (HES) 130/0.42 attenuated capillary leakage better than HES 200/0.5. Using a similar porcine fecal sepsis model, we tested the effects of two new synthetic high-molecular-weight (700 kDa) hydroxyethyl starches, with the same molar substitution of 0.42 but different C2/C6 ratios, compared to 6% HES 130/0.42, on plasma volume (PV) and on systemic and tissue oxygenation. This was a prospective, randomized, controlled animal study. Twenty-five anesthetized and mechanically ventilated pigs (28.4±2.3 kg) were observed over 8 h. Septic shock was induced with fecal peritonitis. Animals were randomized for volume-replacement therapy with HES 700/0.42 C2/C6/2.5:1 (N.=5), HES 700/0.42 C2/C6/6:1 (N.=5), HES 130/0.42 C2/C6/5:1 (N.=5) or Ringer’s solution (RS, N.=5), and compared to non-septic controls receiving RS (N.=5). The albumin escape rate (AER) was calculated and plasma volume was determined at the end of the study. Tissue oxygen saturation was measured with the InSpectra™ device (InSpectra Tissue Spectrometer, Hutchinson Technology Inc., Hutchinson, MN, USA). The AER increased in all groups compared to control. All colloids (HES 700/6:1 68±15; HES 130 67±4; HES 700/2.5:1 71±12; P<0.05) but not RS (44±7) stabilized PV (mL/kg BW) after eight hours of sepsis. Systemic oxygenation was significantly lower in the RS group (44±17%; P<0.05) compared to all other groups at study end. In this porcine fecal peritonitis model, the high-molecular-weight artificial colloids HES 700/2.5:1 and HES 700/6:1 were not more effective in maintaining plasma volume or systemic and tissue oxygenation than HES 130. In comparison to crystalloid RS, all HES solutions were more effective at maintaining plasma volume, mean arterial pressure (MAP), and systemic and tissue oxygenation.
Ferrari, D; Lichtler, A C; Pan, Z Z; Dealy, C N; Upholt, W B; Kosher, R A
1998-05-01
During early stages of chick limb development, the homeobox-containing gene Msx-2 is expressed in the mesoderm at the anterior margin of the limb bud and in a discrete group of mesodermal cells at the midproximal posterior margin. These domains of Msx-2 expression roughly demarcate the anterior and posterior boundaries of the progress zone, the highly proliferating posterior mesodermal cells underneath the apical ectodermal ridge (AER) that give rise to the skeletal elements of the limb and associated structures. Later in development as the AER loses its activity, Msx-2 expression expands into the distal mesoderm and subsequently into the interdigital mesenchyme which demarcates the developing digits. The domains of Msx-2 expression exhibit considerably less proliferation than the cells of the progress zone and also encompass several regions of programmed cell death including the anterior and posterior necrotic zones and interdigital mesenchyme. We have thus suggested that Msx-2 may be in a regulatory network that delimits the progress zone by suppressing the morphogenesis of the regions of the limb mesoderm in which it is highly expressed. In the present study we show that ectopic expression of Msx-2 via a retroviral expression vector in the posterior mesoderm of the progress zone from the time of initial formation of the limb bud severely impairs limb morphogenesis. Msx-2-infected limbs are typically very narrow along the anteroposterior axis, are occasionally truncated, and exhibit alterations in the pattern of formation of skeletal elements, indicating that as a consequence of ectopic Msx-2 expression the morphogenesis of large portions of the posterior mesoderm has been suppressed. We further show that Msx-2 impairs limb morphogenesis by reducing cell proliferation and promoting apoptosis in the regions of the posterior mesoderm in which it is ectopically expressed. The domains of ectopic Msx-2 expression in the posterior mesoderm also exhibit ectopic expression of BMP-4, a secreted signaling molecule that is coexpressed with Msx-2 during normal limb development in the anterior limb mesoderm, the posterior necrotic zone, and interdigital mesenchyme. This indicates that Msx-2 regulates BMP-4 expression and that the suppressive effects of Msx-2 on limb morphogenesis might be mediated in part by BMP-4. These studies indicate that during normal limb development Msx-2 is a key component of a regulatory network that delimits the boundaries of the progress zone by suppressing the morphogenesis of the regions of the limb mesoderm in which it is highly expressed, thus restricting the outgrowth and formation of skeletal elements and associated structures to the progress zone. We also report that rather large numbers of apoptotic cells as well as proliferating cells are present throughout the AER during all stages of normal limb development we have examined, indicating that many of the cells of the AER are continuously undergoing programmed cell death at the same time that new AER cells are being generated by cell proliferation. Thus, a balance between cell proliferation and programmed cell death may play a very important role in maintaining the activity of the AER. Copyright 1998 Academic Press.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wavelengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.
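To make the contrast concrete, a depth-averaged NSWE solver advances surface elevation and depth-averaged flux with finite differences; below is a minimal 1D linear long-wave analogue (flat bathymetry and all parameters are illustrative; production codes such as NAMI DANCE handle 2D nonlinear terms, friction, and wetting/drying):

```python
import numpy as np

g, d = 9.81, 100.0           # gravity (m/s^2), constant water depth (m)
dx, dt = 1000.0, 5.0         # grid spacing (m), time step (s)
assert np.sqrt(g * d) * dt / dx < 1.0   # CFL condition for long waves

x = np.arange(200) * dx
eta = np.exp(-((x - 100e3) / 20e3) ** 2)  # initial Gaussian hump (m)
M = np.zeros(201)            # depth-averaged flux on a staggered grid

for _ in range(500):
    # momentum:   dM/dt   = -g*d * d(eta)/dx
    # continuity: d(eta)/dt = -dM/dx
    M[1:-1] -= g * d * dt / dx * np.diff(eta)
    eta -= dt / dx * np.diff(M)
```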
Test One to Test Many: A Unified Approach to Quantum Benchmarks
NASA Astrophysics Data System (ADS)
Bai, Ge; Chiribella, Giulio
2018-04-01
Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.
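For reference, the quantity at issue is the average fidelity of the tested channel over the coherent-state ensemble; in the standard formulation (our notation, not quoted from the paper), the best classical measure-and-prepare strategy attains the 50% figure for a flat prior over phase space:

```latex
\bar{F} \;=\; \int \mathrm{d}^2\alpha \, p(\alpha)\,
\langle \alpha |\, \mathcal{E}\bigl(|\alpha\rangle\langle\alpha|\bigr)\, |\alpha \rangle ,
\qquad
\bar{F}_{\text{classical}} \;\le\; \tfrac{1}{2}
\quad \text{for a flat prior } p(\alpha).
```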
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
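The proposed figure of merit adds a cost per solver call to the best objective found, so benchmarking reduces to choosing the number of calls that minimizes an expected total cost. A toy Monte Carlo sketch under stated assumptions (the stand-in solver and cost value are ours, not the paper's):

```python
import random

def expected_total_cost(solver, cost_per_call, n_calls, trials=2000):
    """Monte Carlo estimate of E[best objective after n_calls] plus
    cost_per_call * n_calls: an expected-total-cost figure of merit
    for a randomized (probabilistic) solver."""
    total = 0.0
    for _ in range(trials):
        best = min(solver() for _ in range(n_calls))
        total += best + cost_per_call * n_calls
    return total / trials

toy_solver = lambda: random.random()   # stand-in randomized optimizer
costs = [(n, expected_total_cost(toy_solver, 0.02, n)) for n in range(1, 30)]
print(min(costs, key=lambda c: c[1]))  # optimal stopping point, roughly n = 6
```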
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
FY 2002 Customer Satisfaction & Top 200 Users Survey Composite Report
2002-11-01
Federal Government benchmark figures: 68.6% and 71.1%; DTIC exceeds these by +8.4 and +11 points, respectively. The American Customer Satisfaction Index (ACSI), the official service quality benchmark for the Federal Government, is currently 71.1%. [Figure 1: FY 2002 comparison of customer satisfaction and customer care.]
Solutions of the benchmark problems by the dispersion-relation-preserving scheme
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Shen, H.; Kurbatskii, K. A.; Auriault, L.
1995-01-01
The 7-point-stencil Dispersion-Relation-Preserving (DRP) scheme of Tam and Webb is used to solve all six categories of the CAA benchmark problems. The purpose is to show that the scheme is capable of solving linear as well as nonlinear aeroacoustics problems accurately. Nonlinearities inevitably lead to the generation of spurious short-wavelength numerical waves, which can overwhelm the entire numerical solution. In this work, the spurious waves are removed by adding artificial selective damping terms to the discretized equations. Category 3 problems test radiation and outflow boundary conditions. In solving these problems, the radiation and outflow boundary conditions of Tam and Webb are used; these conditions are derived from the asymptotic solutions of the linearized Euler equations. Category 4 problems involve solid walls. Here, the wall boundary conditions for high-order schemes of Tam and Dong are employed. These conditions require one ghost value per boundary point per physical boundary condition. In the second problem of this category, the governing equations, when written in cylindrical coordinates, are singular along the axis of the radial coordinate. The proper boundary conditions at the axis are derived by applying the limiting process r → 0 to the governing equations. The Category 5 problem deals with the numerical noise issue. In the present approach, the time-independent mean flow solution is computed first. Once the residual drops to the machine noise level, the incident sound wave is turned on gradually. The solution is marched in time until a time-periodic state is reached. No exact solution is known for the Category 6 problem. Because of this, the problem is formulated in two totally different ways, first as a scattering problem and then as a direct simulation problem. There is good agreement between the two numerical solutions, which offers confidence in the computed results. Both formulations are solved as initial value problems. As such, no Kutta condition is required at the trailing edge of the airfoil.
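The heart of the scheme is a centered 7-point stencil whose antisymmetric coefficients are optimized for the dispersion relation rather than for maximal formal order. A sketch of such a derivative operator on a periodic grid, using the coefficient values commonly quoted for the Tam-Webb scheme (verify against the original paper before relying on them):

```python
import numpy as np

# Antisymmetric 7-point DRP coefficients (a_{-j} = -a_j), as commonly
# quoted for the Tam-Webb scheme; check the original paper before use.
a = [0.770882380518, -0.166705904415, 0.020843142770]

def drp_derivative(u, dx):
    """du/dx on a periodic grid via the optimized 7-point stencil."""
    du = np.zeros_like(u)
    for j, aj in enumerate(a, start=1):
        du += aj * (np.roll(u, -j) - np.roll(u, j))
    return du / dx

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
err = drp_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)
print(np.max(np.abs(err)))   # small dispersion error for resolved waves
```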
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence this study focuses on that application. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which is reported in the ensuing chapters of this dissertation. An important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions with analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
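For readers unfamiliar with the baseline, a standard global-best PSO iteration on a benchmark function looks like the following sketch; the dissertation's novel modifications are not reproduced here:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal global-best particle swarm optimizer (standard form)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-bound, bound, (n, dim))   # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x += v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

sphere = lambda z: float(np.sum(z * z))        # a standard benchmark function
print(pso(sphere, dim=5))                      # converges toward the origin
```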
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures in which the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, we usually do not know where an effective benchmark is located, even though we can determine its dynamic behavior rather well. This poses several significant problems. First, we cannot describe the quality of the measurements unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the software used, and are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR geodesy. These problems make it all but impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges involved in moving towards InSAR as a geodetically-proof product. In particular, this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
A Comparative Study of Simulated and Measured Gear-Flap Flow Interaction
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Mineck, Raymond E.; Yao, Chungsheng; Jenkins, Luther N.; Fares, Ehab
2015-01-01
The ability of two CFD solvers to accurately characterize the transient, complex, interacting flowfield associated with a realistic gear-flap configuration is assessed via comparison of simulated flow with experimental measurements. The simulated results, obtained with NASA's FUN3D and Exa's PowerFLOW® for a high-fidelity, 18% scale semi-span model of a Gulfstream aircraft in landing configuration (39 deg flap deflection, main landing gear on and off) are compared to two-dimensional and stereo particle image velocimetry measurements taken within the gear-flap flow interaction region during wind tunnel tests of the model. As part of the benchmarking process, direct comparisons of the mean and fluctuating velocity fields are presented in the form of planar contour plots and extracted line profiles at measurement planes in various orientations stationed in the main gear wake. The measurement planes in the vicinity of the flap side edge and downstream of the flap trailing edge are used to highlight the effects of gear presence on tip vortex development and the ability of the computational tools to accurately capture such effects. The present study indicates that both computed datasets contain enough detail to construct a relatively accurate depiction of gear-flap flow interaction. Such a finding increases confidence in using the simulated volumetric flow solutions to examine the behavior of pertinent aerodynamic mechanisms within the gear-flap interaction zone.
How Much Debt Is Too Much? Defining Benchmarks for Manageable Student Debt
ERIC Educational Resources Information Center
Baum, Sandy; Schwartz, Saul
2006-01-01
Many discussions of student loan repayment focus on those students for whom repayment is a problem and conclude that the reliance on debt to finance postsecondary education is excessive. However, from both a pragmatic perspective and a logical perspective, a more appropriate approach is to develop different benchmarks for students in different…
Rahman, Helina; Deka, Manab
2014-04-01
Urinary tract infections (UTI) are a serious health problem affecting millions of people each year. Although appreciable work on various aspects of UTI, including aetiology per se, has been done, information on emerging pathogens such as necrotoxigenic Escherichia coli (NTEC) is largely lacking in India. In the present study, E. coli isolates from patients with urinary tract infection from northeastern India were investigated for detection and characterization of NTEC. E. coli isolated and identified from urine samples of patients with UTI were serotyped. Antibiograms were determined by the disc diffusion test. Plasmid profiles were also determined. Virulence genes of NTEC (cnf1, cnf2, pap, aer, sfa, hly, afa) were detected by PCR assay. E. coli isolates carrying cnf gene(s) were identified as NTEC. A total of 550 E. coli were isolated and tested for the presence of cnf genes. Of these, 84 (15.27%) belonged to NTEC. The cnf1 gene was present in 52 (61.9%) isolates, cnf2 in 23 (27.4%), and 9 (10.7%) carried both cnf1 and cnf2 genes. All the NTEC strains were found to harbour the pap and aer genes. Serogroup O4 was found to be the most common among the 12 serogroups identified amongst the NTEC isolates. The majority of the isolates (96.4%) were sensitive to furazolidone and were highly resistant to ampicillin. NTEC were found to harbour different numbers of plasmids (1 to 7). No association was observed between the number of plasmids and the antibiotic resistance of the isolates. The results of the present study showed that about 15 per cent of E. coli isolates associated with UTI belonged to NTEC. More studies need to be done in other parts of the country.
McLawhorn, Melinda W; Goulding, Margie R; Gill, Rajdeep K; Michele, Theresa M
2013-01-01
To augment the December 2010 United States Food and Drug Administration (FDA) Drug Safety Communication on accidental ingestion of benzonatate in children less than 10 years old by summarizing data on emergency department visits, benzonatate exposure, and reports of benzonatate overdoses from several data sources. Retrospective review of adverse-event reports and drug utilization data of benzonatate. The FDA Adverse Event Reporting System (AERS) database (1969-2010), the National Electronic Injury Surveillance System-Cooperative Adverse Drug Event Surveillance Project (NEISS-CADES, 2004-2009), and the IMS commercial data vendor (2004-2009). Any patient who reported an adverse event with benzonatate captured in the AERS or NEISS-CADES database or received a prescription for benzonatate according to the IMS commercial data vendor. Postmarketing adverse events with benzonatate were collected from the AERS database, emergency department visits due to adverse events with benzonatate were collected from the NEISS-CADES database, and outpatient drug utilization data were collected from the IMS commercial data vendor. Of 31 overdose cases involving benzonatate reported in the AERS database, 20 had a fatal outcome, and five of these fatalities occurred from accidental ingestions in children 2 years of age and younger. The NEISS-CADES database captured emergency department visits involving 12 cases of overdose from accidental benzonatate ingestions in children aged 1-3 years. Signs and symptoms of overdose included seizures, cardiac arrest, coma, brain edema or anoxic encephalopathy, apnea, tachycardia, and respiratory arrest and occurred in some patients within 15 minutes of ingestion. Dispensed benzonatate prescriptions increased by approximately 52% from 2004 to 2009. Although benzonatate has a long history of safe use, accumulating cases of fatal overdose, especially in children, prompted the FDA to notify health care professionals about the risks of benzonatate overdose. Pharmacists may have a role in preventing benzonatate overdoses by counseling patients on signs and symptoms of benzonatate overdose, the need for immediate medical care, and safe storage and disposal of benzonatate. © 2013 Pharmacotherapy Publications, Inc.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm based on the imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to strike a balance between the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
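The SA ingredient referred to here is, at bottom, the Metropolis acceptance rule, which admits occasional uphill moves so the search can escape local optima; a minimal sketch of that rule in isolation (the ICA/HS machinery is omitted):

```python
import math, random

def sa_accept(cost_new, cost_old, temperature):
    """Metropolis criterion: always accept improvements, and accept
    worse candidates with probability exp(-delta/T), so the search
    explores while T is high and turns greedy as T -> 0."""
    delta = cost_new - cost_old
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

# The temperature is typically reduced geometrically, e.g. T <- 0.95 * T,
# shifting the balance from exploration toward exploitation over time.
```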
A Bayesian approach to traffic light detection and mapping
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, Siavash; Yilmaz, Alper
2017-03-01
Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, images of traffic lights may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape, and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and shown to outperform earlier studies. The results show that the precision and recall rates for the KITTI benchmark are 95.78% and 92.95%, respectively, and the precision and recall rates for the LARA benchmark are 98.66% and 94.65%.
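Schematically, Bayesian fusion of independent cues multiplies per-cue likelihoods into the prior and renormalizes; a toy sketch (the cue models and numbers are illustrative, not the paper's):

```python
def posterior(prior, likelihoods):
    """Naive-Bayes style fusion over candidate labels: multiply the
    prior by per-cue likelihoods (color, shape, height, ...) and
    renormalize. Real cue models are application-specific."""
    post = dict(prior)
    for lik in likelihoods:              # one dict per cue
        for c in post:
            post[c] *= lik[c]
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

cues = [{"light": 0.9, "not": 0.2},      # color cue (illustrative numbers)
        {"light": 0.7, "not": 0.4}]      # shape cue
print(posterior({"light": 0.1, "not": 0.9}, cues))  # posterior over labels
```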
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
Novel probabilistic neuroclassifier
NASA Astrophysics Data System (ADS)
Hong, Jiang; Serpen, Gursel
2003-09-01
A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
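The "potential function" idea can be made concrete with the classical Parzen-window probabilistic neural network, in which each training point radiates a Gaussian potential and multi-modal classes with disjoint clusters are handled naturally; this sketch is that classical baseline, not the paper's novel algorithm:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Parzen-window potential-function classifier: each training point
    contributes a Gaussian potential; the class whose summed potential
    at x is largest wins. No unimodal class shape is assumed."""
    scores = {}
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

X = np.array([[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [1, 1]])
y = np.array([0, 0, 0, 0, 1])            # class 0 is bimodal (two clusters)
print(pnn_classify(np.array([4.8, 5.2]), X, y))   # -> 0
```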
Synthesis and Antibacterial Evaluation of Novel 3-Substituted Ocotillol-Type Derivatives as Leads.
Bi, Yi; Liu, Xian-Xuan; Zhang, Heng-Yuan; Yang, Xiao; Liu, Ze-Yun; Lu, Jing; Lewis, Peter John; Wang, Chong-Zhi; Xu, Jin-Yi; Meng, Qing-Guo; Ma, Cong; Yuan, Chun-Su
2017-04-07
Due to rapidly growing bacterial antibiotic resistance and the scarcity of novel agents in development, bacterial infection is still a global problem. Therefore, new types of antibacterial agents, which are effective both alone and in combination with traditional antibiotics, are urgently needed. In this paper, a series of antibacterial ocotillol-type C-24 epimers modified from natural 20(S)-protopanaxadiol were synthesized and evaluated for their antibacterial activity. According to the screening results against Gram-positive bacteria (B. subtilis 168 and MRSA USA300) and Gram-negative bacteria (P. aeruginosa PAO1 and A. baumannii ATCC 19606) in vitro, the derivatives exhibited good antibacterial activity, particularly against Gram-positive bacteria, with minimum inhibitory concentration (MIC) values of 2-16 µg/mL. A subsequent synergistic antibacterial assay showed that derivatives 5c and 6c enhanced the susceptibility of B. subtilis 168 and MRSA USA300 to chloramphenicol (CHL) and kanamycin (KAN) (FICI < 0.5). Our data showed that ocotillol-type derivatives with long-chain amino acid substituents at C-3 are good leads against the antibiotic-resistant pathogen MRSA USA300 and could enable KAN and CHL to exhibit antibacterial activity at much lower concentrations with reduced toxicity.
High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.
Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung
2018-05-04
Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization produced by an HRSOM provides new insights into these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
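For context, a plain SOM training loop looks like the following sketch; the HRSOM differs mainly in using a much larger map, which modern hardware makes tractable (the grid size and decay schedules here are illustrative):

```python
import numpy as np

def train_som(data, grid=(50, 50), iters=5000, lr0=0.5, sigma0=10.0):
    """Plain SOM training: repeatedly pick a sample, find its best
    matching unit, and pull that unit's Gaussian neighborhood toward
    the sample while learning rate and neighborhood shrink."""
    rng = np.random.default_rng(0)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))        # codebook vectors
    ys, xs = np.indices(grid)
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)            # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)      # shrinking neighborhood
        v = data[rng.integers(len(data))]
        d = np.sum((W - v) ** 2, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)   # best matching unit
        nb = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        W += lr * nb[..., None] * (v - W)
    return W
```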
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks, while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that, even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
2013-01-01
Background Vitamin D receptor activators reduce albuminuria, and may improve survival in chronic kidney disease (CKD). Animal studies suggest that these pleiotropic effects of vitamin D may be mediated by suppression of renin. However, randomized trials in humans have yet to establish this relationship. Methods In a randomized, placebo-controlled, double-blinded crossover study, the effect of oral paricalcitol (2 μg/day) was investigated in 26 patients with non-diabetic, albuminuric stage III-IV CKD. After treatment, plasma concentrations of renin (PRC), angiotensin II (AngII) and aldosterone (Aldo) were measured. GFR was determined by 51Cr-EDTA clearance. Assessment of renal NO dependency was performed by infusion of NG-monomethyl-L-arginine (L-NMMA). Albumin excretion rate (AER) was analyzed in 24-h urine and during 51Cr-EDTA clearance. Results Paricalcitol did not alter plasma levels of renin, AngII, Aldo, or urinary excretion of sodium and potassium. A modest reduction of borderline significance was observed in AER, and paricalcitol abrogated the albuminuric response to L-NMMA. Conclusions In this randomized, placebo-controlled trial paricalcitol only marginally decreased AER and did not alter circulating levels of renin, AngII or Aldo. The abrogation of the rise in albumin excretion by paricalcitol during NOS blockade may indicate that favourable modulation of renal NO dependency could be involved in mediating reno-protection and survival benefits in CKD. Trial registration ClinicalTrials.gov identifier: NCT01136564 PMID:23889806
Vågstrand, Karin; Lindroos, Anna Karin; Linné, Yvonne
2009-02-01
To describe the differences in socio-economic characteristics and body measurements between low, adequate and high energy reporting (LER, AER and HER) teenagers, and to investigate the relationship to misreporting mothers. Cross-sectional study. Habitual dietary intake was reported in a questionnaire. Classification into LER, AER and HER used the Goldberg equation within three activity groups based on a physical activity questionnaire and calculated BMR. Stockholm, Sweden. Four hundred and forty-one 16-17-year-old teenagers (57% girls) and their mothers. Of the teenagers, 17-19% were classified as HER, while 13-16% were classified as LER. There was a highly significant trend from HER to LER in BMI (P < 0.001) and body fat % (P < 0.001). There were also trends in the number of working hours of the mother (P = 0.01), family income (P = 0.008) and number of siblings (among boys only) (P = 0.02), but not in the educational level of either parent. HER teenagers were lean, had mothers working fewer hours with lower income, and had siblings. An LER girl was more likely to have an LER mother than an AER mother (OR = 3.32; P = 0.002). The reasons for the high number of over-reporters could be many: misclassification due to growth, the lack of an established eating pattern due to young age, or method-specific effects. Nevertheless, the inverted characteristics of HER compared with LER subjects indicate that this is a distinct group, worth further investigation.
Benchmarks for health expenditures, services and outcomes in Africa during the 1990s.
Peters, D. H.; Elmendorf, A. E.; Kandola, K.; Chellaraj, G.
2000-01-01
There is limited information on national health expenditures, services, and outcomes in African countries during the 1990s. We intend to make statistical information available for national level comparisons. National level data were collected from numerous international databases, and supplemented by national household surveys and World Bank expenditure reviews. The results were tabulated and analysed in an exploratory fashion to provide benchmarks for groupings of African countries and individual country comparison. There is wide variation in scale and outcome of health care spending between African countries, with poorer countries tending to do worse than wealthier ones. From 1990-96, the median annual per capita government expenditure on health was nearly US$ 6, but averaged US$ 3 in the lowest-income countries, compared to US$ 72 in middle-income countries. Similar trends were found for health services and outcomes. Results from individual countries (particularly Ethiopia, Ghana, Côte d'Ivoire and Gabon) are used to indicate how the data can be used to identify areas of improvement in health system performance. Serious gaps in data, particularly concerning private sector delivery and financing, health service utilization, equity and efficiency measures, hinder more effective health management. Nonetheless, the data are useful for providing benchmarks for performance and for crudely identifying problem areas in health systems for individual countries. PMID:10916913
Hospital-affiliated practices reduce 'red ink'.
Bohlmann, R C
1998-01-01
Many complain that hospital-group practice affiliations are a failed model and should be abandoned. The author argues for a less rash approach, saying the goal should be to understand the problems precisely, then fix them. Benchmarking is a good place to start. The article outlines the basic definition and ground rules of benchmarking and explains what resources help accomplish the task.
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
NASA Astrophysics Data System (ADS)
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able to reproduce behaviour from established and widely used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
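The Jacobian-free Newton-Krylov approach mentioned here exploits the fact that Krylov solvers need only Jacobian-vector products, which can be approximated by differencing the nonlinear residual, so the Jacobian is never assembled. A minimal sketch (toy residual and step size; codes like MUSIC add the physics-based preconditioning described above):

```python
import numpy as np

def jfnk_matvec(residual, u, v, eps=1e-7):
    """Approximate J(u) @ v without forming J, via a finite difference
    of the nonlinear residual: the core trick of Jacobian-free
    Newton-Krylov methods."""
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(v)
    h = eps * max(1.0, np.linalg.norm(u)) / norm_v
    return (residual(u + h * v) - residual(u)) / h

# Toy residual F(u) = u**2 - 2 (componentwise), so J(u) = diag(2u).
F = lambda u: u**2 - 2.0
u = np.array([1.5, 2.0])
print(jfnk_matvec(F, u, np.array([1.0, 0.0])))  # ~ [3.0, 0.0]
```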
Enhanced Verification Test Suite for Physics Simulation Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, J R; Brock, J S; Brandon, S T
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem
NASA Astrophysics Data System (ADS)
Auteri, F.; Quartapelle, L.; Vigevano, L.
2002-08-01
This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
NASA Astrophysics Data System (ADS)
Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel
2014-03-01
We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
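The core subroutine of Selby-style heuristics is exact: the Ising energy restricted to a tree can be minimized by min-sum dynamic programming from the leaves to the root. A minimal sketch on a four-spin tree follows; the couplings, fields, and rooting are illustrative, not taken from the D-Wave Two problem graph.

```python
# Exact ground state of an Ising model on a tree by min-sum dynamic programming.
J = {(0, 1): 1.0, (1, 2): -0.5, (1, 3): 0.7}   # tree edges with couplings
h = {0: 0.1, 1: -0.2, 2: 0.0, 3: 0.3}           # local fields
children = {0: [], 2: [], 3: [], 1: [0, 2, 3]}  # tree rooted at node 1
parent = {0: 1, 2: 1, 3: 1, 1: None}
root = 1

def coupling(i, j):
    return J.get((i, j), J.get((j, i)))

msg = {}     # msg[(node, s_par)] = best energy of node's subtree given parent spin
choice = {}  # optimal spin of node for each parent spin, for backtracking

def solve(node):
    for c in children[node]:
        solve(c)
    # With all children solved, compute this node's message to its parent.
    for s_par in (-1, +1):
        best, best_s = None, None
        for s in (-1, +1):
            e = h[node] * s + sum(msg[(c, s)] for c in children[node])
            if node != root:
                e += coupling(node, parent[node]) * s * s_par
            if best is None or e < best:
                best, best_s = e, s
        msg[(node, s_par)] = best
        choice[(node, s_par)] = best_s

solve(root)

# Backtrack: the root's "parent spin" is a dummy; both entries agree at the root.
spins = {root: choice[(root, +1)]}
stack = [root]
while stack:
    n = stack.pop()
    for c in children[n]:
        spins[c] = choice[(c, spins[n])]
        stack.append(c)
print(spins, msg[(root, +1)])  # ground-state spins and their energy
```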
(U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
2017-04-26
The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
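A standard cross-check for such derivative benchmarks, independent of the 2nd-ASAM machinery, is to verify analytic first and second derivatives against central finite differences. The sketch below uses a hypothetical smooth response R(p) in place of the actual uncollided-leakage expression, which is not reproduced here either.

```python
import numpy as np

# Hypothetical smooth response R(p); in the memo's setting this would be the
# uncollided total leakage as a function of, e.g., a cross section or radius.
def R(p):
    return np.exp(-2.0 * p) * (1.0 + p)

def dR_analytic(p):
    return np.exp(-2.0 * p) * (-2.0 * (1.0 + p) + 1.0)

def d2R_analytic(p):
    return np.exp(-2.0 * p) * (4.0 * p)

p, eps = 0.8, 1e-5
d1_fd = (R(p + eps) - R(p - eps)) / (2 * eps)          # central difference
d2_fd = (R(p + eps) - 2 * R(p) + R(p - eps)) / eps**2  # second central difference

print(abs(d1_fd - dR_analytic(p)))   # should be ~eps**2
print(abs(d2_fd - d2R_analytic(p)))
```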
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, μ-synthesis, and mixed H2/μ methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, μ-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/μ compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the μ designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step...below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR preconditioned kNN achieved a 10% improvement over...the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
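Although CCDR itself is not a standard library component, the reported experiment has the familiar shape of a dimension-reduction preconditioner in front of a kNN classifier. The sketch below substitutes PCA for CCDR purely as a stand-in; the data and any accuracy figures are synthetic, not those of the report.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
# PCA stands in for CCDR as a generic dimension-reduction preconditioner.
preconditioned = make_pipeline(PCA(n_components=8),
                               KNeighborsClassifier(n_neighbors=5)).fit(X_tr, y_tr)

print("kNN alone:        ", baseline.score(X_te, y_te))
print("reduced-dim + kNN:", preconditioned.score(X_te, y_te))
```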
Neocolonialism and Health Care Access among Marshall Islanders in the United States.
Duke, Michael R
2017-09-01
In the Marshall Islands, a history of extensive nuclear weapons testing and covert biomedical research, coupled with the U.S.'s ongoing military presence in the country, has severely compromised the health of the local population. Despite the U.S.'s culpability in producing ill health along with high rates of emigration from the islands to the mainland United States, the large portion of Marshallese who reside in the United States face substantial barriers to accessing health care. Drawing from ongoing field research with a Marshallese community in Arkansas, this article explores the multifaceted impediments that U.S.-based Marshall Islanders face in receiving medical treatment. Calling on an expansive and inclusive notion of neocolonialism, I argue that Marshallese structural vulnerability with regard to health and health care treatment derives from their status as neocolonial subjects and from their limited claims to health-related deservingness associated with this status. [Marshall Islanders, health care access, neocolonialism, radiation exposure, immigrant health] L̗ōmn̗ak ko rōttin̗o: Ilo M̗ajel̗, juon bwebwenato kōn kōmmālmel im nuclear baam̗ ko im ekkatak ko rōttin̗o̗ kōn wāwein an baijin ko jelōt armej, barāinwōt an to an ri tarinae ro an Amedka pād ilo aelōn̄ kein, em̗ōj an jelōt ājmour an armej ro ilo aelōn̄ kein. Men̄e alikkar bwe Amedka in ear jino nan̄inmej kein im ej un eo armej rein rej em̗m̗akūt jān āne kein āne er n̄an ioon Amedka, elōn̄ iaan ri M̗ajel̗ rein rej jelm̗ae elōn̄ apan̄ ko n̄an aer del̗o̗n̄e jikin ājmour ko. Jān ekkatak eo ej bōk jikin kiō, jerbal in ej etali kabōjrak rak kein rōlōn̄ im armej in M̗ajel̗ ro ioon Amedka in rej jelm̗ae ilo aer jibadōk lo̗k jikin taktō. Ilo an kar Amedka jibadōk juon jea eo eutiej imejān lal̗ in, ij kwal̗ok juon aō akweelel bwe apan̄ ko an armej in M̗ajel̗ ikijjeen ājmour im jikin taktō ej itok jān aer kar ri kōm̗akoko ilo an kar Amedka lelōn̄ l̗o̗k etan ilo mejān lal̗ im jān aer jab pukot jipan kein ej aer bwe kōn jōkjōk in. © 2017 by the American Anthropological Association.
Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun
1997-01-01
Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
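The points-per-wavelength rule can be made concrete by comparing a stencil's effective wavenumber with the exact one. The sketch below does this for the standard 4th-order central difference, whose effective wavenumber has the closed form shown in the comment; the optimized DRP coefficients themselves are deliberately omitted rather than quoted from memory.

```python
import numpy as np

# Effective (numerical) wavenumber of the 4th-order central first difference:
#   k* dx = (8 sin(k dx) - sin(2 k dx)) / 6,
# obtained by applying the stencil to exp(i k x). DRP schemes optimize the
# stencil coefficients to reduce this dispersion error at coarser resolution.
for ppw in (4, 8, 16):                  # mesh points per wavelength
    kdx = 2 * np.pi / ppw
    kstar_dx = (8 * np.sin(kdx) - np.sin(2 * kdx)) / 6
    print(ppw, abs(kstar_dx - kdx) / kdx)   # relative dispersion error
```

At 8 points per wavelength the relative dispersion error of this ordinary stencil is already near 1%, which is consistent with the resolution guideline quoted in the abstract.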
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
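The restart mechanism described for the simulated annealing variant, resetting the temperature while keeping the incumbent solution, is easy to sketch on a toy objective. The problem, cooling schedule, and parameters below are illustrative, not those of the RMALB/S study.

```python
import math
import random

random.seed(0)

# Toy objective: minimize the number of adjacent equal bits in a binary string.
def cost(x):
    return sum(a == b for a, b in zip(x, x[1:]))

def neighbor(x):
    y = x[:]
    y[random.randrange(len(y))] ^= 1   # flip one random bit
    return y

def restarted_sa(n=30, t0=2.0, alpha=0.95, restarts=5, steps=400):
    best = [random.randint(0, 1) for _ in range(n)]
    for _ in range(restarts):
        x, t = best[:], t0             # restart: reset temperature, keep incumbent
        for _ in range(steps):
            y = neighbor(x)
            d = cost(y) - cost(x)
            if d <= 0 or random.random() < math.exp(-d / t):
                x = y
            t *= alpha                 # geometric cooling
        if cost(x) < cost(best):
            best = x
    return best

x = restarted_sa()
print(cost(x), "".join(map(str, x)))   # optimum is 0 (alternating bits)
```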
The rotating movement of three immiscible fluids - A benchmark problem
Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.
2004-01-01
A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.
2018-01-01
Selective digestive decontamination (SDD, topical antibiotic regimens applied to the respiratory tract) appears effective for preventing ventilator associated pneumonia (VAP) in intensive care unit (ICU) patients. However, potential contextual effects of SDD on Staphylococcus aureus infections in the ICU remain unclear. The S. aureus ventilator associated pneumonia (S. aureus VAP), VAP overall and S. aureus bacteremia incidences within component (control and intervention) groups within 27 SDD studies were benchmarked against 115 observational groups. Component groups from 66 studies of various interventions other than SDD provided additional points of reference. In 27 SDD study control groups, the mean S. aureus VAP incidence is 9.6% (95% CI; 6.9–13.2) versus a benchmark derived from 115 observational groups being 4.8% (95% CI; 4.2–5.6). In nine SDD study control groups the mean S. aureus bacteremia incidence is 3.8% (95% CI; 2.1–5.7) versus a benchmark derived from 10 observational groups being 2.1% (95% CI; 1.1–4.1). The incidences of S. aureus VAP and S. aureus bacteremia within the control groups of SDD studies are each higher than literature derived benchmarks. Paradoxically, within the SDD intervention groups, the incidences of both S. aureus VAP and VAP overall are more similar to the benchmarks. PMID:29300363
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations could lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. These situations could produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour, 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) is used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
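Triggering fingers with random hydraulic conductivity fields typically starts from a log-normal field. A minimal sketch, assuming a spatially uncorrelated field (real studies would impose a correlation structure, e.g. via spectral or turning-bands methods):

```python
import numpy as np

rng = np.random.default_rng(42)

# Spatially uncorrelated log-normal hydraulic conductivity on a 2-D grid;
# the geometric mean K_g and variance of ln K are illustrative values only.
K_g, sigma_lnK = 1e-5, 1.0                 # m/s, and std of ln K
lnK = np.log(K_g) + sigma_lnK * rng.standard_normal((100, 50))
K = np.exp(lnK)
print(K.mean(), np.exp(lnK.mean()))        # arithmetic mean > geometric mean
```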
Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...
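The core benchmark dose (BMD) computation is to fit a dose-response model and invert it at a benchmark response (BMR). The sketch below is a generic illustration with a Hill-type model, made-up data, and a 10% added-risk BMR; it is not the EPA BMDS implementation.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def hill(d, background, vmax, ec50, n):
    # Hill-type dose-response model (an illustrative choice, not BMDS's suite).
    return background + vmax * d**n / (ec50**n + d**n)

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])    # made-up data
resp = np.array([0.02, 0.05, 0.10, 0.25, 0.42, 0.55])
popt, _ = curve_fit(hill, dose, resp, p0=[0.02, 0.6, 10.0, 1.0],
                    bounds=([0, 0, 1e-3, 0.1], [1, 1, 1e3, 10]))

bmr = 0.10   # benchmark response: 10% added risk over background
bmd = brentq(lambda d: hill(d, *popt) - (popt[0] + bmr), 1e-6, 100.0)
print(f"BMD10 ~ {bmd:.2f} dose units")
```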
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Signe K.; Purohit, Sumit; Boyd, Lauren W.
The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a web browser based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS Benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra K.; Sze, Nein Dak; Rodriguez, Jose M.; Heisey, Curtis
1991-01-01
The AER two-dimensional chemistry-transport model is used to study the effect on stratospheric ozone (O3) from operations of supersonic and subsonic aircraft. The study is based on six emission scenarios provided to AER. The study showed that: (1) the O3 response is dominated by the portion of the emitted nitrogen compounds that is entrained in the stratosphere; (2) the entrainment is a sensitive function of the altitude at which the material is injected; (3) the O3 removal efficiency of the emitted material depends on the concentrations of trace gases in the background atmosphere; and (4) evaluation of the impact of fleet operations in the future atmosphere must take into account the expected changes in trace gas concentrations from other activities. Areas for model improvements in future studies are also discussed.
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra; Rodriguez, Jose; Danilin, Michael; Scott, Courtney; Shia, Run-Lie; Eluszkiewicz, Junusz; Sze, Nien-Dak
1999-01-01
This is the final report. The overall objective of this project is to improve the understanding of coupling processes among atmospheric chemistry, aerosol and climate, all important for quantitative assessments of global change. Among our priorities are changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The work emphasizes two important aspects: (1) AER's continued participation in preparation of, and providing scientific input for, various scientific reports connected with assessment of stratospheric ozone and climate, including participation in various model intercomparison exercises as well as preparation of national and international reports; and (2) Continued development of the AER three-wave interactive model to address how the transport circulation will change as ozone and the thermal properties of the atmosphere change, and to assess how these new findings will affect our confidence in the ozone assessment results.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
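The N-out-of-M gates at the heart of this model have a closed-form evaluation for identical, independent components via the binomial distribution; the sketch below shows the idea with illustrative phase requirements and reliabilities. Note that this independence assumption is exactly what common-cause failure modeling (treated parametrically in the paper) corrects for.

```python
from math import comb

def k_of_n_reliability(n, k, p):
    """Probability that at least k of n identical, independent components work."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers only: each phase needs k of the 5 thruster assemblies,
# each with the given per-phase reliability; the mission needs every phase.
phase_reqs = [(5, 3, 0.95), (5, 4, 0.97), (5, 3, 0.99)]
p_mission = 1.0
for n, k, p in phase_reqs:
    p_mission *= k_of_n_reliability(n, k, p)
print(f"P(mission success) = {p_mission:.6f}")
```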
Standardised Benchmarking in the Quest for Orthologs
Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe
2016-01-01
The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
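The precision-recall trade-off the authors describe is computed per method from predicted versus reference ortholog pairs. A minimal sketch with placeholder gene identifiers:

```python
# Precision/recall of predicted ortholog pairs against a reference set;
# the pairs below are placeholders for real gene identifiers.
predicted = {("g1", "h1"), ("g2", "h2"), ("g3", "h9")}
reference = {("g1", "h1"), ("g2", "h2"), ("g4", "h4")}

tp = len(predicted & reference)
precision = tp / len(predicted)
recall = tp / len(reference)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```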
Algorithm and Architecture Independent Benchmarking with SEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.
2016-05-23
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Monitoring of endoscope reprocessing with an adenosine triphosphate (ATP) bioluminescence method.
Parohl, Nina; Stiefenhöfer, Doris; Heiligtag, Sabine; Reuter, Henning; Dopadlik, Dana; Mosel, Frank; Gerken, Guido; Dechêne, Alexander; Heintschel von Heinegg, Evelyn; Jochum, Christoph; Buer, Jan; Popp, Walter
2017-01-01
Background: Growing concerns over endoscope reprocessing quality suggest the need for ways to measure and control the reprocessing process. Aim: The goal of this study was to evaluate the feasibility of monitoring endoscope reprocessing with an adenosine triphosphate (ATP) based bioluminescence system. Methods: 60 samples of eight gastroscopes were assessed from routine clinical use in a major university hospital in Germany. Endoscopes were assessed with an ATP system and microbial cultures at different timepoints during reprocessing. Findings: After the bedside flush the mean ATP level in relative light units (RLU) was 19,437 RLU, after the manual cleaning 667 RLU, and after the automated endoscope reprocessor (AER) 227 RLU. After the manual cleaning the mean total viable count (TVC) per endoscope was 15.3 CFU/10 ml, and after the AER 5.7 CFU/10 ml. Our results show that some reprocessing cycles fail to clean a patient-used endoscope. Conclusion: Our data suggest that monitoring of flexible endoscopes with ATP can identify a number of different influence factors, such as the endoscope condition and the endoscopic procedure, and especially the quality of the bedside flush and manual cleaning before the AER. Increased process control is one way to identify and improve these influence factors and ultimately raise overall reprocessing quality, ideally using several complementary methods. ATP measurement seems to be a valid technique that allows an immediate repeat of the manual cleaning if the ATP results after manual cleaning exceed the established cutoff of 200 RLU.
[Virulence markers of Escherichia coli O1 strains].
Makarova, M A; Kaftyreva, L A; Grigor'eva, N S; Kicha, E V; Lipatova, L A
2011-01-01
To detect virulence genes in clinical isolates of Escherichia coli O1 using polymerase chain reaction (PCR). One hundred and twenty E. coli O1 strains isolated from faeces of patients with acute diarrhea (n = 45) and healthy persons (n = 75) were studied. PCR with primers for the rfb and fliC genes, which control synthesis of the O- and H-antigens respectively, was used. Fourteen virulence genes (pap, aaf, sfa, afa, eaeA, bfpA, ial, hly, cnf, stx1, stx2, lt, st, and aer) were detected by PCR primers. K1 antigen was determined with the Pastorex Meningo B/E. coli O1 kit (Bio-Rad). The rfb gene controlling O-antigen synthesis in serogroup O1, as well as the fliC gene controlling synthesis of H7, and the K1 antigen were detected in all strains. Thus all E. coli strains had the antigenic structure O1:K1:H-:F7. Virulence genes aaf, sfa, afa, eaeA, bfpA, ial, hly, cnf, stx1, stx2, lt, and st were not detected. All strains carried the pap and aer genes regardless of the presence of acute diarrhea symptoms. It was shown that E. coli O1:K1:H-:F7 strains do not have the virulence genes characteristic of diarrhea-causing Escherichia. Based on the presence of the pap and aer genes, they could be classified as uropathogenic Escherichia (UPEC) or avian-pathogenic Escherichia (APEC). It is necessary to detect virulence factors in order to determine E. coli as a cause of intestinal infection.
Benchmarking Problems Used in Second Year Level Organic Chemistry Instruction
ERIC Educational Resources Information Center
Raker, Jeffrey R.; Towns, Marcy H.
2010-01-01
Investigations of the problem types used in college-level general chemistry examinations have been reported in this Journal and were first reported in the "Journal of Chemical Education" in 1924. This study extends the findings from general chemistry to the problems of four college-level organic chemistry courses. Three problem…
NASA Astrophysics Data System (ADS)
Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet
2018-06-01
Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general purpose computational fluid dynamics software, which was developed by scientists who pioneered in the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.
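A typical ingredient of such statistical error analysis is a normalized root-mean-square error between a model time series and a benchmark or observed one. A minimal sketch with synthetic surface-elevation series (not data from either code):

```python
import numpy as np

def nrmse(model, benchmark):
    """Root-mean-square error normalized by the benchmark's dynamic range."""
    rmse = np.sqrt(np.mean((model - benchmark) ** 2))
    return rmse / (benchmark.max() - benchmark.min())

# Synthetic stand-ins for simulated and observed free-surface elevations.
t = np.linspace(0.0, 600.0, 601)
observed = np.exp(-t / 300.0) * np.sin(2 * np.pi * t / 120.0)
simulated = np.exp(-t / 310.0) * np.sin(2 * np.pi * t / 118.0)
print(f"NRMSE = {nrmse(simulated, observed):.3f}")
```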
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.
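A rough, simplified illustration of the frequency-energy idea: for a measured or simulated free decay, pairing the instantaneous frequency with the instantaneous energy traces an approximate backbone. The sketch below uses a Hilbert transform on a synthetic amplitude-dependent decay; the paper's FEPs are built from wavelet transforms, so this is an analogy, not the authors' method.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic decaying response whose frequency drops with its decaying envelope,
# mimicking a hardening nonlinearity at high energy.
env = np.exp(-0.3 * t)
freq = 5.0 + 2.0 * env**2                        # Hz, amplitude-dependent
x = env * np.cos(2 * np.pi * np.cumsum(freq) / fs)

analytic = hilbert(x)
amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# Pairing instantaneous frequency with instantaneous energy (~ amp**2)
# traces a backbone-like curve as the response decays.
print(inst_freq[1000], amp[1000] ** 2)
```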
Sprague, Brian L; Arao, Robert F; Miglioretti, Diana L; Henderson, Louise M; Buist, Diana S M; Onega, Tracy; Rauscher, Garth H; Lee, Janie M; Tosteson, Anna N A; Kerlikowske, Karla; Lehman, Constance D
2017-04-01
Purpose To establish contemporary performance benchmarks for diagnostic digital mammography with use of recent data from the Breast Cancer Surveillance Consortium (BCSC). Materials and Methods Institutional review board approval was obtained for active or passive consenting processes or to obtain a waiver of consent to enroll participants, link data, and perform analyses. Data were obtained from six BCSC registries (418 radiologists, 92 radiology facilities). Mammogram indication and assessments were prospectively collected for women undergoing diagnostic digital mammography and linked with cancer diagnoses from state cancer registries. The study included 401,548 examinations conducted from 2007 to 2013 in 265,360 women. Results Overall diagnostic performance measures were as follows: cancer detection rate, 34.7 per 1000 (95% confidence interval [CI]: 34.1, 35.2); abnormal interpretation rate, 12.6% (95% CI: 12.5%, 12.7%); positive predictive value (PPV) of a biopsy recommendation (PPV2), 27.5% (95% CI: 27.1%, 27.9%); PPV of biopsies performed (PPV3), 30.4% (95% CI: 29.9%, 30.9%); false-negative rate, 4.8 per 1000 (95% CI: 4.6, 5.0); sensitivity, 87.8% (95% CI: 87.3%, 88.4%); and specificity, 90.5% (95% CI: 90.4%, 90.6%). Among cancers detected, 63.4% were stage 0 or 1 cancers, 45.6% were minimal cancers, the mean size of invasive cancers was 21.2 mm, and 69.6% of invasive cancers were node negative. Performance metrics varied widely across diagnostic indications, with cancer detection rate (64.5 per 1000) and abnormal interpretation rate (18.7%) highest for diagnostic mammograms obtained to evaluate a breast problem with a lump. Compared with performance during the screen-film mammography era, diagnostic digital performance showed increased abnormal interpretation and cancer detection rates and decreasing PPVs, with less than 70% of radiologists within acceptable ranges for PPV2 and PPV3. Conclusion These performance measures can serve as national benchmarks that may help transform the marked variation in radiologists' diagnostic performance into targeted quality improvement efforts. © RSNA, 2017 Online supplemental material is available for this article.
Cloud Forming Potential of Aerosol from Light-duty Gasoline Direct Injection Vehicles
DOT National Transportation Integrated Search
2017-12-01
In this study, we evaluate the hygroscopicity and droplet kinetics of fresh and aged emissions from new generation gasoline direct injector engines retrofitted with a gasoline particulate filter (GPF). Furthermore, ageing and subsequent secondary aer...
Inverse Design of Low-Boom Supersonic Concepts Using Reversed Equivalent-Area Targets
NASA Technical Reports Server (NTRS)
Li, Wu; Rallabhandi, Sriram
2011-01-01
A promising path for developing a low-boom configuration is a multifidelity approach that (1) starts from a low-fidelity low-boom design, (2) refines the low-fidelity design with computational fluid dynamics (CFD) equivalent-area (Ae) analysis, and (3) improves the design with sonic-boom analysis by using CFD off-body pressure distributions. The focus of this paper is on the third step of this approach, in which the design is improved with sonic-boom analysis through the use of CFD calculations. A new inverse design process for off-body pressure tailoring is formulated and demonstrated with a low-boom supersonic configuration that was developed by using the mixed-fidelity design method with CFD Ae analysis. The new inverse design process uses the reverse propagation of the pressure distribution (dp/p) from a mid-field location to a near-field location, converts the near-field dp/p into an equivalent-area distribution, generates a low-boom target for the reversed equivalent area (Ae,r) of the configuration, and modifies the configuration to minimize the differences between the configuration's Ae,r and the low-boom target. The new inverse design process is used to modify a supersonic demonstrator concept for a cruise Mach number of 1.6 and a cruise weight of 30,000 lb. The modified configuration has a fully shaped ground signature that has a perceived loudness (PLdB) value of 78.5, while the original configuration has a partially shaped aft signature with a PLdB of 82.3.
Sorption, desorption, and surface oxidative fate of nicotine.
Petrick, Lauren; Destaillats, Hugo; Zouev, Irena; Sabach, Sara; Dubowski, Yael
2010-09-21
Nicotine dynamics in an indoor environment can be greatly affected by building parameters (e.g. relative humidity (RH), air exchange rate (AER), and presence of ozone), as well as surface parameters (e.g. surface area (SA) and polarity). To better understand the indoor fate of nicotine, these parameter effects on its sorption, desorption, and oxidation rates were investigated on model indoor surfaces that included fabrics, wallboard paper, and wood materials. Nicotine sorption under dry conditions was enhanced by higher SA and higher polarity of the substrate. Interestingly, nicotine sorption to cotton and nylon was facilitated by increased RH, while sorption to polyester was hindered by it. Desorption was affected by RH, AER, and surface type. Heterogeneous nicotine-ozone reaction was investigated by Fourier transform infrared spectrometry with attenuated total reflection (FTIR-ATR), and revealed a pseudo first-order surface reaction rate of 0.035 ± 0.015 min^-1 (at [O3] = (6 ± 0.3) x 10^15 molecules cm^-3) that was partially inhibited at high RH. Extrapolation to a lower ozone level ([O3] = 42 ppb) showed oxidation on the order of 10^-5 min^-1, corresponding to a half-life of 1 week. In addition, similar surface products were identified under dry and high RH conditions using gas chromatography-mass spectrometry (GC-MS). However, FTIR analysis revealed different product spectra for these conditions, suggesting additional unidentified products and association with surface water. Knowing the indoor fate of condensed and gas phase nicotine and its oxidation products will provide a better understanding of nicotine's impact on personal exposures as well as overall indoor air quality.
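For reference, the quoted extrapolation follows from first-order kinetics, t_1/2 = ln(2)/k; taking k = 6.9e-5 min^-1 as an assumed value on the stated order of 10^-5 min^-1 reproduces the roughly one-week half-life:

```python
import numpy as np

# First-order surface kinetics: t_half = ln(2) / k.
k = 6.9e-5                           # min^-1, assumed illustrative value
t_half_min = np.log(2) / k
print(f"half-life ~ {t_half_min / (60 * 24):.1f} days")   # ~7 days
```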
Benchmark results in the 2D lattice Thirring model with a chemical potential
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno
2018-03-01
We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.
Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)
NASA Technical Reports Server (NTRS)
Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)
1995-01-01
The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated were the effect of implementation of different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.
Benitez-Aguirre, Paul Z.; Craig, Maria E.; Jenkins, Alicia J.; Gallego, Patricia H.; Cusumano, Janine; Duffin, Anthony C.; Hing, Stephen; Donaghue, Kim C.
2012-01-01
Aim The aim was to study the longitudinal relationship between plantar fascia thickness (PFT) as a measure of tissue glycation and microvascular (MV) complications in young persons with type 1 diabetes (T1DM). Methods We conducted a prospective longitudinal cohort study of 152 (69 male) adolescents with T1DM who underwent repeated MV complications assessments and ultrasound measurements of PFT from baseline (1997–2002) until 2008. Retinopathy was assessed by 7-field stereoscopic fundal photography and nephropathy by albumin excretion rate (AER) from three timed overnight urine specimens. Longitudinal analysis was performed using generalized estimating equations (GEE). Results Median (interquartile range) age at baseline was 15.1 (13.4–16.8) years, and median follow-up was 8.3 (7.0–9.5) years, with 4 (3–6) visits per patient. Glycemic control improved from baseline to final visit [glycated hemoglobin (HbA1c) 8.5% to 8.0%, respectively; p = .004]. Prevalence of retinopathy increased from 20% to 51% (p < .001) and early elevation of AER (>7.5 µg/min) increased from 26% to 29% (p = .2). A greater increase in PFT (mm/year) was associated with retinopathy at the final assessment (ΔPFT 1st vs. 2nd–4th quartiles, χ2 = 9.87, p = .02). In multivariate GEE, greater PFT was longitudinally associated with retinopathy [odds ratio (OR) 4.6, 95% confidence interval (CI) 2.0–10.3] and early renal dysfunction (OR 3.2, CI 1.3–8.0) after adjusting for gender, blood pressure standard deviation scores, HbA1c, and total cholesterol. Conclusions In young people with T1DM, PFT was longitudinally associated with retinopathy and early renal dysfunction, highlighting the importance of early glycemic control and supporting the role of metabolic memory in MV complications. Measurement of PFT by ultrasound offers a noninvasive estimate of glycemic burden and tissue glycation. PMID:22538146
Applebaum, Mark A.; Vaksman, Zalman; Lee, Sang Mee; Hungate, Eric A.; Henderson, Tara O.; London, Wendy B.; Pinto, Navin; Volchenboum, Samuel L.; Park, Julie R.; Naranjo, Arlene; Hero, Barbara; Pearson, Andrew D.; Stranger, Barbara E.; Cohn, Susan L.; Diskin, Sharon J.
2017-01-01
Background The incidence of SMN within the first ten years of diagnosis in high-risk neuroblastoma patients treated with modern, intensive therapy is unknown. Further, the underlying germline genetics that contribute to SMN in these survivors are not known. Methods The International Neuroblastoma Risk Group (INRG) database of patients diagnosed from 1990–2010 was analyzed. SMN risk was assessed by cumulative incidence, standardized incidence ratios (SIR), and absolute excess risk (AER). A candidate gene-based association study evaluated genetic susceptibility to SMN in neuroblastoma survivors. Results Of the 5,987 patients in the INRG database with SMN data enrolled in a clinical trial, 43 (0.72%) developed a SMN. The 10-year cumulative incidence of SMN for high-risk patients was 1.8% (95% CI 1.0–2.6%) compared to 0.38% (95% CI: 0.22–0.94%) for low-risk patients (P=0.01). High-risk patients had an almost 18-fold higher incidence of SMN compared to age and sex matched controls (SIR=17.5 (95% CI: 11.4–25.3), AER=27.6). For patients treated on high- and intermediate-risk clinical trials, the SIR of acute myelogenous leukemia (AML) was 106.8 (95% CI: 28.7–273.4) and 127.7 (95% CI: 25.7–373.3), respectively. Variants implicating DNA repair genes XRCC3 (rs861539: P=0.006; Odds Ratio: 2.04, 95% CI: 1.19–3.46) and MSH2 (rs17036651: P=0.009; Odds Ratio: 0.26, 95% CI: 0.08–0.81) were associated with SMN. Conclusion The intensive multi-modality treatment strategy currently used to treat high-risk neuroblastoma is associated with a significantly increased risk of secondary AML. Defining the interactions of treatment exposures and genetic factors that promote the development of SMN is critical for optimizing survivorship care. PMID:28033528
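For context, the SIR is the observed count divided by the expected count, with an exact Poisson confidence interval obtainable from the chi-squared distribution. The sketch below uses the 43 observed SMNs with an assumed expected count chosen only to land near the reported high-risk estimate; it is not the INRG calculation itself.

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio with an exact Poisson confidence interval."""
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lo / expected, hi / expected

# Illustrative expected count, chosen to reproduce a SIR near 17.5.
sir, lo, hi = sir_with_ci(observed=43, expected=2.46)
print(f"SIR = {sir:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```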
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
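A minimal, self-contained ERA sketch conveys the algorithm's core: stack the free-decay samples into Hankel matrices, truncate the SVD at the model order, realize the state matrix, and read modal frequencies and damping from its eigenvalues. The signal, block size, and order below are illustrative, not the benchmark structure's 48-channel data.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)
# Synthetic free decay with two known modes at 3 Hz and 7.5 Hz.
y = (np.exp(-0.2 * t) * np.cos(2 * np.pi * 3.0 * t)
     + 0.5 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 7.5 * t))

r = 60                                            # Hankel block rows/columns
H0 = np.array([[y[i + j] for j in range(r)] for i in range(r)])
H1 = np.array([[y[i + j + 1] for j in range(r)] for i in range(r)])

U, s, Vt = np.linalg.svd(H0)
n = 4                                             # model order: 2 modes -> 4 states
S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:n]))
A = S_inv_sqrt @ U[:, :n].T @ H1 @ Vt[:n].T @ S_inv_sqrt

lam = np.linalg.eigvals(A)
s_cont = np.log(lam) / dt                         # continuous-time eigenvalues
freqs = np.abs(s_cont.imag) / (2 * np.pi)
damping = -s_cont.real / np.abs(s_cont)
print(sorted(set(np.round(freqs, 2))))            # ~[3.0, 7.5]
print(np.round(damping, 3))                       # modal damping ratios
```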
Benchmarking the Use of a Rapid Response Team by Surgical Services at a Tertiary Care Hospital
Barocas, Daniel A; Kulahalli, Chirag S; Ehrenfeld, Jesse M; Kapu, April N; Penson, David F; You, Chaochen (Chad); Weavind, Lisa; Dmochowski, Roger
2015-01-01
BACKGROUND Rapid response teams (RRT) are used to prevent adverse events in patients with acute clinical deterioration, and to save costs of unnecessary transfer in patients with lower-acuity problems. However, determining the optimal use of RRT services is challenging. One method of benchmarking performance is to determine whether a department's event rate is commensurate with its volume and acuity. STUDY DESIGN Using admissions between 2009 and 2011 to 18 distinct surgical services at a tertiary care center, we developed logistic regression models to predict RRT activation, accounting for days at-risk for RRT and patient acuity, using claims modifiers for risk of mortality (ROM) and severity of illness (SOI). The model was used to compute observed-to-expected (O/E) RRT use by service. RESULTS Of 45,651 admissions, 728 (1.6%, or 3.2 per 1,000 inpatient days) resulted in 1 or more RRT activations. Use varied widely across services (0.4% to 6.2% of admissions; 1.39 to 8.73 per 1,000 inpatient days, unadjusted). In the multivariable model, the greatest contributors to the likelihood of RRT were days at risk, SOI, and ROM. The O/E RRT use ranged from 0.32 to 2.82 across services, with 8 services having an observed value that was significantly higher or lower than predicted by the model. CONCLUSIONS We developed a tool for identifying outlying use of an important institutional medical resource. The O/E computation provides a starting point for further investigation into the reasons for variability among services, and a benchmark for quality and process improvement efforts in patient safety. PMID:24275072
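The O/E computation pairs a fitted risk model with per-service aggregation: expected event counts are sums of predicted probabilities. A minimal sketch with synthetic admissions (the real model's covariates and coefficients are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic admissions: days-at-risk, severity of illness (SOI), risk of
# mortality (ROM); the outcome is whether an RRT activation occurred.
n = 5000
X = np.column_stack([rng.exponential(5, n),
                     rng.integers(1, 5, n),
                     rng.integers(1, 5, n)])
logit = -5.0 + 0.05 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))
service = rng.integers(0, 18, n)                  # 18 surgical services

model = LogisticRegression(max_iter=1000).fit(X, y)
expected = model.predict_proba(X)[:, 1]

# Observed-to-expected RRT use per service; values near 1 are "as predicted".
for svc in range(3):                              # first three services shown
    mask = service == svc
    print(svc, y[mask].sum() / expected[mask].sum())
```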
Simulation of irradiation creep
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiley, T.C.; Jung, P.
1977-01-01
The results to date in the area of radiation enhanced deformation using beams of light ions to simulate fast neutron displacement damage are reviewed. A comparison is made between these results and those of in-reactor experiments. Particular attention is given to the displacement rate calculations for light ions and the electronic energy losses and their effect on the displacement cross section. Differences in the displacement processes for light ions and neutrons which may affect the irradiation creep process are discussed. The experimental constraints and potential problem areas associated with these experiments are compared to the advantages of simulation. Support experiments on the effect of thickness on thermal creep are presented. A brief description of the experiments in progress is presented for the following laboratories: HEDL, NRL, ORNL, PNL, U. of Lowell/MIT in the United States, AERE Harwell in the United Kingdom, CEN Saclay in France, GRK Karlsruhe and KFA Julich in West Germany.
Developing a benchmark for emotional analysis of music.
Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
Evaluation of RRTMG and Fu-Liou RTM Performance against LBLRTM-DISORT Simulations and CERES Data in Terms of Ice Cloud Radiative Effects
NASA Astrophysics Data System (ADS)
Gu, B.; Yang, P.; Kuo, C. P.; Mlawer, E. J.
2017-12-01
Ice clouds play an important role in the climate system, especially in the Earth's radiation balance and hydrological cycle. However, the representation of ice cloud radiative effects (CRE) remains subject to significant uncertainty because the scattering properties of ice clouds are not well treated in general circulation models (GCMs). We analyze the strengths and weaknesses of the Rapid Radiative Transfer Model for GCM Applications (RRTMG) and the Fu-Liou Radiative Transfer Model (RTM) against rigorous LBLRTM-DISORT (a combination of the Line-By-Line Radiative Transfer Model and the Discrete Ordinate Radiative Transfer Model) calculations and CERES (Clouds and the Earth's Radiant Energy System) flux observations. In total, 6 US standard atmospheric profiles and 42 atmospheric profiles from Atmospheric and Environmental Research (AER) are used to evaluate RRTMG and the Fu-Liou RTM against LBLRTM-DISORT calculations from 0 to 3250 cm^-1. Ice cloud radiative effect simulations with RRTMG and the Fu-Liou RTM are initialized using ice cloud properties from MODIS Collection-6 products. Simulations of single-layer ice cloud CRE by RRTMG and LBLRTM-DISORT show that RRTMG, neglecting scattering, overestimates the TOA flux by about 0-15 W/m2 depending on the cloud particle size and optical depth; the most significant overestimation occurs when the particle effective radius is small (around 10 μm) and the cloud optical depth is intermediate (about 1-10). The overestimation is reduced significantly when the similarity rule is applied to RRTMG. We combine ice cloud properties from MODIS Collection-6 and atmospheric profiles from the Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) reanalysis to simulate ice cloud CRE, which is compared with CERES observations.
A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem in which items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve, in part because of its choice constraints. Many real applications lead to very large scale instances that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and of reduced problems obtained by fixing some of its variables. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard, randomly generated instances. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
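The hybrid heuristic itself relies on LP relaxations, variable fixing, cuts, and a reformulation, none of which are reproduced here. As a much simpler illustration of the problem structure (exactly one item per class, several shared capacity constraints), the following greedy sketch picks, per class, the feasible item with the best profit-to-weight ratio; all data are made up.

def greedy_mcmkp(classes, capacities):
    # classes: list of classes, each a list of (profit, weights) tuples.
    # capacities: tuple of resource capacities. Picks one item per class.
    m = len(capacities)
    used = [0.0] * m
    picks, total = [], 0.0
    for items in classes:
        best, best_key = None, None
        for idx, (profit, w) in enumerate(items):
            if all(used[r] + w[r] <= capacities[r] for r in range(m)):
                key = profit / (sum(w) + 1e-9)   # profit per unit aggregate weight
                if best is None or key > best_key:
                    best, best_key = idx, key
        if best is None:
            return None        # no feasible item; a real heuristic would backtrack
        profit, w = items[best]
        for r in range(m):
            used[r] += w[r]
        picks.append(best)
        total += profit
    return total, picks

classes = [[(10, (4, 2)), (7, (2, 3))],
           [(6, (3, 1)), (9, (5, 4))]]
print(greedy_mcmkp(classes, (8, 6)))     # -> (16.0, [0, 0])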
Estimating and Valuing Morbidity in a Policy Context: Proceedings of June 1989 AERE Workshop (1989)
Contains the proceedings of the 1989 Association of Environmental and Resource Economists (AERE) Workshop on valuing reductions in human health morbidity risks. A series of papers and discussions was collected and reported in the document.
Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.
2015-01-01
Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets for the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking than in the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively); however, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking group (from 5.9% to 15.0%) than in the control group (from 2.7% to 8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may be a promising tool for improving the quality of T2DM care. Nevertheless, the target achievement rates for each, and for all three, quality indicators were suboptimal, indicating that there are still unmet needs in the management of T2DM. PMID:26445642
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to examine and mitigate resolution limit problems, using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure, in terms of NMI values on Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable on both the social and the LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
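The HAM algorithm itself is not reproduced here, but the first evaluation criterion (NMI and modularity on well-known social networks) is easy to demonstrate with a stand-in community detector. The sketch below scores greedy modularity communities on Zachary's karate club; the choice of detector and dataset is illustrative only.

import networkx as nx
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
# Ground truth: the two factions recorded in the node attribute "club".
truth = [0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G]

communities = nx.algorithms.community.greedy_modularity_communities(G)
labels = [0] * G.number_of_nodes()
for cid, comm in enumerate(communities):
    for v in comm:
        labels[v] = cid

print("NMI:", normalized_mutual_info_score(truth, labels))
print("modularity:", nx.algorithms.community.modularity(G, communities))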
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process often ruins the quality of solutions to the QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm on the QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
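The paper's contribution is the replacement of BBO's mutation operator with a tabu search step; the migration machinery of BBO is omitted here. The sketch below shows just a tabu-search component on a toy QAP instance, with full cost recomputation per swap (real implementations use incremental deltas); tenure and iteration counts are placeholders.

import itertools
import random

def qap_cost(F, D, p):
    # Classic QAP objective: sum of flow[i][j] * distance[p[i]][p[j]].
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def tabu_qap(F, D, iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(F)
    p = list(range(n))
    rng.shuffle(p)
    best, best_cost = p[:], qap_cost(F, D, p)
    tabu = {}
    for it in range(iters):
        move, move_cost = None, None
        for i, j in itertools.combinations(range(n), 2):
            p[i], p[j] = p[j], p[i]
            c = qap_cost(F, D, p)
            p[i], p[j] = p[j], p[i]
            # Skip tabu swaps unless they beat the best (aspiration criterion).
            if tabu.get((i, j), -1) >= it and c >= best_cost:
                continue
            if move is None or c < move_cost:
                move, move_cost = (i, j), c
        if move is None:
            break
        i, j = move
        p[i], p[j] = p[j], p[i]
        tabu[(i, j)] = it + tenure
        if move_cost < best_cost:
            best, best_cost = p[:], move_cost
    return best, best_cost

F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]    # toy flow matrix
D = [[0, 2, 4], [2, 0, 1], [4, 1, 0]]    # toy distance matrix
print(tabu_qap(F, D))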
Benchmarking the Multilayer-HySEA model for landslide-generated tsunami: NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Validation of optimization strategies using the linear structured production chains
NASA Astrophysics Data System (ADS)
Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz
2017-06-01
In this paper, different optimization strategies applied to a sequence of several stages of a production chain were validated. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter were used as exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve as validators of the strategies elaborated by the authors. However, the main goal of the research is the selection of the best strategy for the optimization of two real metallurgical processes, which will be investigated in on-going projects. The first problem will be the oxidizing roasting process of zinc sulphide concentrate, where the sulphur from the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products has to be achieved. The second problem will be the lead refining process consisting of three stages: roasting to the oxide, oxide reduction to metal, and oxidizing refining. The strategies that prove most effective on the considered benchmark problems will be candidates for the optimization of the industrial processes mentioned above.
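As an illustration of benchmarking an optimization strategy on a first-order ODE object like the water tank, the sketch below tunes a single proportional inflow gain against an ITAE-type criterion. The tank model, the controller structure, and the effort penalty are assumptions; the paper's tank and CR-RC filter models are not specified in the abstract.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

A_TANK, C_OUT, SETPOINT = 1.0, 0.5, 2.0   # assumed linear tank parameters

def cost(k):
    # Level dynamics with proportional inflow control: A*dh/dt = k*(sp-h) - c*h.
    rhs = lambda t, h: (k * (SETPOINT - h[0]) - C_OUT * h[0]) / A_TANK
    sol = solve_ivp(rhs, (0.0, 20.0), [0.0], t_eval=np.linspace(0.0, 20.0, 400))
    err = np.abs(SETPOINT - sol.y[0])
    dt = sol.t[1] - sol.t[0]
    itae = float(np.sum(sol.t * err) * dt)  # time-weighted absolute error
    return itae + 0.05 * k                  # small effort penalty keeps k finite

res = minimize_scalar(cost, bounds=(0.1, 20.0), method="bounded")
print("best gain:", round(res.x, 3), "cost:", round(res.fun, 3))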
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare the results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
NASA Astrophysics Data System (ADS)
Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.
2018-03-01
One of the problems in dealing with the capacitated facility location problem (CFLP) arises from the mismatch between the capacities of facilities and the number of customers that need to be served. A facility with small capacity may leave some customers uncovered. These customers need to be re-allocated to another facility that still has available capacity. Therefore, an approach is proposed to handle the CFLP by using the k-means clustering algorithm for customer allocation; whether customers' re-allocation is needed is then decided by the overall average distance between customers and the facilities. This new approach is benchmarked against the existing approach by Liao and Guo, which also uses the k-means clustering algorithm as a base idea to decide the facility locations and customer allocation. Both approaches are benchmarked using three clustering evaluation methods with connectedness, compactness, and separation factors.
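A minimal version of the clustering-plus-re-allocation idea can be sketched as follows: k-means proposes facility locations and assignments, then customers in over-capacity clusters are greedily moved to the nearest facility with spare capacity. Capacities, customer counts, and the greedy rule are illustrative, not the paper's or Liao and Guo's exact procedure.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
customers = rng.random((30, 2))          # 30 customer locations in the unit square
k, capacity = 4, 9                       # 4 facilities, 9 customers each (assumed)

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers)
assign = km.labels_.copy()
centers = km.cluster_centers_

for f in range(k):
    members = np.where(assign == f)[0]
    if len(members) <= capacity:
        continue
    # Move the farthest customers first, to the nearest facility with room.
    d = np.linalg.norm(customers[members] - centers[f], axis=1)
    for i in members[np.argsort(d)[::-1]]:
        if np.sum(assign == f) <= capacity:
            break
        for g in np.argsort(np.linalg.norm(centers - customers[i], axis=1)):
            if g != f and np.sum(assign == g) < capacity:
                assign[i] = g
                break

print(np.bincount(assign, minlength=k))  # cluster sizes after re-allocation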
Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Yamamoto, Kazuomi
2012-01-01
Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selected outcomes thus far.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization of metals, and its efficiency is highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA; thus, we propose the best annealing schedule found, based on the Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of the solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
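A compact SA implementation for the symmetric TSP makes the role of the annealing schedule concrete: the initial temperature T0 and the geometric cooling factor alpha are exactly the parameters such a study tunes. The values below are placeholders, not the schedule proposed in the paper.

import math
import random

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_tsp(pts, T0=10.0, alpha=0.995, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cost, T = tour_len(tour, pts), T0
    best, best_cost = tour[:], cost
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        c = tour_len(cand, pts)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if c < cost or rng.random() < math.exp((cost - c) / T):
            tour, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        T *= alpha          # geometric cooling: the annealing schedule itself
    return best, best_cost

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(12)]
print(round(sa_tsp(pts)[1], 4))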
Modified reactive tabu search for the symmetric traveling salesman problems
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Reactive tabu search (RTS) is an improved version of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing. RTS thus avoids a disadvantage of TS, namely the need to tune the tabu list size parameter. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations in which the solutions do not override the aspiration level, to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark problems of symmetric TSP. Its performance is compared with that of TS by using empirical testing, benchmark solutions and a simple probabilistic analysis in order to validate the quality of the solutions. The computational results and comparisons show that the proposed algorithm provides better-quality solutions than TS.
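The distinguishing feature of RTS is the feedback rule that resizes the tabu list while the search runs. The sketch below shows a generic version of such a rule (grow the tenure on repeated solutions, shrink it after a stretch without repeats); the paper's specific aspiration-level-based rule is not reproduced, and all thresholds are placeholders.

import random

def update_tenure(tenure, repeated, stagnation, t_min=5, t_max=50):
    # Reactive rule: repetition signals cycling, so diversify with a longer
    # tabu list; a long stretch without repeats allows intensification.
    if repeated:
        return min(int(tenure * 1.2) + 1, t_max), 0
    if stagnation > 100:
        return max(int(tenure * 0.8), t_min), 0
    return tenure, stagnation + 1

rng = random.Random(0)
tenure, stag, seen = 10, 0, set()
for _ in range(500):
    s = rng.randrange(200)       # stand-in for a hash of the current solution
    tenure, stag = update_tenure(tenure, s in seen, stag)
    seen.add(s)
print("final tenure:", tenure)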
Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D
2015-10-08
Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive, and we discuss the extent of data available for use in larger-scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
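Once experimental values have been extracted from a machine-readable archive such as ThermoML, the benchmark itself reduces to comparing simulated and measured properties. The sketch below computes RMSE and mean signed error for bulk liquid densities; the numbers are invented for illustration and are not GAFF results or ThermoML entries.

import math

# (compound, experimental density, simulated density) in kg/m^3 -- illustrative
data = [("ethanol", 789.0, 801.2),
        ("toluene", 867.0, 855.9),
        ("acetone", 784.0, 792.5)]

def benchmark(rows):
    errs = [sim - expt for _, expt, sim in rows]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    bias = sum(errs) / len(errs)    # mean signed error exposes systematic bias
    return rmse, bias

rmse, bias = benchmark(data)
print(f"RMSE: {rmse:.1f} kg/m^3, bias: {bias:+.1f} kg/m^3")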
NASA Astrophysics Data System (ADS)
Waring, Michael S.
2016-11-01
Terpene ozonolysis reactions can be a strong source of secondary organic aerosol (SOA) indoors. SOA formation can be parameterized and predicted using the aerosol mass fraction (AMF), also known as the SOA yield, which quantifies the mass ratio of generated SOA to oxidized terpene. Limonene is a monoterpene present indoors at concentrations sufficient for it to react meaningfully with ozone. It has two unsaturated bonds, and the magnitude of the limonene ozonolysis AMF varies by a factor of ∼4 depending on whether one or both of its unsaturated bonds are ozonated, which in turn depends on whether ozone is in excess compared to limonene as well as on the time available for reactions indoors. Hence, this study developed a framework to predict the limonene AMF as a function of the ozone [O3] and limonene [lim] concentrations and the air exchange rate (AER, h-1), which is the inverse of the residence time. Empirical AMF data were used to calculate a mixing coefficient, β, that would yield a 'resultant AMF' as the combination of the AMFs due to ozonolysis of one or both of limonene's unsaturated bonds, within the volatility basis set (VBS) organic aerosol framework. Then, β was regressed against the predictors log10([O3]/[lim]) and AER (R2 = 0.74). The β increased as log10([O3]/[lim]) increased and as the AER decreased, which has the physical meaning of driving the resultant AMF toward the upper AMF condition, in which both unsaturated bonds of limonene are ozonated. Modeling demonstrates that using the correct resultant AMF to simulate SOA formation owing to limonene ozonolysis is crucial for accurate indoor prediction.
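The resultant AMF described above is a β-weighted mix of the two limiting AMFs. The abstract gives the predictors of β (log10([O3]/[lim]) and AER) but not the fitted coefficients or functional form, so the linear form and coefficients below are placeholders used only to show the mechanics.

import math

def beta(o3, lim, aer, b0=0.5, b1=0.3, b2=-0.1):
    # Mixing coefficient as a function of log10([O3]/[lim]) and AER.
    # b0, b1, b2 are NOT the paper's fitted values; a linear form is assumed.
    x = b0 + b1 * math.log10(o3 / lim) + b2 * aer
    return min(1.0, max(0.0, x))    # clamp to the physically meaningful range

def resultant_amf(amf_one, amf_both, o3, lim, aer):
    b = beta(o3, lim, aer)
    return b * amf_both + (1.0 - b) * amf_one

# Excess ozone and a low air-exchange rate drive the result toward amf_both.
print(resultant_amf(amf_one=0.05, amf_both=0.20, o3=50.0, lim=5.0, aer=0.5))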
Timbo, Babgaleh B; Chirtel, Stuart J; Ihrie, John; Oladipo, Taiye; Velez-Suarez, Loy; Brewer, Vickery; Mozersky, Robert
2018-05-01
The Food and Drug Administration (FDA)'s Center for Food Safety and Applied Nutrition (CFSAN) oversees the safety of the nation's foods, dietary supplements, and cosmetic products. The objective was to present a descriptive analysis of the 2004-2013 dietary supplement adverse event report (AER) data from the CFSAN Adverse Event Reporting System (CAERS) and to evaluate the 2006 Dietary Supplement and Nonprescription Drug Consumer Protection Act as it pertains to dietary supplement adverse event reporting. We queried CAERS for data from the 2004-2013 AERs specifying at least 1 suspected dietary supplement product. We extracted the product name(s), the symptom(s) reported, age, sex, and serious adverse event outcomes. We examined time trends for mandatory and voluntary reporting and performed the analysis using SAS v9.4 and R v3.3.0 software. Of the total AERs (n = 15 430) received from January 1, 2004, through December 31, 2013, indicating at least 1 suspected dietary supplement product, 66.9% were mandatory, 32.2% were voluntary, and 0.9% were both mandatory and voluntary. Reported serious outcomes included death, life-threatening conditions, hospitalizations, congenital anomalies/birth defects, and events requiring interventions to prevent permanent impairments (5.1%). The dietary supplement adverse event reporting rate in the United States was estimated at ~2% based on CAERS data. This study characterizes CAERS dietary supplement adverse event data for the 2004-2013 period and estimates a reporting rate of 2% for dietary supplement adverse events based on CAERS data. The findings show that the 2006 Dietary Supplement and Nonprescription Drug Consumer Protection Act had a substantial impact on the reporting of adverse events.
Estimating the extent of reporting to FDA: a case study of statin-associated rhabdomyolysis.
McAdams, Mara; Staffa, Judy; Dal Pan, Gerald
2008-03-01
To estimate the extent of reporting to the FDA using statin-associated rhabdomyolysis data. Data included incidence rates (IRs) of hospitalized rhabdomyolysis among statin users from a population-based study, and comparable reported AERS cases and national estimates of statin use from an AERS analysis. Using the IRs, national estimates of statin use, and the average days supplied per prescription, we estimated the number of US statin-associated cases of hospitalized rhabdomyolysis. We compared this estimate to the observed number of cases reported to the FDA to evaluate the extent of reporting. We repeated this method for atorvastatin, cerivastatin, pravastatin, and simvastatin, and for statin combinations. We performed sensitivity analyses to check for biases such as misclassification of statin use and cohort selection bias. We evaluated potential time-dependent cerivastatin reporting associated with a "Dear Health Care Provider" (DHCP) letter. The estimated extent of reporting to the FDA varied by statin (atorvastatin, 5.0%; cerivastatin, 31.2%; simvastatin, 14.2%; all four combined, 17.7%; and non-cerivastatin statins combined, 9.9%). No pravastatin-associated cohort cases occurred. Across a reasonable range of values, sensitivity analyses did not significantly alter the results; overall, the cohort was similar to national statin users. There was a large increase in AERS reports after the cerivastatin DHCP letter, and the estimated extent of reporting increased from 14.8% to 35.0%. The extent of reporting of adverse events to the FDA varied by statin and may be influenced by publicity. For statin-associated rhabdomyolysis, the estimated extent of reporting appears to range from 5 to 30%, but in the absence of stimulated reporting it appears to be 5-15%. Copyright 2008 John Wiley & Sons, Ltd.
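The core calculation in such a study is a ratio of observed reports to the cases expected from a population-based incidence rate. A minimal sketch, with invented numbers rather than the paper's data:

def reporting_extent(reported_cases, ir_per_10k_py, person_years):
    # Expected cases = incidence rate x person-time; extent = observed/expected.
    expected = ir_per_10k_py * person_years / 10_000.0
    return reported_cases / expected

# Illustrative only: 44 AERS reports, 4.4 hospitalized cases per 10,000
# person-years, and 1,000,000 person-years of national statin use.
print(f"{reporting_extent(44, 4.4, 1_000_000):.1%}")   # -> 10.0%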
The stress-buffering effect of acute exercise: Evidence for HPA axis negative feedback.
Zschucke, Elisabeth; Renneberg, Babette; Dimeo, Fernando; Wüstenberg, Torsten; Ströhle, Andreas
2015-01-01
According to the cross-stressor adaptation hypothesis, physically trained individuals show lower physiological and psychological responses to stressors other than exercise, e.g. psychosocial stress. Reduced stress reactivity may constitute a mechanism of action for the beneficial effects of exercise in maintaining mental health. With regard to neural and psychoneuroendocrine stress responses, the acute stress-buffering effects of exercise have not been investigated yet. A sample of highly trained (HT) and sedentary (SED) young men was randomized to either exercise on a treadmill at moderate intensity (60-70% VO2max; AER) for 30 min, or to perform 30 min of "placebo" exercise (PLAC). 90 min later, an fMRI experiment was conducted using an adapted version of the Montreal Imaging Stress Task (MIST). The subjective and psychoneuroendocrine (cortisol and α-amylase) changes induced by the exercise intervention and the MIST were assessed, as well as neural activations during the MIST. Finally, associations between the different stress responses were analysed. Participants of the AER group showed a significantly reduced cortisol response to the MIST, which was inversely related to the previous exercise-induced α-amylase and cortisol fluctuations. With regard to the sustained BOLD signal, we found higher bilateral hippocampus (Hipp) activity and lower prefrontal cortex (PFC) activity in the AER group. Participants with a higher aerobic fitness showed lower cortisol responses to the MIST. As the Hipp and PFC are brain structures prominently involved in the regulation of the hypothalamus-pituitary-adrenal (HPA) axis, these findings indicate that the acute stress-buffering effect of exercise relies on negative feedback mechanisms. Positive affective changes after exercise appear as important moderators largely accounting for the effects related to physical fitness. Copyright © 2014 Elsevier Ltd. All rights reserved.
Breen, Michael; Xu, Yadong; Schneider, Alexandra; Williams, Ronald; Devlin, Robert
2018-06-01
Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated-measurements study with 22 diabetic individuals in central North Carolina, called the Diabetes and Environment Panel Study (DEPS), by applying the Exposure Model for Individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM2.5 using outdoor concentrations, questionnaires, weather, and time-location information. Using EMI, we linked a mechanistic air exchange rate (AER) model to a mass-balance PM2.5 infiltration model to predict residential AER (Tier 1), infiltration factors (Finf_home, Tier 2), indoor concentrations (Cin, Tier 3), personal exposure factors (Fpex, Tier 4), and personal exposures (E, Tier 5) for ambient PM2.5. We applied EMI to predict daily PM2.5 exposure metrics (Tiers 1-5) for 174 participant-days across the 13 months of DEPS. Individual model predictions were compared to a subset of daily measurements of Fpex and E (Tiers 4-5) from the DEPS participants. Model-predicted Fpex and E corresponded well to daily measurements, with median differences of 14% and 23%, respectively. Daily model predictions for all 174 days showed considerable temporal and house-to-house variability of AER, Finf_home, and Cin (Tiers 1-3), and person-to-person variability of Fpex and E (Tiers 4-5). Our study demonstrates the capability of predicting individual-level ambient PM2.5 exposure metrics for an epidemiological study, in support of improving risk estimation. Copyright © 2018. Published by Elsevier B.V.
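The tiered structure of EMI can be illustrated with standard steady-state mass-balance forms. The penetration factor, deposition rate, and time-activity split below are assumed round numbers, and Tier 1 (the mechanistic AER model) is taken as an input rather than modeled.

def infiltration_factor(aer, p=0.8, k=0.5):
    # Tier 2: F_inf = P*a / (a + k), with penetration P and deposition k (1/h).
    return p * aer / (aer + k)

def ambient_exposure(c_out, aer, f_time_indoors=0.9):
    f_inf = infiltration_factor(aer)            # Tier 2
    c_in = f_inf * c_out                        # Tier 3: indoor ambient PM2.5
    # Tier 4: time-weighted exposure factor (outdoor factor taken as 1).
    f_pex = f_time_indoors * f_inf + (1.0 - f_time_indoors)
    return f_pex * c_out                        # Tier 5: personal ambient exposure

# 12 ug/m^3 outdoors, AER of 0.7 1/h, 90% of the day spent at home.
print(round(ambient_exposure(12.0, 0.7), 2))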
Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.N.; Chang, B.; Hanebutte, U.R.
1999-12-29
Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with the simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, problem 2 having a straight and problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.
International land Model Benchmarking (ILAMB) Package v002.00
Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory
2016-05-09
As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
International land Model Benchmarking (ILAMB) Package v001.00
Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory
2016-05-02
As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
Benchmark and Framework for Encouraging Research on Multi-Threaded Testing Tools
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Stoller, Scott D.; Ur, Shmuel
2003-01-01
A problem that has been gaining prominence in testing is that of looking for intermittent bugs. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. In this paper (invited by PADTAD 2003) we outline a proposed project to facilitate research. The project goals are as follows. The first goal is to create a benchmark that can be used to evaluate different solutions. The benchmark, apart from containing programs with documented bugs, will include other artifacts, such as traces, that are useful for evaluating some of the technologies. The second goal is to create a set of tools with open APIs that can be used to check ideas without building a large system. For example, an instrumentor will be available that could be used to test temporal noise-making heuristics; a sketch of such a heuristic follows this record. The third goal is to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques could congregate.
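A noise-making heuristic of the kind such an instrumentor would support can be sketched in a few lines: injecting random delays inside a read-modify-write widens the window in which a racing thread can interleave, making an intermittent lost-update bug appear far more often. The sleep probability and duration are arbitrary.

import random
import threading
import time

counter = 0

def noisy_increment(n, p=0.3):
    global counter
    for _ in range(n):
        tmp = counter               # read
        if random.random() < p:
            time.sleep(0.0001)      # injected "noise" widens the race window
        counter = tmp + 1           # write: intentionally non-atomic update

threads = [threading.Thread(target=noisy_increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("expected 4000, got", counter)   # lost updates surface under noise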
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Marcovecchio, M L; de Giorgis, T; Di Giovanni, I; Chiavaroli, V; Chiarelli, F; Mohn, A
2017-06-01
To evaluate whether circulating markers of endothelial dysfunction, such as intercellular adhesion molecule-1 (ICAM-1) and myeloperoxidase (MPO), are increased to similar levels in youth with obesity and in those with type 1 diabetes (T1D), and whether their levels are associated with markers of renal function. A total of 60 obese youth [M/F: 30/30, age: 12.5 ± 2.8 yr; body mass index (BMI) z-score: 2.26 ± 0.46], 30 with T1D (M/F: 15/15; age: 12.9 ± 2.4 yr; BMI z-score: 0.45 ± 0.77), and 30 healthy controls (M/F: 15/15, age: 12.4 ± 3.3 yr, BMI z-score: -0.25 ± 0.56) were recruited. Anthropometric measurements were assessed and a blood sample was collected to measure ICAM-1, MPO, creatinine, cystatin C and lipid levels. A 24-h urine collection was obtained for assessing the albumin excretion rate (AER). Levels of ICAM-1 and MPO were significantly higher in obese [ICAM-1: 0.606 (0.460-1.033) µg/mL; MPO: 136.6 (69.7-220.8) ng/mL] and T1D children [ICAM-1: 0.729 (0.507-0.990) µg/mL; MPO: 139.5 (51.0-321.3) ng/mL] compared with control children [ICAM-1: 0.395 (0.272-0.596) µg/mL; MPO: 41.3 (39.7-106.9) ng/mL], whereas no significant difference was found between T1D and obese children. BMI z-score was significantly associated with ICAM-1 (β = 0.21, p = 0.02) and MPO (β = 0.41, p < 0.001). A statistically significant association was also found between ICAM-1 and markers of renal function (AER: β = 0.21, p = 0.03; e-GFR: β = 0.19, p = 0.04), after adjusting for BMI. Obese children have increased markers of endothelial dysfunction and early signs of renal damage, similar to children with T1D, confirming obesity to be a cardiovascular risk factor, as is T1D. The association of ICAM-1 with e-GFR and AER confirms the known association between general endothelial and renal dysfunction. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2015-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate the ligand enrichments of VS approaches in prospective (i.e. real-world) efforts. However, intrinsic differences between benchmarking sets and real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods and data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementation for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. Leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
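Ligand-enrichment assessment itself boils down to scoring a ranked list of actives and decoys. The sketch below computes the ROC AUC and an early-enrichment factor for an invented screening run; it does not reproduce the authors' set-construction algorithm.

from sklearn.metrics import roc_auc_score

# 1 = active, 0 = decoy; scores from a hypothetical virtual-screening run.
labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
scores = [0.91, 0.74, 0.55, 0.80, 0.43, 0.38, 0.61, 0.22, 0.15, 0.09]

print("AUC:", roc_auc_score(labels, scores))

# Enrichment factor in the top 20% of the ranked list.
top_n = len(labels) // 5
top = sorted(zip(scores, labels), reverse=True)[:top_n]
hit_rate_top = sum(l for _, l in top) / top_n
hit_rate_all = sum(labels) / len(labels)
print("EF@20%:", hit_rate_top / hit_rate_all)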
A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew
2013-12-01
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as the absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and the precision of absorption efficiency across repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for the 15 chemicals was 106%, and the variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemicals in fish and incomplete collection of feces in different tests. © 2013 SETAC.
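The benchmarking logic of the abstract (fish amounts corrected by the absorbable benchmark, feces amounts by the nonabsorbable one) can be written down directly; the final efficiency formula and the numbers below are an illustrative reading, not the study's exact equations or data.

def benchmarked_absorption(chem_fish, chem_feces, pcb53_frac_fish, dbdpe_frac_feces):
    # Correct each measured amount with its benchmark recovery, then take
    # gross absorption efficiency = absorbed / (absorbed + egested).
    fish = chem_fish / pcb53_frac_fish       # incomplete extraction from fish
    feces = chem_feces / dbdpe_frac_feces    # incomplete collection of feces
    return fish / (fish + feces)

# Illustrative: 60 ng in fish, 25 ng in feces, 80% PCB53 recovery in fish,
# 90% decabromodiphenyl ethane recovery in feces.
print(round(benchmarked_absorption(60.0, 25.0, 0.80, 0.90), 3))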
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
29 CFR 1952.343 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...
29 CFR 1952.353 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...
29 CFR 1952.343 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...
29 CFR 1952.353 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...
NASA Technical Reports Server (NTRS)
Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper aims to find optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done across the algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
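A self-contained miniature of the PSO variant of this experiment: tune (Kp, Ki, Kd) for a first-order plant against an ITAE fitness. The plant, bounds, and PSO hyperparameters are placeholders, not the paper's coupled-tank or DC-motor models.

import random

def itae(gains, dt=0.01, t_end=5.0):
    # Euler simulation of plant dy/dt = -y + u under PID control, unit step.
    kp, ki, kd = gains
    y = integ = e_prev = 0.0
    cost, t = 0.0, 0.0
    while t < t_end:
        e = 1.0 - y
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        y += (-y + u) * dt
        e_prev = e
        cost += t * abs(e) * dt      # Integral Time Absolute Error
        t += dt
    return cost

def pso(fitness, n=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 10.0) for _ in range(3)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [fitness(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(10.0, max(0.0, pos[i][d] + vel[i][d]))
            c = fitness(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
            if c < gcost:
                gbest, gcost = pos[i][:], c
    return gbest, gcost

gains, best = pso(itae)
print("Kp, Ki, Kd:", [round(g, 2) for g in gains], "ITAE:", round(best, 4))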
Biofilms and antibiotic susceptibility of multidrug-resistant bacteria from wild animals.
Dias, Carla; Borges, Anabela; Oliveira, Diana; Martinez-Murcia, Antonio; Saavedra, Maria José; Simões, Manuel
2018-01-01
The "One Health" concept recognizes that human health and animal health are interdependent and bound to the health of the ecosystem in which they (co)exist. This interconnection favors the transmission of bacteria and other infectious agents as well as the flow of genetic elements containing antibiotic resistance genes. This problem is worsened when pathogenic bacteria have the ability to establish as biofilms. Therefore, it is important to understand the characteristics and behaviour of microorganisms in both planktonic and biofilms states from the most diverse environmental niches to mitigate the emergence and dissemination of resistance. The purpose of this work was to assess the antibiotic susceptibility of four bacteria ( Acinetobacter spp., Klebsiella pneumoniae , Pseudomonas fluorescens and Shewanella putrefaciens ) isolated from wild animals and their ability to form biofilms. The effect of two antibiotics, imipenem (IPM) and ciprofloxacin (CIP), on biofilm removal was also assessed. Screening of resistance genetic determinants was performed by PCR. Biofilm tests were performed by a modified microtiter plate method. Bacterial surface hydrophobicity was determined by sessile drop contact angles. The susceptibility profile classified the bacteria as multidrug-resistant. Three genes coding for β-lactamases were detected in K. pneumoniae (TEM, SHV, OXA-aer) and one in P. fluorescens (OXA-aer). K. pneumoniae was the microorganism that carried more β-lactamase genes and it was the most proficient biofilm producer, while P. fluorescens demonstrated the highest adhesion ability. Antibiotics at their MIC, 5 × MIC and 10 × MIC were ineffective in total biofilm removal. The highest biomass reductions were found with IPM (54% at 10 × MIC) against K. pneumoniae biofilms and with CIP (40% at 10 × MIC) against P. fluorescens biofilms. The results highlight wildlife as important host reservoirs and vectors for the spread of multidrug-resistant bacteria and genetic determinants of resistance. The ability of these bacteria to form biofilms should increase their persistence.
This newsletter reports on the Huber Technology Group's (HTG) high-temperature advanced hazardous waste treatment technology, capable of very high destruction and removal efficiencies for various hazardous wastes. This newsletter addresses the destruction of PCBs in an EPA certifi...
Amiodarone-Associated Optic Neuropathy: A Critical Review
Passman, Rod S.; Bennett, Charles L.; Purpura, Joseph M.; Kapur, Rashmi; Johnson, Lenworth N.; Raisch, Dennis W.; West, Dennis P.; Edwards, Beatrice J.; Belknap, Steven M.; Liebling, Dustin B.; Fisher, Mathew J.; Samaras, Athena T.; Jones, Lisa-Gaye A.; Tulas, Katrina-Marie E.; McKoy, June M.
2011-01-01
Although amiodarone is the most commonly prescribed antiarrhythmic drug, its use is limited by serious toxicities, including optic neuropathy. Current reports of amiodarone-associated optic neuropathy identified from the Food and Drug Administration's Adverse Event Reporting System (FDA-AERS) and published case reports were reviewed. A total of 296 reports were identified: 214 from AERS, 59 from published case reports, and 23 from adverse event reports for patients enrolled in clinical trials. The mean duration of amiodarone therapy before vision loss was 9 months (range 1-84 months). Insidious onset of amiodarone-associated optic neuropathy (44%) was the most common presentation, and nearly one-third of patients were asymptomatic. Optic disc edema was present in 85% of cases. Following drug cessation, 58% had improved visual acuity, 21% were unchanged, and 21% had further decreased visual acuity. Legal blindness (< 20/200) was noted in at least one eye in 20% of cases. Close ophthalmologic surveillance of patients during the tenure of amiodarone administration is warranted. PMID:22385784
Determination of 99Tc in fresh water using TRU resin by ICP-MS.
Guérin, Nicolas; Riopel, Remi; Kramer-Tremblay, Sheila; de Silva, Nimal; Cornett, Jack; Dai, Xiongxin
2017-10-02
Technetium-99 (99Tc) determination at trace level by inductively coupled plasma mass spectrometry (ICP-MS) is challenging because there is no readily available appropriate Tc isotopic tracer. A new method using Re as a recovery tracer to determine 99Tc in fresh water samples, which does not require any evaporation step, was developed. Tc(VII) and Re(VII) were pre-concentrated on a small anion exchange resin (AER) cartridge from a one-litre water sample. They were then efficiently eluted from the AER using a potassium permanganate (KMnO4) solution. After the reduction of KMnO4 in a 2 M sulfuric acid solution, the sample was passed through a small TRU resin cartridge. Tc(VII) and Re(VII) retained on the TRU resin were eluted using near-boiling water, which can be used directly for the ICP-MS measurement. The results for method optimisation, validation and application are reported. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Extreme sensitivity to ultraviolet light in the fungal pathogen causing white-nose syndrome of bats.
Palmer, Jonathan M; Drees, Kevin P; Foster, Jeffrey T; Lindner, Daniel L
2018-01-02
Bat white-nose syndrome (WNS), caused by the fungal pathogen Pseudogymnoascus destructans, has decimated North American hibernating bats since its emergence in 2006. Here, we utilize comparative genomics to examine the evolutionary history of this pathogen in comparison to six closely related nonpathogenic species. P. destructans displays a large reduction in carbohydrate-active enzymes (CAZymes) and in the predicted secretome (~50%), and an increase in lineage-specific genes. The pathogen has lost a key enzyme, UVE1, in the alternate excision repair (AER) pathway, which is known to contribute to the repair of DNA lesions induced by ultraviolet (UV) light. Consistent with a nonfunctional AER pathway, P. destructans is extremely sensitive to UV light, as well as to the DNA alkylating agent methyl methanesulfonate (MMS). The differential susceptibility of P. destructans to UV light in comparison to other hibernacula-inhabiting fungi represents a potential "Achilles' heel" of P. destructans that might be exploited for treatment of bats with WNS.
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm; Weisenstein, Debra; Rodriquez, Jose; Danilin, Michael; Scott, Courtney; Shia, Run-Lie; Eluszkiewicz, Janusz; Sze, Nien-Dak; Stewart, Richard W. (Technical Monitor)
1999-01-01
This is the final report for NAS5-97039, covering work performed between December 1996 and November 1999. The overall objective of this project is to improve the understanding of coupling processes among atmospheric chemistry, aerosols, and climate, all important for quantitative assessments of global change. Among our priorities are changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The work emphasizes two important aspects: (1) AER's continued participation in the preparation of, and provision of scientific input for, various scientific reports connected with the assessment of stratospheric ozone and climate, including participation in various model intercomparison exercises as well as preparation of national and international reports; and (2) continued development of the AER three-wave interactive model to address how the transport circulation will change as ozone and the thermal properties of the atmosphere change, and to assess how these new findings affect our confidence in the ozone assessment results.
[Real-time PCR in rapid diagnosis of Aeromonas hydrophila necrotizing soft tissue infections].
Kohayagawa, Yoshitaka; Izumi, Yoko; Ushita, Misuzu; Niinou, Norio; Koshizaki, Masayuki; Yamamori, Yuji; Kaneko, Sakae; Fukushima, Hiroshi
2009-11-01
We report a case of rapidly progressive necrotizing soft tissue infection and sepsis that resulted in the patient's death. We initially suspected Vibrio vulnificus infection because the patient's underlying disease was cirrhosis and the course was extremely rapid, but no microbe had been identified at the time of death. We extracted DNA from a blood culture bottle. SYBR Green I real-time PCR was conducted but could not detect V. vulnificus vvh in the DNA sample. Aeromonas hydrophila was subsequently cultured and identified in blood and necrotized tissue samples. Real-time PCR was conducted to detect A. hydrophila ahh1, AHCYTOEN, and aerA in the DNA sample extracted from the blood culture bottle and in a strain isolated from necrotized tissue, but only ahh1 was positive. The high mortality of necrotizing soft tissue infections makes it crucial to detect V. vulnificus and A. hydrophila quickly. We found real-time PCR for vvh, ahh1, AHCYTOEN, and aerA useful for detecting V. vulnificus and A. hydrophila in necrotizing soft tissue infections.
Microphysical and Optical Properties of Saharan Dust Measured during the ICE-D Aircraft Campaign
NASA Astrophysics Data System (ADS)
Ryder, Claire; Marenco, Franco; Brooke, Jennifer; Cotton, Richard; Taylor, Jonathan
2017-04-01
During August 2015, the UK FAAM BAe146 research aircraft was stationed in Cape Verde off the coast of West Africa. Measurements of Saharan dust, and ice and liquid water clouds, were taken for the ICE-D (Ice in Clouds Experiment - Dust) project - a multidisciplinary project aimed at further understanding aerosol-cloud interactions. Six flights formed part of a sub-project, AER-D, solely focussing on measurements of Saharan dust within the African dust plume. Dust loadings observed during these flights varied (aerosol optical depths of 0.2 to 1.3), as did the vertical structure of the dust, the size distributions and the optical properties. The BAe146 was fully equipped to measure size distributions covering aerosol accumulation, coarse and giant modes. Initial results of size distribution and optical properties of dust from the AER-D flights will be presented, showing that a substantial coarse mode was present, in agreement with previous airborne measurements. Optical properties of dust relating to the measured size distributions will also be presented.
Continued development and validation of the AER two-dimensional interactive model
NASA Technical Reports Server (NTRS)
Ko, M. K. W.; Sze, N. D.; Shia, R. L.; Mackay, M.; Weisenstein, D. K.; Zhou, S. T.
1996-01-01
Results from two-dimensional chemistry-transport models have been used to predict the future behavior of ozone in the stratosphere. Since the transport circulation, temperature, and aerosol surface area are fixed in these models, they cannot account for the effects of changes in these quantities, which could be modified because of ozone redistribution and/or other changes in the troposphere associated with climate change. Interactive two-dimensional models, which calculate the transport circulation and temperature along with the concentrations of the chemical species, could provide answers to complement the results from three-dimensional model calculations. In this project, we performed the following tasks in pursuit of the respective goals: (1) we continued to refine the 2-D chemistry-transport model; (2) we developed a microphysics model to calculate the aerosol loading and its size distribution; and (3) we refined the treatment of physics in the AER 2-D interactive model in two areas: the heating rate in the troposphere, and wave forcing from the propagation of planetary waves.
Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.
Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled from HEU metal annuli of three different diameters (15-9 in., 15-7 in., and 13-7 in.) with an internally reflecting graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, which are 0.00055 for all three configurations, and the biases of the detailed benchmark models, which are -0.00179, -0.00189, and -0.00114 respectively, were determined, and experimental benchmark keff results were obtained for both the detailed and simplified models. The calculated results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with a difference of less than 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.
Using Toyota's A3 Thinking for Analyzing MBA Business Cases
ERIC Educational Resources Information Center
Anderson, Joe S.; Morgan, James N.; Williams, Susan K.
2011-01-01
A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F
2016-12-05
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. By the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA.
NASA Technical Reports Server (NTRS)
Dougherty, N. S.; Johnson, S. L.
1993-01-01
Multiple rocket exhaust plume interactions at high altitudes can produce base flow recirculation, with attendant alteration of the base pressure coefficient and increased base heating. A search for a good wind tunnel benchmark problem to check grid clustering technique and turbulence modeling turned up the experiment done at AEDC in 1961 by Goethert and Matz on a 4.25-in. diameter domed missile base model with four rocket nozzles. This wind tunnel model, with varied external bleed air flow for the base flow wake, produced measured p/p(sub ref) at the center of the base as high as 3.3 due to plume flow recirculation back onto the base. At that time in 1961, relatively inexpensive experimentation with air at gamma = 1.4, nozzle A(sub e)/A of 10.6, theta(sub n) = 7.55 deg, and P(sub c) = 155 psia simulated a LO2/LH2 rocket exhaust plume with gamma = 1.20, A(sub e)/A of 78, and P(sub c) of about 1,000 psia. An array of base pressure taps on the aft dome gave a clear measurement of the plume recirculation effects at p(infinity) = 4.76 psfa, corresponding to 145,000 ft altitude. Our CFD computations of the flow field, with direct comparison of computed-versus-measured base pressure distribution across the dome, provide detailed information on velocities and particle traces as well as eddy viscosity in the base and nozzle region. The solution was obtained using a six-zone mesh with 284,000 grid points for one quadrant, taking advantage of symmetry. Results are compared using a zero-equation algebraic and a one-equation pointwise R(sub t) turbulence model (work in progress). Good agreement with the experimental pressure data was obtained with both, and this benchmark showed the importance of (1) proper grid clustering and (2) proper choice of turbulence modeling for rocket plume recirculation problems at high altitude.
Plasma Modeling with Speed-Limited Particle-in-Cell Techniques
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Werner, G. R.; Cary, J. R.; Stoltz, P. H.
2017-10-01
Speed-limited particle-in-cell (SLPIC) modeling is a new particle simulation technique for modeling systems wherein numerical constraints, e.g. limitations on timestep size required for numerical stability, are significantly more restrictive than is needed to model slower kinetic processes of interest. SLPIC imposes artificial speed-limiting behavior on fast particles whose kinetics do not play meaningful roles in the system dynamics, thus enabling larger simulation timesteps and more rapid modeling of such plasma discharges. The use of SLPIC methods to model plasma sheath formation and the free expansion of plasma into vacuum will be demonstrated. Wallclock times for these simulations, relative to conventional PIC, are reduced by a factor of 2.5 for the plasma expansion problem and by over 6 for the sheath formation problem; additional speedup is likely possible. Physical quantities of interest are shown to be correct for these benchmark problems. Additional SLPIC applications will also be discussed. Supported by US DoE SBIR Phase I/II Award DE-SC0015762.
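The speed-limiting step itself is simple to illustrate. Below is a minimal, hypothetical sketch (my reading of the general idea, not the authors' implementation) of a particle push that caps the transport speed at an artificial limit v_max while tracking a per-particle correction factor; the actual SLPIC reweighting scheme is more involved.

    import numpy as np

    def speed_limited_push(x, v, dt, v_max):
        """Advance particles one step, capping the transport speed at v_max.

        Illustrative only: fast particles are moved at the capped speed, and a
        per-particle factor records how strongly each trajectory was slowed so
        that deposited moments could be corrected downstream.
        """
        speed = np.abs(v)
        cap = np.minimum(1.0, v_max / np.maximum(speed, 1e-30))  # slow-down factor <= 1
        x_new = x + v * cap * dt        # capped displacement permits a larger dt
        weight_correction = 1.0 / cap   # slowed particles count for more per step
        return x_new, weight_correction

Because the fastest particles no longer set the stability limit, the timestep can be chosen for the slow kinetics of interest, which is the source of the reported speedups.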
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code to MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the processing element (PE), each rated at 1.6 GFLOPS. A high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve multigrid problems on the VPP500.
New NAS Parallel Benchmarks Results
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)
1997-01-01
NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, those from the IRDF-2002 multi-group library, and those from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, a single-reservoir operation problem, and a four-reservoir operation optimization problem. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in all three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in fewer function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate the superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
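For readers unfamiliar with the GSA, here is a compact sketch of one iteration in its minimization form, following the commonly cited Rashedi et al. (2009) formulation; the constants G0 and alpha and the vectorized layout are illustrative choices, not values taken from this paper.

    import numpy as np

    def gsa_step(X, V, fitness, t, T, G0=100.0, alpha=20.0, eps=1e-12):
        """One gravitational search iteration. X: (n, d) positions,
        V: (n, d) velocities, fitness: (n,) objective values (minimized)."""
        best, worst = fitness.min(), fitness.max()
        m = (fitness - worst) / (best - worst + eps)   # better fitness -> larger mass
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * t / T)                # gravity decays over iterations

        diff = X[None, :, :] - X[:, None, :]           # pairwise x_j - x_i
        R = np.linalg.norm(diff, axis=2)               # pairwise distances
        F = G * (M[:, None] * M[None, :] / (R + eps))[:, :, None] * diff
        F *= np.random.rand(*F.shape)                  # stochastic force weighting
        a = F.sum(axis=1) / (M[:, None] + eps)         # acceleration a_i = F_i / M_i

        V = np.random.rand(*V.shape) * V + a           # memory term plus acceleration
        return X + V, V

The agents (masses) attract one another in proportion to fitness-derived mass and inversely to distance, so the swarm gradually concentrates around the heaviest (best) solutions while G decays to shift from exploration to exploitation.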
Yue, Zhihua; Shi, Jinhai; Li, Haona; Li, Huiyi
2018-02-01
Nonsteroidal anti-inflammatory drugs (NSAIDs) are likely to be used concomitantly with acyclovir or valacyclovir in clinical practice, but the safety of such combinations has seldom been studied. The objective of this study was to investigate reports of acute kidney injury (AKI) events associated with the concomitant use of oral acyclovir or valacyclovir with an NSAID, using the United States Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) database between January 2004 and June 2012. The frequency of AKI events in patients simultaneously taking either acyclovir or valacyclovir and an NSAID was compared using the chi-square test. The effect of concomitant use of acyclovir or valacyclovir with individual NSAIDs on AKI was analyzed by the reporting odds ratio (ROR). The results showed that AKI was reported as the adverse event in 8.6% of the 10,923 patients taking valacyclovir compared with 8.7% of the 2,556 patients taking acyclovir (p = NS). However, AKI was reported significantly more frequently in patients simultaneously taking valacyclovir and an NSAID (19.4%) than in patients simultaneously taking acyclovir and an NSAID (10.5%) (p < 0.01). The results also suggested that an increased risk of AKI was likely associated with the concomitant use of valacyclovir and some NSAIDs such as loxoprofen, diclofenac, etodolac, ketorolac, piroxicam, or lornoxicam. This case series from the AERS indicates that, compared with acyclovir, valacyclovir is more likely to be affected by NSAIDs, and that the concomitant use of valacyclovir with some NSAIDs might be associated with an increased risk of AKI. The drug interactions of this specific combination of medications are worth exploring further.
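As a reminder (notation mine, not the paper's), the reporting odds ratio for a drug-event pair in a spontaneous-reporting database is computed from the standard 2x2 disproportionality table:

\[
\mathrm{ROR} = \frac{a/b}{c/d} = \frac{a\,d}{b\,c},
\]

where \(a\) is the number of reports with both the drug (combination) of interest and AKI, \(b\) the reports with the drug but another event, \(c\) the reports with AKI but another drug, and \(d\) all remaining reports. A 95% confidence interval is usually obtained from \(\exp\!\left(\ln \mathrm{ROR} \pm 1.96\sqrt{1/a + 1/b + 1/c + 1/d}\right)\).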
Cui, Qijia; Fang, Tingting; Huang, Yong; Dong, Peiyan; Wang, Hui
2017-07-01
The microbial quality of urban recreational water is of great concern to public health. Monitoring of indicator organisms and several pathogens alone is not sufficient to identify microbial risks accurately and comprehensively. To assess the levels of bacterial pathogens and health risks in urban recreational water, we analyzed pathogen diversity and quantified four pathogens in 46 water samples collected from waterbodies in Beijing Olympic Forest Park over one year. The pathogen diversity revealed by 16S rRNA gene targeted next-generation sequencing (NGS) showed that 16 of 40 genera and 13 of 76 reference species were present. The most abundant species were Acinetobacter johnsonii, Mycobacterium avium, and Aeromonas spp. Quantitative polymerase chain reaction (qPCR) of Escherichia coli (uidA), Aeromonas (aerA), M. avium (16S rRNA), Pseudomonas aeruginosa (oaa), and Salmonella (invA) showed that the aerA genes were the most abundant, occurring in all samples at concentrations of 10^4-10^6 genome copies/100 mL, followed by oaa, invA, and M. avium. In total, 34.8% of the samples harbored all genes, indicating the prevalence of these pathogens in this recreational waterbody. Based on the qPCR results, a quantitative microbial risk assessment (QMRA) showed that the annual infection risks of Salmonella, M. avium, and P. aeruginosa in five activities were mostly greater than the U.S. EPA risk limit for recreational contact, and children playing with water may be exposed to the greatest infection risk. Our findings provide a comprehensive understanding of bacterial pathogen diversity and pathogen abundance in urban recreational water by applying both NGS and qPCR.
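The abstract does not spell out its dose-response models, but a generic QMRA chain of the kind typically used for such estimates (notation mine) is:

\[
d = C \cdot V, \qquad
P_{\text{event}} = 1 - e^{-r d}, \qquad
P_{\text{annual}} = 1 - \left(1 - P_{\text{event}}\right)^{n},
\]

where \(C\) is the pathogen concentration in the water (e.g., from qPCR), \(V\) the volume ingested per exposure event, \(r\) a pathogen-specific exponential dose-response parameter (a beta-Poisson form is common for some organisms), and \(n\) the number of exposure events per year. Differences in \(V\) and \(n\) across the five activities are what drive the differing risks, with children's play typically assuming the largest ingestion volumes.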
Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task
NASA Astrophysics Data System (ADS)
Lunin, V.; Alexeevsky, D.
2003-03-01
Numerical results predicted by a finite element method based code are presented. The nonlinear magnetic time-dependent benchmark problem, proposed by the World Federation of Nondestructive Evaluation Centers, involves numerical prediction of the normal (radial) component of the leakage field in the vicinity of two practically rectangular notches machined on a rotating steel pipe (with known nonlinear magnetic characteristic). One notch is located on the external surface of the pipe and the other on the internal surface, and both are oriented axially.
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Development of an Automated Emergency Response System (AERS) for Rail Transit Systems
DOT National Transportation Integrated Search
1984-10-01
As a result of a fire in 1979 at the Bay Area Rapid Transit District (BART), a microprocessor-based information retrieval system was developed to aid in the emergency decision-making process. This system was proposed, designed and programmed by a sup...
Building Innovation: Learning with Technologies. Australian Education Review Number 56
ERIC Educational Resources Information Center
Moyle, Kathryn
2010-01-01
Australian Education Review (AER) 56 explores national and international policy priorities for building students' innovation capabilities through information and communication technologies (ICT) in Australian schools. Section 1 sets out the Australian policy context for digital education and highlights some of the emerging challenges. It provides…
CASE HISTORY OF FINE PORE DIFFUSER RETROFIT AT RIDGEWOOD, NEW JERSEY
In April 1983, the Ridgewood, New Jersey Wastewater Treatment Plant underwent a retrofit from a coarse bubble to a fine pore aeration system. Also, process modification from contact stabilization to tapered aeration occurred. This report presents a case history of plant and aer...
Kerr, Kathleen F; Bansal, Aasthaa; Pepe, Margaret S
2012-09-15
In this issue of the Journal, Pencina et al. (Am J Epidemiol. 2012;176(6):492-494) examine the operating characteristics of measures of incremental value. Their goal is to provide benchmarks for the measures that can help identify the most promising markers among multiple candidates. They consider a setting in which new predictors are conditionally independent of established predictors. In the present article, the authors consider more general settings. Their results indicate that some of the conclusions made by Pencina et al. are limited to the specific scenarios those authors considered. For example, Pencina et al. observed that the continuous net reclassification improvement was invariant to the strength of the baseline model, but the authors of the present study show that this invariance does not hold generally. Further, they disagree with the suggestion that such invariance would be desirable for a measure of incremental value. They also do not see evidence to support the claim that the measures provide complementary information. In addition, they show that correlation with baseline predictors can lead to much bigger gains in performance than the conditional-independence scenario studied by Pencina et al. Finally, the authors note that the motivation of providing benchmarks actually reinforces previous observations that the problem with these measures is that they do not have useful clinical interpretations; if they did, researchers could use the measures directly and benchmarks would not be needed.
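For reference, the continuous (category-free) net reclassification improvement under discussion is usually defined as (notation mine):

\[
\mathrm{NRI} = \big[P(\mathrm{up} \mid \mathrm{event}) - P(\mathrm{down} \mid \mathrm{event})\big] + \big[P(\mathrm{down} \mid \mathrm{nonevent}) - P(\mathrm{up} \mid \mathrm{nonevent})\big],
\]

where "up" and "down" denote any increase or decrease in an individual's predicted risk when the new marker is added to the baseline model. The dispute summarized above concerns whether this quantity behaves consistently as the baseline model strengthens and whether it admits a clinically meaningful interpretation.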
NASA Astrophysics Data System (ADS)
Tornqvist, T. E.; Jankowski, K. L.; Fernandes, A. M.; Keogh, M.; Nienhuis, J.
2017-12-01
Low-elevation coastal zones (LECZs) that often host large population centers are particularly vulnerable to accelerating rates of relative sea-level rise (RSLR). Traditionally, tide-gauge records are used to obtain quantitative data on rates of RSLR, given that they are perceived to capture the rise of the sea surface, as well as land subsidence which is often substantial in such settings. We argue here that tide gauges in LECZs often provide ambiguous data because they ultimately measure RSLR with respect to a benchmark that is typically anchored tens of meters deep. This is problematic because the prime target of interest is usually the rate of RSLR with respect to the land surface. We illustrate this problem with newly obtained rod surface elevation table-marker horizon (RSET-MH) data from coastal Louisiana (n = 274) that show that shallow subsidence in the uppermost 5-10 m accounts for 60-85% of total subsidence. Since benchmarks in this region are anchored at 23 m depth on average, tide-gauge records by definition do not capture this important process and thus underestimate RSLR by a considerable amount. We show how RSET-MH data, combined with GPS and satellite altimetry data, enable us to bypass this problem. Rates of RSLR in coastal Louisiana over the past 6-10 years are 12 ± 8 mm/yr, considerably higher than numbers reported in recent studies based on tide-gauge analysis. Subsidence rates, averaged across this region, total about 9 mm/yr. It is likely that the problems with tide-gauge data are not unique to coastal Louisiana, so we suggest that our new approach to RSLR measurements may be useful in LECZs worldwide, with considerable implications for metropolitan areas like New Orleans that are located within such settings.
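The accounting argument can be summarized in one expression (notation mine). Relative to the land surface,

\[
\mathrm{RSLR} = \xi + S_{\mathrm{deep}} + S_{\mathrm{shallow}},
\]

where \(\xi\) is the geocentric rise of the sea surface (from satellite altimetry), \(S_{\mathrm{deep}}\) is subsidence below the benchmark or RSET foundation depth (from GPS), and \(S_{\mathrm{shallow}}\) is compaction of the uppermost ~5-10 m (from RSET-MH). A tide gauge anchored at depth records only \(\xi + S_{\mathrm{deep}}\), which is why it underestimates land-surface RSLR by the shallow-subsidence term that dominates in this setting.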
A benchmark for subduction zone modeling
NASA Astrophysics Data System (ADS)
van Keken, P.; King, S.; Peacock, S.
2003-04-01
Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complement to observational and experimental studies. Accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications
NASA Astrophysics Data System (ADS)
Blackburn, Megan Satterfield
2009-12-01
Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group, where it has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems, to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for evaluating the COMET code and determining its strengths and weaknesses for these medical physics applications. Response function calculations require Legendre polynomial expansions in space, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans, modeling a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse-mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to reference solutions obtained from pure Monte Carlo calculations with EGSnrc. When comparing the COMET results to the reference cases, a consistent pattern of differences appeared in each phantom case: better results were obtained for lower-energy incident photon beams and for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology; heterogeneous results were obtained in an amount of time comparable to the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem with smaller coarse meshes. Previously, the same expansion order had been used for every incident photon beam energy so that better comparisons could be made; this second study found that it is optimal to use different expansion orders based on the incident beam energy. Recommendations for future work with this method include testing higher expansion orders or possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with associated energy and angular distributions.
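The superposition idea is easiest to see in one dimension. Below is a toy response-matrix sketch (my illustration, not the COMET implementation): each coarse mesh is reduced to precomputed transmission and reflection responses acting on incoming partial currents, and the global solution is assembled by sweeping those local responses to a fixed point. COMET's actual response functions are high-order expansions in space and angle, not single scalars.

    import numpy as np

    def sweep_partial_currents(T, R, j_left_in, j_right_in, n_iter=200):
        """Toy 1-D coarse-mesh solve. T[i], R[i]: transmission/reflection
        responses of mesh i; boundary currents are fixed sources."""
        n = len(T)
        jr = np.zeros(n + 1)            # rightward partial currents at interfaces
        jl = np.zeros(n + 1)            # leftward partial currents
        jr[0], jl[n] = j_left_in, j_right_in
        for _ in range(n_iter):
            for i in range(n):          # left-to-right sweep
                jr[i + 1] = T[i] * jr[i] + R[i] * jl[i + 1]
            for i in reversed(range(n)):  # right-to-left sweep
                jl[i] = T[i] * jl[i + 1] + R[i] * jr[i]
        return jr, jl

    # Example: four identical meshes, unit beam entering from the left.
    jr, jl = sweep_partial_currents(np.full(4, 0.7), np.full(4, 0.2), 1.0, 0.0)

Because the per-mesh responses are precomputed once, only this cheap current-matching iteration is repeated for each new global configuration, which is the source of the minutes-versus-thousands-of-hours contrast reported above.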
The Suite for Embedded Applications and Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-10
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions, and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future-proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
The ab-initio density matrix renormalization group in practice.
Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic
2015-01-21
The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow, and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems, with the fault constraint implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.
Sensitivity of Geoelectrical Measurements to the Presence of Bacteria in Porous Media
We investigated the sensitivity of low frequency electrical measurements (0.1-1000 Hz) to (i) microbial cell density, (ii) live and dead cells, and (iii) microbial attachment onto mineral surfaces of clean quartz sands and iron oxide coated sands. Three strains of Pseudomonas aer...
75 FR 56086 - Combined Notice of Filings # 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-15
.... Applicants: Iberdrola Renewables; Shiloh I Wind Project LLC; Dillon Wind LLC; Dry Lake Wind Power, LLC... Operator submits tariff filing per 35.13(a)(2)(iii): Site/Interconnection Agreements between O&R and AER NY... Commission encourages electronic submission of protests and interventions in lieu of paper, using the FERC...
A Study of Semantic Features: Electrophysiological Correlates.
ERIC Educational Resources Information Center
Wetzel, Frederick; And Others
This study investigates whether words differing in a single contrastive semantic feature (positive/negative) can be discriminated by auditory evoked responses (AERs). Ten right-handed college students were provided with auditory stimuli consisting of 20 relational words (more/less; high/low, etc.) spoken with a middle American accent and computer…
Previous exposure assessment panel studies have observed considerable seasonal, between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure ...
Epidemiological studies frequently use central site concentrations as surrogates of exposure to air pollutants. Variability in air pollutant infiltration due to differential air exchange rates (AERs) is potentially a major factor affecting the relationship between central site c...
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
Testing and Benchmarking a 2014 GM Silverado 6L80 Six Speed Automatic Transmission
This work describes the method and test results of EPA's partial transmission benchmarking process, which involves installing both the engine and transmission in an engine dynamometer test cell with the engine wire harness tethered to its vehicle parked outside the test cell.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures used in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between a particle's position in PSO and a scheduling solution in JSP; this mapping is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures were tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan; the experiments were implemented in MATLAB. Based on the experimental results, OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is to demonstrate to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
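To make the encoding idea concrete, here is a minimal sketch of generic random-key decoding for JSP (my illustration of the general technique; the paper's exact OPPS and random-key variants may differ in detail):

    import numpy as np

    def decode_random_keys(position, n_jobs, n_machines):
        """Map a continuous particle position to a job-shop operation sequence.

        position holds one key per operation (n_jobs * n_machines entries).
        Ranking the keys yields a repetition-based sequence in which each job
        appears n_machines times; its k-th appearance is its k-th operation.
        """
        order = np.argsort(position)            # smallest key scheduled first
        return [int(d) % n_jobs for d in order]

    # Example: 3 jobs x 2 machines -> a sequence of 6 operations.
    keys = np.random.rand(3 * 2)
    print(decode_random_keys(keys, n_jobs=3, n_machines=2))

Because any real-valued position decodes to a feasible operation order, the standard continuous PSO velocity and position updates can be applied unchanged, which is the main appeal of key-based encodings.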
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong
2014-01-01
This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, higher-level species are aggregated from subpopulations at the lower level. At the bottom level, each subpopulation employs the canonical ABC method to search its part of the problem dimensions in parallel; these partial solutions are then assembled into complete solutions for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances of different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness. PMID:24592200
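A minimal sketch of the canonical ABC search step that each subpopulation employs (the standard Karaboga-style employed-bee update for minimization; the hierarchical aggregation and crossover/mutation layers of HABC are not shown):

    import numpy as np

    def abc_employed_phase(X, fitness, objective):
        """For each food source i, perturb one random dimension toward or away
        from a random neighbor k, keeping the move only if it improves the
        objective (greedy replacement). X: (n, d), fitness: (n,)."""
        n, d = X.shape
        for i in range(n):
            k = np.random.choice([j for j in range(n) if j != i])
            dim = np.random.randint(d)
            phi = np.random.uniform(-1.0, 1.0)
            candidate = X[i].copy()
            candidate[dim] += phi * (X[i, dim] - X[k, dim])
            f = objective(candidate)
            if f < fitness[i]:
                X[i], fitness[i] = candidate, f
        return X, fitness

In the hierarchical scheme described above, each subpopulation runs this search over only its assigned slice of the dimensions, and the slices are concatenated into full candidate solutions at the upper level.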
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse fields of engineering, science, and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems. In FA, the fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the problem dimension was varied, the algorithm performed better at lower dimensions than at higher ones.
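The change under study is confined to the ranking step. A minimal sketch (my illustration, assuming intensity and position arrays) of ranking fireflies with an O(n log n) sort in place of the original O(n^2) bubble sort:

    import numpy as np

    def rank_fireflies(intensity, positions):
        """Rank fireflies by light intensity, brightest first, before the
        attraction pass. np.argsort uses an O(n log n) quicksort-family
        routine; the attraction and movement equations are unchanged."""
        order = np.argsort(-intensity)   # indices in descending brightness
        return intensity[order], positions[order]

Since only the comparison count of the sort changes, any observed difference in total execution time comes from constant factors and memory behavior rather than from the FA search dynamics themselves.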
The Paucity Problem: Where Have All the Space Reactor Experiments Gone?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Marshall, Margaret A.
2016-10-01
The handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential to the validation of nuclear data, neutronics codes, and models of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) concern actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem: the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort in a single source of verified benchmark data.
Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M
2014-02-01
A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking, first by region (grouping states into West, Midwest, South, and Northeast) and then by size (dividing each region into approximately equal halves based on the number of maternity facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores reflecting the largest gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators, and no state was furthest from the benchmark on more than 7 indicators. The small-peer-group benchmarks in the South, West, and Midwest were better than the large-peer-group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, Midwest large, Midwest small, and South large peer groups, the benchmarks on 4-6 indicators showed that fewer than 50% of hospitals achieve ideal practice in every state. The evaluation presents benchmarks for peer-group state comparisons that provide potential and feasible targets for improvement.
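The benchmarking arithmetic itself is simple. A minimal sketch (with hypothetical column names, not the article's actual data layout) of computing peer-group benchmarks and per-state score gaps with pandas:

    import pandas as pd

    # Assumed layout: one row per state, columns "state", "region", "size",
    # plus one column per mPINC indicator score.
    def peer_benchmarks(df, indicators):
        """For each (region, size) peer group, take the best state score on
        each indicator as the benchmark and measure every state's gap."""
        best = df.groupby(["region", "size"])[indicators].transform("max")
        gaps = best - df[indicators]     # zero for the benchmark state itself
        return df[["state", "region", "size"]].join(gaps.add_suffix("_gap"))

Defining the benchmark as the observed within-peer-group maximum, as here, is what makes the resulting targets "potential and feasible": some comparable state has already achieved each of them.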
Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program
Bess, John D.; Montierth, Leland; Köberl, Oliver; ...
2014-10-09
Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently, a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, J. M.; Wiarda, D.; Dunn, M. E.
2011-09-30
New coupled neutron-gamma cross-section libraries have been developed for use in light water reactor (LWR) shielding applications, including pressure vessel dosimetry calculations. The libraries, which were generated using Evaluated Nuclear Data File/B Version VII Release 0 (ENDF/B-VII.0), use the same fine-group and broad-group energy structures as the VITAMIN-B6 and BUGLE-96 libraries. The processing methodology used to generate both libraries is based on the methods used to develop VITAMIN-B6 and BUGLE-96 and is consistent with ANSI/ANS 6.1.2. The ENDF data were first processed into the fine-group pseudo-problem-independent VITAMIN-B7 library and then collapsed into the broad-group BUGLE-B7 library. The VITAMIN-B7 library contains data for 391 nuclides, a significant increase compared to the VITAMIN-B6 library, which contained data for 120 nuclides. The BUGLE-B7 library contains data for the same nuclides as BUGLE-96 and maintains the same numeric IDs for those nuclides. The broad-group data include nuclides that are infinitely dilute and group-collapsed using a concrete weighting spectrum, as well as nuclides that are self-shielded and group-collapsed using weighting spectra representative of important regions of LWRs. The verification and validation of the new libraries include a set of critical benchmark experiments, a set of regression tests used to evaluate multigroup cross-section libraries in the SCALE code system, and three pressure vessel dosimetry benchmarks. Results of these tests confirm that the new libraries are appropriate for use in LWR shielding analyses and meet the requirements of Regulatory Guide 1.190.
Evaluating Biology Achievement Scores in an ICT Integrated PBL Environment
ERIC Educational Resources Information Center
Osman, Kamisah; Kaur, Simranjeet Judge
2014-01-01
Students' achievement in Biology is often looked upon as a benchmark for evaluating the mode of teaching and learning in higher education. Problem-based learning (PBL) is an approach that focuses on students solving a problem in collaborative groups. Eighty samples were involved in this study, divided into three groups: ICT…
Wilderness visitor management practices: a benchmark and an assessment of progress
Alan E. Watson
1989-01-01
In the short time that wilderness visitor management practices have been monitored, some obvious trends have developed. The managing agencies, however, have appeared to provide different solutions to similar problems. In the early years, these problems revolved around concern about overuse of the resource and crowded conditions. Some of those concerns exist today, but...
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the agents' ability to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs on nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs on permutation CSPs. The scalability of MAEA-CSPs in n for n-queens problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7 and has a linear time complexity; even for 10^7-queens problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
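The conflict-minimizing idea behind such encodings is easy to demonstrate on n-queens. A minimal sketch (my illustration of the classic min-conflicts heuristic, not the MAEA-CSPs algorithm itself):

    import random

    def min_conflicts_nqueens(n, max_steps=100_000):
        """Solve n-queens with one queen per column by repeatedly moving a
        conflicted queen to the row in its column with the fewest conflicts."""
        rows = [random.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            return sum(
                1 for c in range(n) if c != col and
                (rows[c] == row or abs(rows[c] - row) == abs(c - col))
            )

        for _ in range(max_steps):
            bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
            if not bad:
                return rows              # a conflict-free placement was found
            col = random.choice(bad)
            rows[col] = min(range(n), key=lambda r: conflicts(col, r))
        return None                      # no solution within the step budget

Like the paper's agents, this local repair rule acts only on locally sensed conflict information, which is what allows such methods to scale to very large n.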
Better Medicare Cost Report data are needed to help hospitals benchmark costs and performance.
Magnus, S A; Smith, D G
2000-01-01
To evaluate costs and achieve cost control in the face of new technology and demands for efficiency from both managed care and governmental payers, hospitals need to benchmark their costs against those of other comparable hospitals. Since they typically use Medicare Cost Report (MCR) data for this purpose, a variety of cost accounting problems with the MCR may hamper hospitals' understanding of their relative costs and performance. Managers and researchers alike need to investigate the validity, accuracy, and timeliness of the MCR's cost accounting data.
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve the accuracy of the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as the tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize the problems and approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections, so that reference calculations can be performed for a unit cell in which only one isotope of interest, among the isotopes in the composition, includes resonance cross sections. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation, for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized-geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was used directly for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
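For reference, a minimal sketch of the pcm (per cent mille) difference used when comparing the DeCART and MCNP multiplication factors quoted above, reading "pcm Δk" literally as (k_code − k_ref) × 10^5; the k-effective values below are hypothetical, chosen only to reproduce one of the quoted numbers.

```python
def pcm_diff(k_code, k_ref):
    """Difference between multiplication factors in pcm (1 pcm = 1e-5 dk)."""
    return (k_code - k_ref) * 1e5

# Hypothetical k-effective pair reproducing the quoted -362 pcm; the actual
# benchmark eigenvalues are not given in the summary above.
print(round(pcm_diff(1.13638, 1.14000)))  # -> -362
```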
This study was designed to provide understanding of the toxicity of naturally occurring asbestos (NOA) including Libby amphibole (LA), Sumas Mountain chrysotile (SM), El Dorado Hills tremolite (ED) and Ontario ferroactinolite cleavage fragments (ON). Rat-respirable fractions (aer...
ERIC Educational Resources Information Center
Tytler, Russell
2007-01-01
Australian Education Review (AER) 51 elaborates on issues raised by the Australian Council for Educational Research (ACER) Research Conference 2006: "Boosting Science Learning--What Will It Take?" It challenges current orthodoxies in science education and proposes a re-imagining that charts new directions for science teaching and…
between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...
ERIC Educational Resources Information Center
Keogh, Jayne; Garvis, Susanne; Pendergast, Donna; Diamond, Pat
2012-01-01
The intensification process associated with the first year of teaching has a significant impact on beginning teachers' personal and professional lives. This paper uses a narrative approach to investigate the electronic conversations of 16 beginning teachers on a self-initiated group email site. The participants' electronic exchanges demonstrated…
Mele, Antonietta; Calzolaro, Sara; Cannone, Gianluigi; Cetrone, Michela; Conte, Diana; Tricarico, Domenico
2014-01-01
The ATP-sensitive K+ (KATP) channel is an emerging pathway in skeletal muscle atrophy, a comorbid condition in diabetes. The in vitro effects of sulfonylureas and glinides were evaluated on protein content/muscle weight, fiber viability, mitochondrial succinic dehydrogenase (SDH) activity, and channel currents in oxidative soleus (SOL), glycolytic/oxidative flexor digitorum brevis (FDB), and glycolytic extensor digitorum longus (EDL) muscle fibers of mice, using biochemical and Cell Counting Kit-8 assays, image analysis, and patch-clamp techniques. The sulfonylureas were tolbutamide, glibenclamide, and glimepiride; the glinides were repaglinide and nateglinide. A search of the Food and Drug Administration Adverse Event Reporting System (FDA-AERS) database for atrophy-related signals associated with the use of these drugs in humans was also performed. After 24 h of incubation, the drugs reduced protein content/muscle weight and fiber viability more effectively in FDB and SOL than in EDL. The order of efficacy of the drugs in reducing protein content in FDB was: repaglinide (EC50 = 5.21 × 10−6) ≥ glibenclamide (EC50 = 8.84 × 10−6) > glimepiride (EC50 = 2.93 × 10−5) > tolbutamide (EC50 = 1.07 × 10−4) > nateglinide (EC50 = 1.61 × 10−4); in SOL it was: repaglinide (EC50 = 7.15 × 10−5) ≥ glibenclamide (EC50 = 9.10 × 10−5) > nateglinide (EC50 = 1.80 × 10−4) ≥ tolbutamide (EC50 = 2.19 × 10−4) > glimepiride (EC50 = –). The drug-induced atrophy can be explained by KATP channel block and by the enhancement of mitochondrial SDH activity. In an 8-month period, muscle atrophy was reported in 0.27% of the glibenclamide reports in humans and in 0.022% of reports for drugs other than sulfonylureas and glinides. No reports of atrophy were found for the other sulfonylureas and glinides in the FDA-AERS. Glibenclamide induces atrophy in animal experiments and in human patients; glimepiride shows less potential for inducing atrophy. PMID:25505577
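EC50 rankings like those above come from fitting concentration-response data. As a worked illustration (not the paper's analysis pipeline), a minimal sketch of estimating an EC50 by fitting a descending Hill curve with SciPy, on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, n):
    """Descending Hill curve: response falls from `top` toward 0 as the
    concentration rises past EC50 with slope n."""
    return top / (1.0 + (conc / ec50) ** n)

# Hypothetical dose-response data (e.g. normalized protein content vs drug
# concentration in M); values are illustrative, not from the paper.
conc = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
resp = np.array([0.98, 0.95, 0.71, 0.33, 0.12, 0.05])

(top, ec50, n), _ = curve_fit(hill, conc, resp, p0=(1.0, 1e-6, 1.0))
print(f"EC50 ~ {ec50:.2e} M")
```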
A Neuro-Inspired Spike-Based PID Motor Controller for Multi-Motor Robots with Low Cost FPGAs
Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J.; Paz-Vicente, Rafael; Civit-Balcells, Anton
2012-01-01
In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented on FPGAs. The controller targets DC motor speed control, using only spikes for information representation, processing, and motor driving; it could be applied to other motors with proper driver adaptation. The controller architecture represents one of the latest layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time in a massively parallel fashion through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform designed in our labs that allows direct control of DC motors: the AER-Robot. Experimental results show the viability of spike-based controllers, and hardware synthesis indicates low hardware requirements, allowing this controller to be replicated as a large number of parallel controllers working together for real-time robot control. PMID:22666004
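To make the rate-coded control idea concrete, here is a minimal software sketch of a PID speed loop in which spike counts per control window stand in for the AER spike streams the paper processes in hardware; the gains, plant model, and window length are illustrative, not taken from the paper.

```python
# Rate-coded PID speed loop sketch: spike counts per window approximate the
# continuous quantities that the hardware handles as spike streams.
kp, ki, kd = 0.8, 0.3, 0.05          # illustrative PID gains
dt, integral, prev_err = 0.01, 0.0, 0.0
speed = 0.0                           # plant state: encoder spikes per window

for step in range(3000):
    target_rate = 100.0               # desired encoder spike rate
    err = target_rate - speed         # spike-rate error
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    drive = kp * err + ki * integral + kd * deriv
    # crude first-order motor response to the (spike-frequency) drive signal
    speed += dt * (drive - 0.5 * speed)

print(round(speed, 1))  # settles near the 100-spike target
```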
Cheel, José; Minceva, Mirjana; Urajová, Petra; Aslam, Rabya; Hrouzek, Pavel; Kopecký, Jiří
2015-10-01
Aeruginosin-865 (Aer-865) was isolated from cultivated soil cyanobacteria using a combination of centrifugal partition chromatography (CPC) and gel permeation chromatography. The solubility of Aer-865 in different solvents was evaluated using the conductor-like screening model for real solvents (COSMO-RS). The CPC separation was performed in descending mode with a biphasic solvent system composed of water-n-BuOH-acetic acid (5:4:1, v/v/v). The upper phase was used as the stationary phase, whereas the lower phase was employed as the mobile phase at a flow rate of 10 mL/min. The revolution speed and temperature of the separation column were 1700 rpm and 25 °C, respectively. Preparative CPC separation followed by gel permeation chromatography was performed on 50 mg of crude extract, yielding Aer-865 (3.5 mg) with a purity over 95% as determined by HPLC. The chemical identity of the isolated compound was confirmed by comparing its spectroscopic data (UV, HRESI-MS, HRESI-MS/MS) with those of an authentic standard and data available in the literature.
Robust and Opportunistic Autonomous Science for a Potential Titan Aerobot
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Schaffer, Steve; Castano, Rebecca; Elfes, Alberto
2010-01-01
We are developing onboard planning and execution technologies to provide robust and opportunistic mission operations for a potential Titan aerobot. Aerobots have the potential to collect a vast amount of high-priority science data. However, to be effective, an aerobot must address several challenges, including communication constraints, extended periods without contact with Earth, uncertain and changing environmental conditions, maneuverability constraints, and potentially short-lived science opportunities. We are developing the AerOASIS system to test technologies that support autonomous science operations for a potential Titan aerobot. The planning and execution component of AerOASIS is able to generate mission operations plans that achieve science and engineering objectives while respecting mission and resource constraints, as well as adapting the plan to respond to new science opportunities. Our technology leverages prior work on the OASIS system for autonomous rover exploration. In this paper we describe how the OASIS planning component was adapted to address the unique challenges of a Titan aerobot, and we describe a field demonstration of the system with the JPL prototype aerobot.
NASA Astrophysics Data System (ADS)
Liu, Jian; Li, Jia; Cheng, Xu; Wang, Huaming
2018-02-01
In this paper, the process of coating AerMet100 steel onto forged 300M steel by laser cladding was investigated, with a thorough analysis of the chemical composition, microstructure, and hardness of the substrate, the cladding layer, and the transition zone. Results show that the composition and microhardness of the cladding layer are macroscopically homogeneous, with uniformly distributed bainite and a small amount of retained austenite in a martensitic matrix. The transition zone, which spans approximately 100 μm, yields a gradual change of composition from the cladding layer to the 300M steel matrix. The heat-affected zone (HAZ) can be divided into three zones: the sufficiently quenched zone (SQZ), the insufficiently quenched zone (IQZ), and the high-tempered zone (HTZ). The SQZ consists of a martensitic matrix and bainite, while the microstructures of the IQZ and HTZ are martensite + tempered martensite and tempered martensite + ferrite, respectively. These complicated microstructures in the HAZ are caused by the different peak heating temperatures and the heterogeneous microstructure of the as-received 300M steel.
Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization
NASA Astrophysics Data System (ADS)
Kolosnitsyn, A. V.
2018-02-01
The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method are described that shift the cutting plane so as to cut off the maximum number of simplex vertices; these modifications speed up the solution process. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
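To illustrate the cutting-plane machinery such methods build on (without reproducing the simplex-embedding scheme itself), here is a minimal Kelley-style sketch for a convex nondifferentiable objective; the objective, subgradient rule, and box bounds are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def f_and_subgrad(x):
    """f(x) = max(|x1 - 1|, |x2 + 2|); a subgradient picks the active piece."""
    vals = [abs(x[0] - 1.0), abs(x[1] + 2.0)]
    if vals[0] >= vals[1]:
        g = np.array([np.sign(x[0] - 1.0) or 1.0, 0.0])
    else:
        g = np.array([0.0, np.sign(x[1] + 2.0) or 1.0])
    return max(vals), g

bounds = [(-5, 5), (-5, 5), (None, None)]   # variables (x1, x2, t)
cuts_A, cuts_b = [], []
x = np.array([4.0, 4.0])
for _ in range(30):
    fx, g = f_and_subgrad(x)
    # cut: t >= f(x_k) + g.(x - x_k)  <=>  g.x - t <= g.x_k - f(x_k)
    cuts_A.append([g[0], g[1], -1.0])
    cuts_b.append(g @ x - fx)
    res = linprog(c=[0, 0, 1], A_ub=cuts_A, b_ub=cuts_b, bounds=bounds)
    x = res.x[:2]
print(x)  # approaches the minimizer (1, -2)
```

Each iteration adds a supporting hyperplane of f and re-solves a small LP; the paper's method instead maintains a simplex that the cuts shrink, which is what the vertex-cutting modifications accelerate.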
Improving Protein Fold Recognition by Deep Learning Networks.
Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin
2015-12-04
For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict whether a given query-template protein pair belongs to the same structural fold. The input features stemmed from the protein sequence and the structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 other methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75, consisting of about one million protein pairs, at three levels of fold recognition (protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rates of ensembled DN-Fold for Top 1 predictions are 84.5%, 61.5%, and 33.6%, and for Top 5 are 91.2%, 76.5%, and 60.7%, at the family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed comparable results at the family and superfamily levels. Finally, we extended the binary classification problem of fold recognition to a real-value regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.
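The task above is framed as binary classification of query-template pairs. A toy stand-in showing that framing (a shallow classifier on synthetic pair features, not DN-Fold's deep network or Lindahl's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic pair features, e.g. alignment score, sequence identity, etc.;
# the label says whether the query-template pair shares a fold.
rng = np.random.default_rng(0)
n = 1000
features = rng.normal(size=(n, 4))
labels = (features @ np.array([1.5, -0.7, 0.9, 0.2]) +
          rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression().fit(features[:800], labels[:800])
print("held-out accuracy:", clf.score(features[800:], labels[800:]))
```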
Listening to the occupants: a Web-based indoor environmental quality survey.
Zagreus, Leah; Huizenga, Charlie; Arens, Edward; Lehrer, David
2004-01-01
Building occupants are a rich source of information about indoor environmental quality and its effect on comfort and productivity. The Center for the Built Environment has developed a Web-based survey and accompanying online reporting tools to quickly and inexpensively gather, process and present this information. The core questions assess occupant satisfaction with the following IEQ areas: office layout, office furnishings, thermal comfort, indoor air quality, lighting, acoustics, and building cleanliness and maintenance. The survey can be used to assess the performance of a building, identify areas needing improvement, and provide useful feedback to designers and operators about specific aspects of building design features and operating strategies. The survey has been extensively tested and refined and has been conducted in more than 70 buildings, creating a rapidly growing database of standardized survey data that is used for benchmarking. We present three case studies that demonstrate different applications of the survey: a pre/post analysis of occupants moving to a new building, a survey used in conjunction with physical measurements to determine how environmental factors affect occupants' perceived comfort and productivity levels, and a benchmarking example of using the survey to establish how new buildings are meeting a client's design objectives. In addition to its use in benchmarking a building's performance against other buildings, the CBE survey can be used as a diagnostic tool to identify specific problems and their sources. Whenever a respondent indicates dissatisfaction with an aspect of building performance, a branching page follows with more detailed questions about the nature of the problem. This systematically collected information provides a good resource for solving indoor environmental problems in the building. By repeating the survey after a problem has been corrected it is also possible to assess the effectiveness of the solution.
FY 2003 Top 200 Users Survey Report
2003-08-01
ACSI Federal Government Benchmark: 68.6%, 71.1%, 70.2%; DTIC exceeds the benchmark by +8.4, +10.9, and +8.8 points (ACSI is the official service quality benchmark for the Federal Government; Fig. 2.3). Users reported the following: 75 percent of users rated "Online Service Quality" as "Very Good" to "Excellent," 22 percent as "Good," and 3 percent as...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Marck, S. C.
Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, and W. (authors)
Drugs Associated with More Suicidal Ideations Are also Associated with More Suicide Attempts
Robertson, Henry T.; Allison, David B.
2009-01-01
Context In randomized controlled trials (RCTs), some drugs, including CB1 antagonists for obesity treatment, have been shown to cause increased suicidal ideation. A key question is whether drugs that increase or are associated with increased suicidal ideations are also associated with suicidal behavior, or whether drug–induced suicidal ideations are unlinked epiphenomena that do not presage the more troubling and potentially irrevocable outcome of suicidal behavior. This is difficult to determine in RCTs because of the rarity of suicidal attempts and completions. Objective To determine whether drugs associated with more suicidal ideations are also associated with more suicide attempts in large spontaneous adverse event (AE) report databases. Methodology Generalized linear models with negative binomial distribution were fitted to Food and Drug Administration (FDA) Adverse Event (AE) Reporting System (AERS) data from 2004 to 2008. A total of 1,404,470 AEs from 832 drugs were analyzed as a function of reports of suicidal ideations; other non-suicidal adverse reactions; drug class; proportion of reports from males; and average age of subject for which AE was filed. Drug was treated as the unit of analysis, thus the statistical models effectively had 832 observations. Main Outcome Measures Reported suicide attempts and completed suicides per drug. Results 832 drugs, ranging from abacavir to zopiclone, were evaluated. The 832 drugs, as primary suspect drugs in a given adverse event, accounted for over 99.9% of recorded AERS. Suicidal ideations had a significant positive association with suicide attempts (p<.0001) and had an approximately 131-fold stronger magnitude of association than non-suicidal AERs, after adjusting for drug class, gender, and age. Conclusions In AE reports, drugs that are associated with increased suicidal ideations are also associated with increased suicidal attempts or completions. This association suggests that drug-induced suicidal ideations observed in RCTs plausibly represent harbingers that presage the more serious suicide attempts and completions and should be a cause for concern. PMID:19798416
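The analysis above fits negative binomial models of per-drug attempt counts. A minimal sketch of that model family with statsmodels, on synthetic per-drug data (the counts, covariate, and dispersion are illustrative, not AERS data):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic per-drug data: attempt counts regressed on ideation counts,
# with drug as the unit of analysis, as in the paper's design.
rng = np.random.default_rng(1)
n_drugs = 200
ideations = rng.poisson(5, n_drugs)
attempts = rng.poisson(0.5 + 0.3 * ideations)

X = sm.add_constant(ideations.astype(float))
model = sm.GLM(attempts, X, family=sm.families.NegativeBinomial(alpha=1.0))
print(model.fit().summary().tables[1])
```

A real reanalysis would add the paper's other covariates (drug class, gender proportion, average age) as columns of X.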
Gañan, Y; Macias, D; Basco, R D; Merino, R; Hurle, J M
1998-04-01
The formation of the digits in amniota embryos is accompanied by apoptotic cell death of the interdigital mesoderm triggered through BMP signaling. Differences in the intensity of this apoptotic process account for the establishment of the different morphological types of feet observed in amniota (i.e., free-digits, webbed digits, lobulated digits). The molecular basis accounting for the differential pattern of interdigital cell death remains uncertain since the reduction of cell death in species with webbed digits is not accompanied by a parallel reduction in the pattern of expression of bmp genes in the interdigital regions. In this study we show that the duck interdigital web mesoderm exhibits an attenuated response to both BMP-induced apoptosis and TGFbeta-induced chondrogenesis in comparison with species with free digits. The attenuated response to these signals is accompanied by a reduced pattern of expression of msx-1 and msx-2 genes. Local application of FGF in the duck interdigit expands the domain of msx-2 expression but not the domain of msx-1 expression. This change in the expression of msx-2 is followed by a parallel increase in spontaneous and exogenous BMP-induced interdigital cell death, while the chondrogenic response to TGFbetas is unchanged. The regression of AER, as deduced by the pattern of extinction of fgf-8 expression, takes place in a similar fashion in the chick and duck regardless of the differences in interdigital cell death and msx gene expression. Implantation of BMP-beads in the distal limb mesoderm induces AER regression in both the chick and duck. This finding suggests an additional role for BMPs in the physiological regression of the AER. It is proposed that the formation of webbed vs free-digit feet in amniota results from a premature differentiation of the interdigital mesoderm into connective tissue caused by a reduced expression of msx genes in the developing autopod. Copyright 1998 Academic Press.
Nguyen, H V; Caruso, D; Lebrun, M; Nguyen, N T; Trinh, T T; Meile, J-C; Chu-Ky, S; Sarter, S
2016-08-01
The aims of this study were to characterize the antibacterial activity and the chemotype of Litsea cubeba leaf essential oil (EO) harvested in North Vietnam and to investigate the biological effects induced by the leaf powder on growth, nonspecific immunity, and survival of common carp (Cyprinus carpio) challenged with Aeromonas hydrophila. The EO showed a prevalence of linalool (95%, n = 5). It was bactericidal against the majority of tested strains, with minimum inhibitory concentrations ranging from 0.72 to 2.89 mg ml^-1 (Aer. hydrophila, Edwardsiella tarda, Vibrio furnissii, Vibrio parahaemolyticus, Streptococcus garvieae, Escherichia coli, Salmonella Typhimurium). Fish were fed diets supplemented with 0 (control), 2, 4, and 8% leaf powder for 21 days. Nonspecific immunity parameters (lysozyme, haemolytic, and bactericidal activities of plasma) were assessed 21 days after the feeding period and before the experimental infection. Weight gain, specific growth rate, and feed conversion ratio were improved by supplementation of L. cubeba in a dose-related manner, and a significant difference appeared at the highest dose (8%) when compared to the control. The increase in plasma lysozyme was significant for all the treated groups. Haemolytic activity was higher for the groups fed with 4 and 8% plant powder. Antibacterial activity increased significantly for the 8% dose only. Litsea cubeba leaf powder increased the nonspecific immunity of carp in a dose-related manner. After infection with Aer. hydrophila, survival of fish fed the 4 and 8% L. cubeba doses was significantly higher than that of fish fed the 2% dose and the control. A 4-8% L. cubeba leaf powder supplementation diet (from this specific linalool-rich chemotype) can be used in aquaculture to reduce the antibiotic burden and the impact of diseases caused by Aer. hydrophila. © 2016 The Society for Applied Microbiology.
A suite of exercises for verifying dynamic earthquake rupture codes
Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis
2018-01-01
We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.
Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.
1995-11-01
Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models, RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE), represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization).
Robust visual tracking via multiple discriminative models with object proposals
NASA Astrophysics Data System (ADS)
Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin
2018-04-01
Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. Firstly, changes in target location and scale are captured by many high-quality object proposals, which are represented by deep convolutional features to encode target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is used to warp object proposals of various sizes into fixed-length vectors, from which a discriminative model can be learned conveniently. Lastly, these historical snapshot vectors are trained with models of different lifetimes. Based on an entropy decision mechanism, a model degraded by drift can be corrected by selecting the best discriminative model. This improves the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.
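The entropy-based selection step can be sketched as follows; the per-proposal score distributions and the tie between entropy and drift are illustrative, and the paper's exact decision rule may differ.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a score distribution; low entropy = confident model."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Hypothetical per-proposal scores from three snapshot models kept with
# different lifetimes: a peaked distribution trusts one proposal, a flat
# one is ambiguous and may indicate drift.
scores = {
    "short-term": [0.70, 0.10, 0.10, 0.10],
    "mid-term":   [0.30, 0.30, 0.20, 0.20],
    "long-term":  [0.25, 0.25, 0.25, 0.25],
}
best = min(scores, key=lambda k: entropy(scores[k]))
print("select:", best)  # -> short-term
```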
2012-02-09
...benchmark problem we contacted Bertrand LeCun, who in their project CHOC (2005-2008) had applied their parallel branch-and-bound framework BOB++ to the RLT1
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in current carbon emissions benchmark setting systems. The primary considerations for industrial carbon emissions standards relate mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause double counting to some extent due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, among the first carbon emissions trading pilot regions in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each party's responsibility in a practical way, through the measurement of complex production and supply chains, and aims to reduce carbon emissions at their original sources. The method is designed to operate under uncertain internal and external contexts and is expected to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
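The embodied-intensity idea rests on the standard input-output identity e_total = e_direct (I - A)^(-1), which propagates direct emission intensities through inter-industry purchases via the Leontief inverse. A minimal sketch with an illustrative two-sector technical-coefficient matrix (not Beijing data):

```python
import numpy as np

# Embodied (direct + indirect) emission intensities via the Leontief inverse.
A = np.array([[0.10, 0.20],      # illustrative inter-industry coefficients
              [0.30, 0.05]])
e_direct = np.array([2.0, 0.5])  # direct CO2 per unit output of each sector

e_total = e_direct @ np.linalg.inv(np.eye(2) - A)
print(e_total)  # embodied intensity per sector, the basis for a benchmark
```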
Instruction-matrix-based genetic programming.
Li, Gang; Wang, Jin Feng; Lee, Kin Hong; Leung, Kwong-Sak
2008-08-01
In genetic programming (GP), evolving tree nodes separately would reduce the huge solution space. However, tree nodes are highly interdependent with respect to their fitness. In this paper, we propose a new GP framework, namely, instruction-matrix (IM)-based GP (IMGP), to handle their interactions. IMGP maintains an IM to evolve tree nodes and subtrees separately. IMGP extracts program trees from the IM and updates the IM with the information of the extracted program trees. As the IM actually keeps most of the information of the schemata of GP and evolves the schemata directly, IMGP is effective and efficient. Our experimental results on benchmark problems have verified that IMGP is not only better than canonical GP in terms of the quality of the solutions and the number of program evaluations, but also better than some related GP algorithms. IMGP can also be used to evolve programs for classification problems. The classifiers obtained have higher classification accuracies than four other GP classification algorithms on four benchmark classification problems. The testing errors are also comparable to or better than those obtained with well-known classifiers. Furthermore, an extended version, called condition matrix for rule learning, has been used successfully to handle multiclass classification problems.
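The extraction step can be pictured as sampling each tree node from a per-position distribution over primitives. A minimal sketch of that idea; the primitive set, tree layout, and depth cap are illustrative, not the paper's exact scheme, and the matrix-update step is omitted.

```python
import random

PRIMITIVES = ["+", "-", "*", "x", "1"]

def extract_program(matrix, node=0):
    """Sample an expression tree: each node position holds a weight vector
    over primitives (the instruction matrix row for that position)."""
    if node >= len(matrix):
        return random.choice(["x", "1"])        # terminal fallback at depth cap
    op = random.choices(PRIMITIVES, weights=matrix[node])[0]
    if op in ("+", "-", "*"):
        left = extract_program(matrix, 2 * node + 1)
        right = extract_program(matrix, 2 * node + 2)
        return f"({left} {op} {right})"
    return op

im = [[1.0] * len(PRIMITIVES) for _ in range(7)]  # uniform 7-position matrix
print(extract_program(im))
```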
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
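The energy-balance error measure mentioned above can be sketched as a comparison of depth-integrated energy flux at two ranges, which should match in a lossless waveguide. The field values and normalization below are synthetic and simplified, not the paper's solver output:

```python
import numpy as np

def depth_integrated_flux(p, dz):
    """Proxy for acoustic energy flux through a vertical slice: the
    depth-integrated squared pressure magnitude."""
    return np.sum(np.abs(p) ** 2) * dz

dz = 0.5
p_near = np.exp(1j * np.linspace(0, 40, 200)) * 1.00  # synthetic field at range r1
p_far = np.exp(1j * np.linspace(0, 40, 200)) * 0.97   # synthetic field at range r2

e1, e2 = (depth_integrated_flux(p, dz) for p in (p_near, p_far))
print("relative energy deviation:", abs(e1 - e2) / e1)  # ~6% here
```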