Expanding Your Laboratory by Accessing Collaboratory Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.
2004-03-01
The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility set up by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers throughout the United States and the world. Access to the lab is requested through a peer-review proposal process, and the scientists who use the facility are generally referred to as ‘users’. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, a remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnetic field strength from 7.05 T to 21.1 T. Staff associated with the NMR facility offer scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of high-field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review focuses on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.
40 CFR 98.406 - Data reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data..., isobutane, and pentanes plus. (3) Annual volumes in Mscf of natural gas received for processing. (4) Annual... report for each LDC shall contain the following information. (1) Annual volume in Mscf of natural gas...
40 CFR 98.406 - Data reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data..., isobutane, and pentanes plus. (3) Annual volumes in Mscf of natural gas received for processing. (4) Annual... Mscf of natural gas received by the LDC at its city gate stations for redelivery on the LDC's...
40 CFR 98.406 - Data reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data..., isobutane, and pentanes plus. (3) Annual volumes in Mscf of natural gas received for processing. (4) Annual... Mscf of natural gas received by the LDC at its city gate stations for redelivery on the LDC's...
40 CFR 98.406 - Data reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data..., isobutane, and pentanes plus. (3) Annual volumes in Mscf of natural gas received for processing. (4) Annual... shall contain the following information. (1) Annual volume in Mscf of natural gas received by the LDC at...
40 CFR 98.406 - Data reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.406 Data..., isobutane, and pentanes plus. (3) Annual volumes in Mscf of natural gas received for processing. (4) Annual... Mscf of natural gas received by the LDC at its city gate stations for redelivery on the LDC's...
40 CFR 98.408 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.408 Definitions. All terms...) Natural Gas 1.027 MMBtu/Mscf 53.02 Propane 3.836 MMBtu/bbl 63.02 Normal butane 4.326 MMBtu/bbl 64.93... Unit Default CO2 emission value(MT CO2/Unit) Natural Gas Mscf 0.054452 Propane Barrel 0.241745 Normal...
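The default CO2 emission value quoted above for natural gas follows directly from the two listed factors, converting kilograms to metric tons; as a quick check:

\[
1.027\ \mathrm{MMBtu/Mscf} \times 53.02\ \mathrm{kg\,CO_2/MMBtu} = 54.45154\ \mathrm{kg\,CO_2/Mscf} \approx 0.054452\ \mathrm{MT\,CO_2/Mscf}.
\]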
ERIC Educational Resources Information Center
Gamble, Charles W.; Hamblin, Arthur G.
1986-01-01
Discusses the use of a sentence completion instrument predicated on Lazarus' multimodal system. The instrument, entitled The Multimodal Sentence Completion Form for Children (MSCF-C), is designed to systematically assess client needs and assist in identifying intervention strategies. Presents a case study of a 12-year-old, sixth-grade student.…
Preliminary report on the commercial viability of gas production from natural gas hydrates
Walsh, M.R.; Hancock, S.H.; Wilson, S.J.; Patil, S.L.; Moridis, G.J.; Boswell, R.; Collett, T.S.; Koh, C.A.; Sloan, E.D.
2009-01-01
Economic studies on simulated gas hydrate reservoirs have been compiled to estimate the price of natural gas that may lead to economically viable production from the most promising gas hydrate accumulations. As a first estimate, $CDN2005 12/Mscf is the lowest gas price that would allow economically viable production from gas hydrates in the absence of associated free gas, while an underlying gas deposit will reduce the viability price estimate to $CDN2005 7.50/Mscf. Results from a recent analysis of the simulated production of natural gas from marine hydrate deposits are also considered in this report; on an IROR basis, it is $US2008 3.50-4.00/Mscf more expensive to produce marine hydrates than conventional marine gas assuming the existence of sufficiently large marine hydrate accumulations. While these prices represent the best available estimates, the economic evaluation of a specific project is highly dependent on the producibility of the target zone, the amount of gas in place, the associated geologic and depositional environment, existing pipeline infrastructure, and local tariffs and taxes. © 2009 Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersen, R.J.; Cadotte, J.E.; Conway, E.J.
1976-01-01
The object of this program was to develop novel and unique membranes for separating acid gases from coal gasification streams. Many candidate membranes, including cationic, hydrophilic, and silicone, were tested. Optimum separation properties were possessed by membranes formulated from crosslinked methyl cellulose coated on polysulfone support films. The observed separation properties were explained theoretically by the solubility of the various gases in the water contained within the membranes rather than by activated transport. Each of the acid gas clean-up processes considered required additional sulfur clean-up, a guard chamber, and a Claus plant for recovering sulfur. These additional costs were calculated and added to the base costs for acid gas removal from the raw SNG. When the additional costs were added to the costs of the Rectisol, Benfield, Sulfinol, and fluidized dolomite processes, the total costs ranged from 43 to 49 cents/Mscf. For the membrane process the additional sulfur removal costs were about 3.3 cents/Mscf to be added to the base costs for acid gas removal. The best membrane composition found during this program, one which exhibited a CO2/H2 selectivity of 13 at a CO2 flux of 6 ft³/ft²·hr at 100 psi, would entail a process cost of about 53 cents/Mscf with these additions. This is about 7 cents/Mscf more than the average of the other processes. No better membrane performance is predicted on the basis of the experiments performed. Without a shift in several cost factors, membranes cannot be competitive. The possibility that reduced energy availability could lead to such shifts should not be discounted but is not foreseen in the near future.
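The closing cost comparison is straightforward arithmetic; a minimal Python sketch (variable names are illustrative, figures taken from the abstract) makes the roughly 7 cents/Mscf penalty explicit:

```python
# Acid-gas removal costs in cents/Mscf, as quoted in the abstract above.
conventional_avg = (43 + 49) / 2   # midpoint of the 43-49 range for the
                                   # Rectisol/Benfield/Sulfinol/dolomite processes
membrane_total = 53                # best membrane process, including the
                                   # ~3.3 cents/Mscf extra sulfur removal
print(membrane_total - conventional_avg)  # -> 7.0 cents/Mscf cost penalty
```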
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginovska-Pangovska, Bojana; Autrey, Thomas; Parab, Kshitij K.
We report on a combined computational and experimental study of the activation of hydrogen using 2,6-lutidine (Lut)/BCl3 Lewis pairs. Herein we describe the synthetic approach used to obtain a new FLP, Lut-BCl3, that activates molecular H2 at ~10 bar, 100 °C in toluene or lutidine as the solvent. The resulting compound is an unexpected neutral hydride, LutBHCl2, rather than the ion pair, which we attribute to ligand redistribution. The mechanism for activation was modeled with density functional theory and accurate G3(MP2)B3 theory. The dative bond in Lut-BCl3 is calculated to have a bond enthalpy of 15 kcal/mol. The separated pair is calculated to react with H2 and form the [LutH+][HBCl3–] ion pair with a barrier of 13 kcal/mol. Metathesis with LutBCl3 produces LutBHCl2 and [LutH][BCl4]. The overall reaction is exothermic by 8.5 kcal/mol. An alternative pathway was explored involving a lutidine–borenium cation pair activating H2. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Biosciences, and Geosciences, and was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.
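The reported energetics can be collected into one bookkeeping line (values from the abstract; the convention that exothermic enthalpies are negative is assumed):

\[
\Delta H_{\mathrm{dative}}(\mathrm{Lut\text{-}BCl_3}) \approx 15~\mathrm{kcal/mol},\quad
\Delta H^{\ddagger}(\mathrm{H_2~activation}) \approx 13~\mathrm{kcal/mol},\quad
\Delta H_{\mathrm{rxn}} \approx -8.5~\mathrm{kcal/mol}.
\]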
Acid/base equilibria in clusters and their role in proton exchange membranes: Computational insight
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glezakou, Vanda A; Dupuis, Michel; Mundy, Christopher J
2007-10-24
We describe molecular orbital theory and ab initio molecular dynamics studies of acid/base equilibria of clusters AH:(H2O)n ↔ A−:H+(H2O)n in the low-hydration regime (n = 1-4), where AH is a model of perfluorinated sulfonic acids, RSO3H (R = CF3CF2), encountered in polymeric electrolyte membranes of fuel cells. Free energy calculations on the neutral and ion pair structures for n = 3 indicate that the two configurations are close in energy and are accessible in the fluctuation dynamics of proton transport. For n = 1,2 the only relevant configuration is the neutral form. This was verified through ab initio metadynamics simulations. These findings suggest that bases are directly involved in the proton transport at low hydration levels. In addition, the gas phase proton affinity of the model sulfonic acid RSO3H was found to be comparable to the proton affinity of water. Thus, protonated acids can also play a role in proton transport under low hydration conditions and under high concentration of protons. This work was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, US Department of Energy (DOE) under Contract DE-AC05-76RL01830. Computations were performed on computers of the Molecular Interactions and Transformations (MI&T) group and the MSCF facility of EMSL, sponsored by the US DOE and OBER and located at PNNL. This work also benefited from resources of the National Energy Research Scientific Computing Center, supported by the Office of Science of the US DOE under Contract No. DE-AC03-76SF00098.
Hierarchical detection of red lesions in retinal images by multiscale correlation filtering
NASA Astrophysics Data System (ADS)
Zhang, Bob; Wu, Xiangqian; You, Jane; Li, Qin; Karray, Fakhri
2009-02-01
This paper presents an approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR) -- a common and severe complication of long-term diabetes which damages the retina and causes blindness. Since red lesions are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on Multiscale Correlation Filtering (MSCF) and dynamic thresholding is developed. It consists of two levels: Red Lesion Candidate Detection (coarse level) and True Red Lesion Detection (fine level). The approach was evaluated using data from the Retinopathy Online Challenge (ROC) competition website, and we conclude that the method is effective and efficient.
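As a rough illustration of the coarse (candidate-detection) level, a minimal Python sketch of multiscale correlation filtering is given below; the Gaussian lesion template, scale set, and function names are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of multiscale correlation filtering (MSCF) for red-lesion
# candidate detection. Template shape, scales, and names are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_template(sigma, size):
    """Zero-mean, unit-variance Gaussian blob used as a lesion template."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    t = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return (t - t.mean()) / t.std()

def mscf_response(image, sigmas=(1, 2, 3, 4, 5)):
    """Peak matched-filter (correlation) response over all scales."""
    best = np.full(image.shape, -np.inf)
    for s in sigmas:
        tpl = gaussian_template(s, int(6 * s) | 1)  # odd-sized window
        # Correlation is convolution with the flipped template.
        resp = fftconvolve(image, tpl[::-1, ::-1], mode="same") / tpl.size
        best = np.maximum(best, resp)
    return best

# Coarse level: threshold mscf_response(...) to obtain red-lesion candidates;
# the fine level would then classify each candidate as a true lesion or not.
```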
Dehydration pathways of 1-propanol on HZSM-5 in the presence and absence of water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhi, Yuchun; Shi, Hui; Mu, Linyu
The Brønsted acid-catalyzed gas-phase dehydration of 1-propanol (0.075-4 kPa) was studied on zeolite H-MFI (Si/Al = 26, containing minimal amounts of extraframework Al moieties) in the absence and presence of co-fed water (0-2.5 kPa) at 413-443 K. It is shown that propene can be formed from monomeric and dimeric adsorbed 1-propanol. The stronger adsorption of 1-propanol relative to water indicates that the reduced dehydration rates in the presence of water are not a consequence of competitive adsorption between 1-propanol and water. Instead, the deleterious effect is related to the different extents of stabilization of adsorbed intermediates and the relevant elimination/substitution transition states by water. Water stabilizes the adsorbed 1-propanol monomer significantly more than the elimination transition state, leading to a higher activation barrier and a greater entropy gain for the rate-limiting step, which eventually leads to propene. In a similar manner, an excess of 1-propanol stabilizes the adsorbed state of 1-propanol more than the elimination transition state. In comparison with the monomer-mediated pathway, the adsorbed dimer and the relevant transition states for propene and ether formation are similarly, though less effectively, stabilized by intrazeolite water molecules. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences, and was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.
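The stabilization argument can be compressed into one hedged relation (symbols are illustrative, not the paper's notation): if co-fed water lowers the free energy of the adsorbed 1-propanol monomer by \(\delta_{\mathrm{ads}}\) and that of the elimination transition state by \(\delta_{\mathrm{TS}}\), the apparent barrier becomes

\[
\Delta G^{\ddagger}_{\mathrm{app}} = \Delta G^{\ddagger}_{0} + \left(\delta_{\mathrm{ads}} - \delta_{\mathrm{TS}}\right),
\qquad \delta_{\mathrm{ads}} > \delta_{\mathrm{TS}} \Rightarrow \text{slower dehydration},
\]

consistent with the reduced rates observed when water is co-fed.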
Nishi, N; Ishikawa, R; Inoue, H; Nishikawa, M; Kakeda, M; Yoneya, T; Tsumura, H; Ohashi, H; Yamaguchi, Y; Motoki, K; Sudo, T; Mori, K J
1996-09-01
The findings that the murine marrow stromal cell line MS-5 supported the proliferation of human lineage-negative (Lin-) CD34+CD38- bone marrow cells in long-term culture have been reported. In this study, we analyzed this proliferating activity of MS-5-conditioned medium (CM) on human primitive hematopoietic cells. When Lin-CD34+CD38- cells of normal human cord blood cells were co-cultured with MS-5, colony forming cells (CFCs) were maintained over 7 weeks in vitro. Preventing contact between MS-5 and Lin-CD34+CD38- cells by using a membrane filter (0.45 micron) had a negligible effect on this activity. This indicated that the activity of MS-5 on human primitive hematopoietic cells is a soluble factor(s) secreted from MS-5, which is not induced by contact between MS-5 and Lin-CD34+CD38- cells. We tried to purify this soluble activity. An active material with a molecular weight of about 150 kDa, determined by gel filtration chromatography, solely supported the growth of Lin-CD34+CD38- cells and Mo7e, a human megakaryocytic cell line. This activity not only reacted with anti-mouse stem cell factor (mSCF) antibody on Western blots, but it was also neutralized in the presence of anti-mSCF antibody. Another active material with a molecular weight of about 20-30 kDa synergized with mSCF to stimulate the growth of Lin-CD34+CD38- cells but failed to do so alone, although this synergy was inhibited in the presence of soluble mouse granulocyte-colony stimulating factor (mG-CSF) receptor, which is a chimeric protein consisting of the extracellular domain of mG-CSF receptor and the Fc region of human IgG1. In addition, the latter molecule supported the growth of the G-CSF dependent cell line FD/GR3, which is a murine myeloid leukemia cell line, FDC-P2, transfected with mG-CSF receptor cDNA. Addition of anti-mSCF antibody and soluble mG-CSF receptor to the culture completely abrogated the activity of MS-5-CM. Recombinant (r) mSCF and rmG-CSF had synergistic activity on the growth of Lin-CD34+CD38- cells. These results indicated that the activity on Lin-CD34+CD38- cells included in MS-5-CM is based upon the synergistic effects of mSCF and mG-CSF.
40 CFR Table Nn-1 to Subpart Nn of... - Default Values for Calculation Methodology 1 of This Subpart
Code of Federal Regulations, 2011 CFR
2011-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
40 CFR Table Nn-1 to Subpart Nn of... - Default Values for Calculation Methodology 1 of This Subpart
Code of Federal Regulations, 2013 CFR
2013-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
40 CFR Table Nn-1 to Subpart Nn of... - Default Values for Calculation Methodology 1 of This Subpart
Code of Federal Regulations, 2012 CFR
2012-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
40 CFR 98.403 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.403... (Mscf) for natural gas and bbl for NGLs). HHVh = Higher heating value of product “h” supplied (MMBtu... LDC shall follow the procedures below. (1) For natural gas that is received for redelivery to...
40 CFR 98.403 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.403...). Fuel = Total annual volume of product “h” supplied (volume per year, in Mscf for natural gas and bbl... procedures below. (1) For natural gas that is received for redelivery to downstream gas transmission...
40 CFR 98.403 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.403... (Mscf) for natural gas and bbl for NGLs). HHVh = Higher heating value of product “h” supplied (MMBtu... LDC shall follow the procedures below. (1) For natural gas that is received for redelivery to...
40 CFR 98.403 - Calculating GHG emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.403... (Mscf) for natural gas and bbl for NGLs). HHVh = Higher heating value of product “h” supplied (MMBtu... LDC shall follow the procedures below. (1) For natural gas that is received for redelivery to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holditch, S.A.; Whitehead, W.S.; Davidson, B.M.
Maxus Exploration drilled the Carl Ellis E-3 well in the Ellis Ranch Field, Ochiltree County, Texas in December 1991. The GRI cooperative research program on this well included coring, logging, stress testing, pre-fracture well testing, a mini-frac, post-fracture production data analysis, a fracture treatment, and a post-fracture well test. The well was completed in the Cleveland formation at 6,929-7,008 feet. After a ballout treatment, the well flowed 32 Mscf/day. Results of the pre-fracture pressure buildup test indicate a permeability-thickness product of 1.45 md-ft, a skin factor of -0.05, and a reservoir pressure of 1900 psi. The well was fracture treated with 70,000 gallons of a 40 lb/1000 gallon linear gel and 185,000 pounds of 20/40 sand. The initial post-fracture flow rate was approximately 500 Mscf/day. Post-fracture analysis with TRIFRAC indicated that the propped fracture height at the wellbore was 330 feet and the propped fracture length was 93 feet.
40 CFR Table Nn-2 to Subpart Nn of... - Default Values for Calculation Methodology 2 of This Subpart
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart NN of Part 98.../Unit) 1 Natural Gas Mscf 0.0544 Propane Barrel 0.241 Normal butane Barrel 0.281 Ethane Barrel 0.170...
40 CFR Table Nn-1 to Subpart Nn of... - Default Values for Calculation Methodology 1 of This Subpart
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids Pt. 98, Subpt. NN, Table NN-1 Table NN-1 to Subpart NN of Part 98... CO2emission factor (kg CO2/MMBtu) Natural Gas 1.026 MMBtu/Mscf 53.06 Propane 3.84 MMBtu/bbl 62.87 Normal...
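Taken together, the § 98.406/98.403 snippets above give the supplier calculation (annual CO2 in metric tons is the sum over products of volume times HHV times emission factor), and Tables NN-1 and NN-2 supply the default factors. A minimal Python sketch, assuming the 2014 CFR default factors quoted above and illustrative names:

```python
# Hedged sketch of the supplier CO2 calculation referenced in the
# 40 CFR 98.403 snippets above; names and structure are illustrative.
DEFAULTS = {
    # product: (default HHV, default EF in kg CO2/MMBtu), per Table NN-1
    "natural_gas": (1.026, 53.06),  # HHV in MMBtu/Mscf
    "propane":     (3.84,  62.87),  # HHV in MMBtu/bbl
}

def annual_co2_metric_tons(volumes):
    """volumes maps product -> annual volume (Mscf for gas, bbl for NGLs)."""
    kg = sum(v * DEFAULTS[p][0] * DEFAULTS[p][1] for p, v in volumes.items())
    return kg / 1000.0  # kg CO2 -> metric tons

# One Mscf of natural gas reproduces Table NN-2's default per-unit value:
print(annual_co2_metric_tons({"natural_gas": 1}))  # ~0.0544 MT CO2
```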
Nishi, N; Ishikawa, R; Inoue, H; Nishikawa, M; Yoneya, T; Kakeda, M; Tsumura, H; Ohashi, H; Mori, K J
1997-04-01
When Lin-CD34+CD38- cells from normal human cord blood were cocultured with MS-5, colony forming cells were maintained for over 8 weeks. Preventing contact between MS-5 and Lin-CD34+CD38- cells by using a membrane filter had a negligible effect on this activity, indicating that the activity of MS-5 on human primitive hematopoietic cells may be due to soluble factor(s) secreted from MS-5. We tried to purify this activity by a [3H]TdR incorporation assay. The activity was found in a 150 kD fraction and was neutralized with anti-mSCF (stem cell factor) antibody. Another 20-30 kD fraction synergized with mSCF to stimulate the growth of Lin-CD34+CD38- cells but failed to do so alone. This fraction supported the growth of the G-CSF (granulocyte-colony stimulating factor)-dependent cell line FD/GR3, an FDC-P2 line transfected with mG-CSF receptor cDNA. This synergy was canceled in the presence of soluble mG-CSF receptor. Addition of anti-mSCF antibody and soluble mG-CSF receptor to the culture completely abrogated the activity of MS-5 culture supernatant. These results indicate that the activity of MS-5 on Lin-CD34+CD38- cells is due to the synergistic effect of mSCF and mG-CSF.
Chung, Brile; Min, Dullei; Joo, Lukas W; Krampf, Mark R; Huang, Jing; Yang, Yujun; Shashidhar, Sumana; Brown, Janice; Dudl, Eric P; Weinberg, Kenneth I
2011-01-01
The decreased ability of the thymus to generate T cells after bone marrow transplantation (BMT) is a clinically significant problem. Interleukin (IL)-7 and stem cell factor (SCF) induce proliferation, differentiation, and survival of thymocytes. Although previous studies have shown that administration of recombinant human IL-7 (rhIL-7) after murine and human BMT improves thymopoiesis and immune function, whether administration of SCF exerts similar effects is unclear. To evaluate independent or combinatorial effects of IL-7 and SCF in post-BMT thymopoiesis, bone marrow (BM)-derived mesenchymal stem cells transduced ex vivo with the rhIL-7 or murine SCF (mSCF) genes were cotransplanted with T cell-depleted BM cells into lethally irradiated mice. Although rhIL-7 and mSCF each improved immune reconstitution, the combination treatment had a significantly greater effect than either cytokine alone. Moreover, the combination treatment significantly increased donor-derived common lymphoid progenitors (CLPs) in BM, suggesting that transplanted CLPs expand more rapidly in response to IL-7 and SCF and may promote immune reconstitution. Our findings demonstrate that IL-7 and SCF might be therapeutically useful for enhancing de novo T cell development. Furthermore, combination therapy may allow the administration of lower doses of IL-7, thereby decreasing the likelihood of IL-7-mediated expansion of mature T cells. © 2011. Published by Elsevier Inc.
History of Chandra X-Ray Observatory
1997-12-16
This is a photograph of the Chandra X-Ray Observatory (CXO), formerly the Advanced X-Ray Astrophysics Facility (AXAF), High Resolution Mirror Assembly (HRMA) integration at the X-Ray Calibration Facility (XRCF) at the Marshall Space Flight Center (MSFC). The AXAF was renamed CXO in 1999. The CXO is the most sophisticated and powerful x-ray telescope ever built. It observes x-rays from high-energy regions of the universe, such as hot gas in the remnants of exploded stars. The HRMA, the heart of the telescope system, is contained in the cylindrical "telescope" portion of the observatory. Since high-energy x-rays would penetrate a normal mirror, special cylindrical mirrors were created. The two sets of four nested mirrors resemble tubes within tubes. Incoming x-rays graze off the highly polished mirror surface and are funneled to the instrument section for detection and study. MSFC's XRCF is the world's largest, most advanced laboratory for simulating x-ray emissions from distant celestial objects. It produces a space-like environment in which components related to x-ray telescope imaging are tested and the quality of their performance in space is predicted. TRW, Inc. was the prime contractor for the development of the CXO, and NASA's MSFC was responsible for its project management. The Smithsonian Astrophysical Observatory controls science and flight operations of the CXO for NASA from Cambridge, Massachusetts. The CXO was launched July 22, 1999 aboard the Space Shuttle Columbia (STS-93).
Controlling Proton Delivery through Catalyst Structural Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardenas, Allan Jay P.; Ginovska, Bojana; Kumar, Neeraj
The fastest synthetic molecular catalysts for production and oxidation of H2 emulate components of the active site of natural hydrogenases. The role of controlled structural dynamics is recognized as a critical component in the catalytic performance of many enzymes, including hydrogenases, but is largely neglected in the design of synthetic molecular catalysts. In this work, the impact of controlling structural dynamics on the rate of production of H2 was studied for a series of [Ni(PPh2NC6H4-R2)2]2+ catalysts including R = n-hexyl, n-decyl, n-tetradecyl, n-octadecyl, phenyl, or cyclohexyl. A strong correlation was observed between the ligand structural dynamics and the rates of electrocatalytic hydrogen production in acetonitrile, acetonitrile-water, and protic ionic liquid-water mixtures. Specifically, the turnover frequencies correlate inversely with the rates of ring inversion of the amine-containing ligand, as this dynamic process dictates the positioning of the proton relay in the second coordination sphere and therefore governs protonation at either catalytically productive or non-productive sites. This study demonstrates that the dynamic processes involved in proton delivery can be controlled through modifications of the outer coordination sphere of the catalyst, similar to the role of the protein architecture in many enzymes. The present work provides new mechanistic insight into the large rate enhancements observed in aqueous protic ionic liquid media for the [Ni(PPh2NR2)]2+ family of catalysts. The incorporation of controlled structural dynamics as a design parameter to modulate proton delivery in molecular catalysts has enabled H2 production rates that are up to three orders of magnitude faster than the [Ni(PPh2NPh2)]2+ complex. The observed turnover frequencies are up to 10^6 s^-1 in acetonitrile-water, and over 10^7 s^-1 in protic ionic liquid-water mixtures, with a minimal increase in overpotential. This material is based upon work supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, and was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility located at Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for DOE.
Choline Alleviates Parenteral Nutrition-Associated Duodenal Motility Disorder in Infant Rats.
Zhu, Jie; Wu, Yang; Guo, Yonggao; Tang, Qingya; Lu, Ting; Cai, Wei; Huang, Haiyan
2016-09-01
Parenteral nutrition (PN) has been found to influence duodenal motility in animals. Choline is an essential nutrient, and its deficiency is related to PN-associated organ diseases. Therefore, this study aimed to investigate the role of choline supplementation in an infant rat model of PN-associated duodenal motility disorder. Three-week-old Sprague-Dawley male rats were fed chow and water (controls), PN solution (PN), or PN plus intravenous choline (600 mg/kg) (PN + choline). Rats underwent jugular vein cannulation for infusion of PN solution or 0.9% saline (controls) for 7 days. Duodenal oxidative stress status, concentrations of plasma choline, phosphocholine, and betaine, and serum tumor necrosis factor (TNF)-α were assayed. The messenger RNA (mRNA) and protein expression of c-Kit proto-oncogene protein (c-Kit) and membrane-bound stem cell factor (mSCF), together with the electrophysiological features of slow waves in the duodenum, were also evaluated. Rats on PN showed increased reactive oxygen species; decreased total antioxidant capacity in the duodenum; reduced plasma choline, phosphocholine, and betaine; and enhanced serum TNF-α concentrations, all of which were reversed by choline intervention. In addition, PN reduced the mRNA and protein expression of mSCF and c-Kit, which was reversed under choline administration. Moreover, choline attenuated the PN-induced depolarization of the resting membrane potential and the decline in the frequency and amplitude of slow waves in duodenal smooth muscle of infant rats. The addition of choline to PN may alleviate the progression of duodenal motor disorder through protecting smooth muscle cells from injury, promoting mSCF/c-Kit signaling, and attenuating impairment of interstitial cells of Cajal in the duodenum during PN feeding. © 2015 American Society for Parenteral and Enteral Nutrition.
Li, Hai; Chen, Yan; Liu, Shi; Hou, Xiao-Hua
2016-06-21
To investigate the effects of different parameters of gastric electrical stimulation (GES) on interstitial cells of Cajal (ICCs) and changes in the insulin-like growth factor 1 (IGF-1) signal pathway in streptozotocin-induced diabetic rats. Male rats were randomized into control, diabetic (DM), diabetic with sham GES (DM + SGES), diabetic with GES1 (5.5 cpm, 100 ms, 4 mA) (DM + GES1), diabetic with GES2 (5.5 cpm, 300 ms, 4 mA) (DM + GES2) and diabetic with GES3 (5.5 cpm, 550 ms, 2 mA) (DM + GES3) groups. The expression levels of c-kit, M-SCF and IGF-1 receptors were evaluated in the gastric antrum using Western blot analysis. The distribution of ICCs was observed using immunolabeling for c-kit, while smooth muscle cells and IGF-1 receptors were identified using α-SMA and IGF-1R antibodies. Serum level of IGF-1 was tested using enzyme-linked immunosorbent assay. Gastric emptying was delayed in the DM group but improved in all GES groups, especially in the GES2 group. The expression levels of c-kit, M-SCF and IGF-1R were decreased in the DM group but increased in all GES groups. More ICCs (c-kit(+)) and smooth muscle cells (α-SMA(+)/IGF-1R(+)) were observed in all GES groups than in the DM group. The average level of IGF-1 in the DM group was markedly decreased, but it was up-regulated in all GES groups, especially in the GES2 group. The results suggest that long-pulse GES promotes the regeneration of ICCs. The IGF-1 signaling pathway might be involved in the mechanism underlying this process, which results in improved gastric emptying.
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility at NREL provides advanced computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System, to advance energy technologies.
Computer-Aided Facilities Management Systems (CAFM).
ERIC Educational Resources Information Center
Cyros, Kreon L.
Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…
Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities
ERIC Educational Resources Information Center
Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David
2005-01-01
Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…
Brief Survey of TSC Computing Facilities
DOT National Transportation Integrated Search
1972-05-01
The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...
Closely Spaced Independent Parallel Runway Simulation.
1984-10-01
facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
Development and applications of nondestructive evaluation at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Whitaker, Ann F.
1990-01-01
A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.
Central Computational Facility CCF communications subsystem options
NASA Technical Reports Server (NTRS)
Hennigan, K. B.
1979-01-01
A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.
Academic Computing Facilities and Services in Higher Education--A Survey.
ERIC Educational Resources Information Center
Warlick, Charles H.
1986-01-01
Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…
The grand challenge of managing the petascale facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R. J.; Mathematics and Computer Science
2007-02-28
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.
Specialized computer architectures for computational aerodynamics
NASA Technical Reports Server (NTRS)
Stevenson, D. K.
1978-01-01
In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.
ERIC Educational Resources Information Center
Siu, Kin Wai Michael; Lam, Mei Seung
2012-01-01
Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…
An Alternative Method for Long-Term Culture of Chicken Embryonic Stem Cell In Vitro.
Zhang, Li; Wu, Yenan; Li, Xiang; Wei, Shao; Xing, Yiming; Lian, Zhengxing; Han, Hongbing
2018-01-01
Chicken embryonic stem cells (cESCs) obtained from stage X embryos provide a novel model for the study of avian embryonic development. A way to maintain cESCs for a long period in vitro has remained unexplored. We found that the cESCs showed stem cell-like properties in vitro over the long term with the support of a DF-1 feeder and basic culture medium supplemented with human basic fibroblast growth factor (hbFGF), mouse stem cell factor (mSCF), and human leukemia inhibitory factor (hLIF). During the long culture period, the cESCs showed typical ES cell morphology and expressed primitive stem cell markers with a relatively stable proliferation rate and high telomerase activity. These cells also exhibited the capability to differentiate into cardiac myocytes, smooth muscle cells, neural cells, osteoblasts, and adipocytes in vitro. Chimera chickens were produced by cESCs cultured for 25 passages with this new culture system. The experiments showed that DF-1 was the optimal feeder and hbFGF was an important factor for maintaining the pluripotency of cESCs in vitro.
Tuan, Rocky S; Lee, Francis Young-In; T Konttinen, Yrjö; Wilkinson, J Mark; Smith, Robert Lane
2008-01-01
New clinical and basic science data on the cellular and molecular mechanisms by which wear particles stimulate the host inflammatory response have provided deeper insight into the pathophysiology of periprosthetic bone loss. Interactions among wear particles, macrophages, osteoblasts, bone marrow-derived mesenchymal stem cells, fibroblasts, endothelial cells, and T cells contribute to the production of pro-inflammatory and pro-osteoclastogenic cytokines such as TNF-alpha, RANKL, M-CSF, PGE2, IL-1, IL-6, and IL-8. These cytokines not only promote osteoclastogenesis but interfere with osteogenesis led by osteoprogenitor cells. Recent studies indicate that genetic variations in TNF-alpha, IL-1, and FRZB can result in subtle changes in gene function, giving rise to altered susceptibility or severity for periprosthetic inflammation and bone loss. Continuing research on the biologic effects and mechanisms of action of wear particles will provide a rational basis for the development of novel and effective ways of diagnosis, prevention, and treatment of periprosthetic inflammatory bone loss.
High-Performance Computing Data Center | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing renewable energy and energy efficiency technologies.
Flying a College on the Computer. The Use of the Computer in Planning Buildings.
ERIC Educational Resources Information Center
Saint Louis Community Coll., MO.
Upon establishment of the St. Louis Junior College District, it was decided to make use of the computer simulation facilities of a nearby aero-space contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
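The projection rule stated in the abstract reduces to a single sum; a minimal Python sketch (hypothetical names, following the wording above):

```python
def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    """Future facility conditions for one time period, per the abstract:
    maintenance cost + modernization factor + backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor
```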
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period improvements in ground test facilities will progress by application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyle A. Johnson Jr.
2002-09-01
Coalbed methane (CBM) is currently the hottest area of energy development in the Rocky Mountain area. The Powder River Basin (PRB) is the largest CBM area in Wyoming and has attracted the majority of the attention because of its high permeability and relatively shallow depth. Other Wyoming coal regions are also being targeted for development, but most of these areas have lower permeability and deeper coal seams. This project consists of the development of a CBM stimulation system for deep coal resources and involves three work areas: (1) Well Placement, (2) Well Stimulation, and (3) Production Monitoring and Evaluation. The focus of this project is the Washakie Basin. Timberline Energy, Inc., the cosponsor, has a project area in southern Carbon County, Wyoming, and northern Moffat County, Colorado. The target coal is found near the top of the lower Fort Union formation. The well for this project, Evans No. 1, was drilled to a depth of 2,700 ft. Three coal seams were encountered, with sandstone and some interbedded shale between seams. Well logs indicated that the coal seams and the sandstone contained gas. For the testing, the upper seam at 2,000 ft was selected. The well, drilled and completed for this project, produced very little water and only occasional burps of methane. To enhance the well, a mild severity fracture was conducted to fracture the coal seam and not the adjacent sandstone. Fracturing data indicated a fracture half-length of 34 ft, a coal permeability of 0.2226 md, and a permeability of 15.3 md. Following fracturing, the gas production rate stabilized at 10 Mscf/day with water production of 18 bpd. The Western Research Institute (WRI) CBM model was used to design a 14-day stimulation cycle followed by a 30-day production period. A maximum injection pressure of 1,200 psig was selected to remain well below the fracture pressure. Model predictions were 20 Mscf/day of air injection for 14 days, a one-day shut-in, then flowback. The predicted flowback was a four-fold increase over the prestimulation rate, with production essentially returning to prestimulation rates after 30 days. The physical stimulation was conducted over a 14-day period. Problems with the stimulation injection resulted in a coal bed fire that was quickly quenched when production was resumed. The poststimulation, stabilized production was three to four times the prestimulation rate. The methane content was approximately 45% after one day and increased to 65% at the end of 30 days. The gas production rate was still two and one-half times the prestimulation rate at the end of the 30-day test period. The field results were a good match to the numerical simulator predictions. The physical stimulation did increase the production, but did not produce a commercial rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyle A. Johnson Jr.
2002-03-01
Coalbed methane (CBM) is currently the hottest area of energy development in the Rocky Mountain area. The Powder River Basin (PRB) is the largest CBM area in Wyoming and has attracted the majority of the attention because of its high permeability and relatively shallow depth. Other Wyoming coal regions are also being targeted for development, but most of these areas have lower permeability and deeper coal seams. This project consists of the development of a CBM stimulation system for deep coal resources and involves three work areas: (1) Well Placement, (2) Well Stimulation, and (3) Production Monitoring and Evaluation. The focus of this project is the Washakie Basin. Timberline Energy, Inc., the cosponsor, has a project area in southern Carbon County, Wyoming, and northern Moffat County, Colorado. The target coal is found near the top of the lower Fort Union formation. The well for this project, Evans No. 1, was drilled to a depth of 2,700 ft. Three coal seams were encountered, with sandstone and some interbedded shale between seams. Well logs indicated that the coal seams and the sandstone contained gas. For the testing, the upper seam at 2,000 ft was selected. The well, drilled and completed for this project, produced very little water and only occasional burps of methane. To enhance the well, a mild severity fracture was conducted to fracture the coal seam and not the adjacent sandstone. Fracturing data indicated a fracture half-length of 34 ft, a coal permeability of 0.2226 md, and a permeability of 15.3 md. Following fracturing, the gas production rate stabilized at 10 Mscf/day with water production of 18 bpd. The Western Research Institute (WRI) CBM model was used to design a 14-day stimulation cycle followed by a 30-day production period. A maximum injection pressure of 1,200 psig was selected to remain well below the fracture pressure. Model predictions were 20 Mscf/day of air injection for 14 days, a one-day shut-in, then flowback. The predicted flowback was a four-fold increase over the prestimulation rate, with production essentially returning to prestimulation rates after 30 days. The physical stimulation was conducted over a 14-day period. Problems with the stimulation injection resulted in a coal bed fire that was quickly quenched when production was resumed. The poststimulation, stabilized production was three to four times the prestimulation rate. The methane content was approximately 45% after one day and increased to 65% at the end of 30 days. The gas production rate was still two and one-half times the prestimulation rate at the end of the 30-day test period. The field results were a good match to the numerical simulator predictions. The physical stimulation did increase the production, but did not produce a commercial rate.
Instrument Systems Analysis and Verification Facility (ISAVF) users guide
NASA Technical Reports Server (NTRS)
Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.
1985-01-01
The ISAVF facility is primarily an interconnected system of computers, special purpose real-time hardware, and associated generalized software systems which will permit instrument system analysts, design engineers, and instrument scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.
ERIC Educational Resources Information Center
RENO, MARTIN; AND OTHERS
A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…
Facilities | Integrated Energy Solutions | NREL
High-performance computing facilities at NREL, housed in the High-Performance Computing Data Center, provide the high-speed computing needed to develop the strategies that optimize our entire energy system.
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
Facilities Management via Computer: Information at Your Fingertips.
ERIC Educational Resources Information Center
Hensey, Susan
1996-01-01
Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
2014 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
2015 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewmark Communications; Sartor, Dale; Wilson, Mark
2010-07-01
High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.
Computer Operating System Maintenance.
1982-06-01
The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 database (CRFMGMT) which stores and permits access…
On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Alunni, Antonella I.
2012-01-01
This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle and test box, and of the flowfield over the test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.
NASA Technical Reports Server (NTRS)
Redhed, D. D.
1978-01-01
Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.
2016 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Jim; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.
2009-07-01
The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary over how to manage today's data complexity and size, as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.
Neilson, Christine J
2010-01-01
The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.
A large-scale computer facility for computational aerodynamics
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Ballhaus, W. F., Jr.
1985-01-01
As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.
NIF ICCS network design and loading analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tietbohl, G; Bryant, R
The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
The UK Human Genome Mapping Project online computing service.
Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W
1992-04-01
This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.
Computational Science at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Romero, Nichols
2014-03-01
The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron; Shank, James; Ernst, Michael
Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yelick, Kathy
2012-02-02
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
Ethics and the 7 'P's' of computer use policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, T.J.; Voss, R.B.
1994-12-31
A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven-phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.
Expanding the Scope of High-Performance Computing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uram, Thomas D.; Papka, Michael E.
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
Asah, Flora
2013-04-01
This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.
MIP models for connected facility location: A theoretical and computational study
Gollowitzer, Stefan; Ljubić, Ivana
2011-01-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree; the sum of Steiner tree construction costs, facility opening costs, and assignment costs is to be minimized. We model ConFL using seven compact MIP formulations and three formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances, for which the obtained gaps are below 0.6%. PMID:25009366
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today's world requires investments not only in the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet
1998-01-01
The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, 10 are currently scheduled for refurbishment and/or replacement as part of a 5-year implementation. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test, and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement, while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Minor anomalies have occurred and were resolved with minimal impact to testing and operations. The remaining work will be performed over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities, and the implementation of a database server to allow efficient test management and data analysis.
A Bioinformatics Facility for NASA
NASA Technical Reports Server (NTRS)
Schweighofer, Karl; Pohorille, Andrew
2006-01-01
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Designing Facilities for Collaborative Operations
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana
2003-01-01
A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: at best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers, and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication via Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
Planning and Designing School Computer Facilities. Interim Report.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Finance and Administration Div.
This publication provides suggestions and considerations that may be useful for school jurisdictions developing facilities for computers in schools. An interim report for both use and review, it is intended to assist school system planners in clarifying the specifications needed by the architects, other design consultants, and purchasers involved.…
Molecular Modeling and Computational Chemistry at Humboldt State University.
ERIC Educational Resources Information Center
Paselk, Richard A.; Zoellner, Robert W.
2002-01-01
Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, R.
1997-12-31
This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of that agency's having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia state agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve the goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating the use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing and evaluating solar water heating systems in commercial and government facilities.
Nelson, Philip H.; Hoffman, Eric L.
2009-01-01
Gas, oil, and water production data were compiled from 38 wells with production commencing during the 1980s from the Wasatch Formation in the Greater Natural Buttes field, Uinta Basin, Utah. This study is one of a series of reports examining fluid production from tight gas reservoirs, which are characterized by low permeability, low porosity, and the presence of clay minerals in pore space. The general ranges of production rates after 2 years are 100-1,000 Mscf/day for gas, 0.35-3.4 barrels per day for oil, and less than 1 barrel per day for water. The water:gas ratio ranges from 0.1 to 10 barrels per million standard cubic feet, indicating that free water is produced along with water dissolved in gas in the reservoir. The oil:gas ratios are typical of a wet gas system. Neither gas nor water rates show dependence upon the number of perforations, although for low gas-flow rates there is some dependence upon the number of sandstone intervals that were perforated. Over a 5-year time span, gas and water production may either increase or decrease in a given well, but the changes in production rate do not exhibit any dependence upon well proximity or well location.
Ray, N J; Hannigan, A
1999-05-01
As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students (years 1-5) at a University Dental School in Ireland, in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used either word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major use of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses in secondary level (age 13-18 years) and/or tertiary level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.
Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing
2010-04-01
To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA, presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may positively impact on adolescents' level of physical activity. However, having computer/Internet may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting the access to computer-mediated communication as an important intervention component.
Description and operational status of the National Transonic Facility computer complex
NASA Technical Reports Server (NTRS)
Boyles, G. B., Jr.
1986-01-01
This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities of the research data acquisition and reduction systems are discussed, along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, along with a discussion of how the computer complex monitors the tunnel control processes and provides the tunnel operators with information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.
The OSG open facility: A sharing ecosystem
Jayatilaka, B.; Levshina, T.; Rynge, M.; ...
2015-12-23
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of distributed high-throughput computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.
ERIC Educational Resources Information Center
Teicholz, Eric
1997-01-01
Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Richard P.
2017-07-01
Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.
Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert
2015-07-28
Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. Whether a single large facility is preferred over multiple facilities of smaller capacities varies with the relative priorities given to cost and to associated risks, such as environmental risk, health risk, or the risk perceived by society. Currently, management of waste streams such as computer waste relies on rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to address uncertainty in waste generation quantities while simultaneously analyzing the tradeoffs between cost and associated risks. The present study addresses these issues in a multi-time-step, multi-objective decision-support model that balances the objectives of cost, environmental risk, socially perceived risk, and health risk while selecting the optimum configuration (locations and capacities) of existing and proposed facilities.
Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities
NASA Technical Reports Server (NTRS)
Volkman, Gregrey T.; Dunn, Steven C.
2013-01-01
The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty-plus years, computational methods have replaced some physical experiments, and this trend is continuing. The result has been decreased utilization of ground test capabilities, which, along with market forces, industry consolidation, and other factors, has led to the stand down and often the closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and maintain, so once a facility is constructed and operational it may be desirable to retain access to its capabilities even if they are not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand the facility down into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies for mothballing a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.
Green Supercomputing at Argonne
Beckman, Pete
2018-02-07
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/
Harrington, Susan S.; Walker, Bonnie L.
2010-01-01
Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) fundamental cybersecurity research and development challenges, strategies, and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.
ERIC Educational Resources Information Center
Lach, Ivan J.
The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…
Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.
Brodish, D L
1998-01-01
The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.
A large high vacuum, high pumping speed space simulation chamber for electric propulsion
NASA Technical Reports Server (NTRS)
Grisnik, Stanley P.; Parkes, James E.
1994-01-01
Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.
Computer-Assisted School Facility Planning with ONPASS.
ERIC Educational Resources Information Center
Urban Decision Systems, Inc., Los Angeles, CA.
The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…
ERIC Educational Resources Information Center
Bender, Evelyn
The American Library Association's Carroll Preston Baber Research Award supported this project on the use, impact and feasibility of a computer assisted writing facility located in the library of Stetson Middle School in Philadelphia, an inner city school with a population of minority, "at risk" students. The writing facility consisted…
Sigma 2 Graphic Display Software Program Description
NASA Technical Reports Server (NTRS)
Johnson, B. T.
1973-01-01
A general purpose, user oriented graphic support package was implemented. A comprehensive description of the two software components comprising this package is given: Display Librarian and Display Controller. These programs have been implemented in FORTRAN on the XDS Sigma 2 Computer Facility. This facility consists of an XDS Sigma 2 general purpose computer coupled to a Computek Display Terminal.
Progressive fracture of fiber composites
NASA Technical Reports Server (NTRS)
Irvin, T. B.; Ginty, C. A.
1983-01-01
Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized, including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.
NASA Technical Reports Server (NTRS)
Montegani, F. J.
1974-01-01
Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.
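As one concrete example of the kind of reduction such noise-data programs perform (an assumption, since the report's own FORTRAN programs are not reproduced here), one-third-octave band levels are commonly combined into an overall level by energy summation:

```python
# Illustrative arithmetic for one common reduction of one-third-octave band
# data: combining band sound pressure levels into an overall level by
# summing on an energy basis. The band levels below are invented.
import math

band_levels_db = [78.0, 81.5, 84.0, 86.2, 83.1, 79.4]  # SPL per 1/3-octave band

overall = 10 * math.log10(sum(10 ** (L / 10) for L in band_levels_db))
print(f"Overall SPL: {overall:.1f} dB")
```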
Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC
NASA Technical Reports Server (NTRS)
Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet
1999-01-01
The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC
NASA Technical Reports Server (NTRS)
Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet
1998-01-01
The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
Nonequilibrium Supersonic Freestream Studied Using Coherent Anti-Stokes Raman Spectroscopy
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Cantu, Luca M.; Gallo, Emanuela C. A.; Baurle, Rob; Danehy, Paul M.; Rockwell, Robert; Goyne, Christopher; McDaniel, Jim
2015-01-01
Measurements were conducted at the University of Virginia Supersonic Combustion Facility of the flow in a constant-area duct downstream of a Mach 2 nozzle. The airflow was heated to approximately 1200 K in the facility heater upstream of the nozzle. Dual-pump coherent anti-Stokes Raman spectroscopy was used to measure the rotational and vibrational temperatures of N2 and O2 at two planes in the duct. The expectation was that the vibrational temperature would be in equilibrium, because most scramjet facilities are vitiated air facilities and are in vibrational equilibrium. However, with a flow of clean air, the vibrational temperature of N2 along a streamline remains approximately constant between the measurement plane and the facility heater, the vibrational temperature of O2 in the duct is about 1000 K, and the rotational temperature is consistent with the isentropic flow. The measurements of N2 vibrational temperature enabled cross-stream nonuniformities in the temperature exiting the facility heater to be documented. The measurements are in agreement with computational fluid dynamics models employing separate lumped vibrational and translational/rotational temperatures. Measurements and computations are also reported for a few percent steam addition to the air. The effect of the steam is to bring the flow to thermal equilibrium, also in agreement with the computational fluid dynamics.
JESS facility modification and environmental/power plans
NASA Technical Reports Server (NTRS)
Bordeaux, T. A.
1984-01-01
Preliminary plans for facility modifications and environmental/power systems for the JESS (Joint Exercise Support System) computer laboratory and Freedom Hall are presented. Blueprints are provided for each of the facilities and an estimate of the air conditioning requirements is given.
Ergonomic and Anthropometric Considerations of the Use of Computers in Schools by Adolescents
ERIC Educational Resources Information Center
Jermolajew, Anna M.; Newhouse, C. Paul
2003-01-01
Over the past decade there has been an explosion in the provision of computing facilities in schools for student use. However, there is concern that the development of these facilities has often given little regard to the ergonomics of the design for use by children, particularly adolescents. This paper reports on a study that investigated the…
47 CFR 73.208 - Reference points and distance computations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...
47 CFR 73.208 - Reference points and distance computations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...
117. Back side technical facilities S.R. radar transmitter & computer ...
117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
122. Back side technical facilities S.R. radar transmitter & computer ...
122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
118. Back side technical facilities S.R. radar transmitter & computer ...
118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
121. Back side technical facilities S.R. radar transmitter & computer ...
121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Making Cloud Computing Available For Researchers and Innovators (Invited)
NASA Astrophysics Data System (ADS)
Winsor, R.
2010-12-01
High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing on the other hand, and particularly commercial clouds, draw flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.
The development of the Canadian Mobile Servicing System Kinematic Simulation Facility
NASA Technical Reports Server (NTRS)
Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.
1989-01-01
Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL
High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system…
Public computing options for individuals with cognitive impairments: survey outcomes.
Fox, Lynn Elizabeth; Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Prideaux, Jason
2009-09-01
To examine availability and accessibility of public computing for individuals with cognitive impairment (CI) who reside in the USA. A telephone survey was administered as a semi-structured interview to 145 informants representing seven types of public facilities across three geographically distinct regions using a snowball sampling technique. An Internet search of wireless (Wi-Fi) hotspots supplemented the survey. Survey results showed the availability of public computer terminals and Internet hotspots was greatest in the urban sample, followed by the mid-sized and rural cities. Across seven facility types surveyed, libraries had the highest percentage of access barriers, including complex queue procedures, login and password requirements, and limited technical support. University assistive technology centres and facilities with a restricted user policy, such as brain injury centres, had the lowest incidence of access barriers. Findings suggest optimal outcomes for people with CI will result from a careful match of technology and the user that takes into account potential barriers and opportunities to computing in an individual's preferred public environments. Trends in public computing, including the emergence of widespread Wi-Fi and limited access to terminals that permit auto-launch applications, should guide development of technology designed for use in public computing environments.
A Benders based rolling horizon algorithm for a dynamic facility location problem
Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.
2016-06-28
This study addresses the well-known capacitated dynamic facility location problem (DFLP), which satisfies customer demand at minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
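For readers unfamiliar with the rolling horizon idea, the skeleton below sketches it in Python: solve a subproblem over a short window of periods, freeze the leading decisions, and roll forward. The subproblem solver is a placeholder standing in for the paper's accelerated Benders decomposition; all names and values are illustrative.

```python
# Skeleton of a rolling horizon loop, assuming a subproblem solver exists:
# cut the planning periods into overlapping windows, solve each window's
# DFLP subproblem (a placeholder here), freeze the earliest periods'
# decisions, and roll the window forward. Purely illustrative pseudologic.
def solve_subproblem(window, frozen):
    """Placeholder for the Benders-decomposed DFLP over one window.
    Returns {period: set(open_facilities)} for the window's periods."""
    return {t: frozen.get(t - 1, {"site_A"}) for t in window}

def rolling_horizon(n_periods=12, window_len=4, step=2):
    frozen = {}                                # period -> committed decisions
    t = 1
    while t <= n_periods:
        window = list(range(t, min(t + window_len, n_periods + 1)))
        plan = solve_subproblem(window, frozen)
        for u in window[:step]:                # freeze only the leading periods
            frozen[u] = plan[u]
        t += step
    return frozen

print(rolling_horizon())
```

The overlap between windows is what lets early commitments reflect later demand; the paper's contribution is accelerating the per-window solve, which this placeholder glosses over.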
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
Key Issues in Instructional Computer Graphics.
ERIC Educational Resources Information Center
Wozny, Michael J.
1981-01-01
Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…
EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL - 2002
To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rates of solid and liquid waste generation and air emissions. This progr...
Telecommunications and Data Communication in Korea.
ERIC Educational Resources Information Center
Ahn, Moon-Suk
All facilities of the Ministry of Communications of Korea, which monopolizes telecommunications services in the country, are listed and described. Both domestic facilities, including long-distance telephone and telegraph circuits, and international connections are included. Computer facilities are also listed. The nation's regulatory policies are…
Overview of the NASA Dryden Flight Research Facility aeronautical flight projects
NASA Technical Reports Server (NTRS)
Meyer, Robert R., Jr.
1992-01-01
Several principal aerodynamics flight projects of the NASA Dryden Flight Research Facility are discussed. Key vehicle technology areas from a wide range of flight vehicles are highlighted. These areas include flight research data obtained for ground facility and computation correlation, applied research in areas not well suited to ground facilities (wind tunnels), and concept demonstration.
Sea/Lake Water Air Conditioning at Naval Facilities.
1980-05-01
Contents include: economics at two facilities; facility descriptions; and computer models of an operational test at Naval Security Group Activity (NSGA) Winter Harbor, Me., and the economics of Navywide application. In FY76 an assessment of the economics of Navywide application of sea/lake water AC indicated that cost and energy savings at the sites of some Naval facilities are possible, depending
Ogata, Y; Nishizawa, K
1995-10-01
An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter, and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an RS-232C interface. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal. This system was successfully applied to routine surveys for contamination in our facility.
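A modern analogue of that acquisition loop might look like the Python sketch below, which assumes a counter emitting one comma-separated result line per smear over the RS-232C link; the port name, line format, counting efficiency, and smear area are all hypothetical (the original software was written in Pascal).

```python
# Minimal sketch of the survey loop, assuming a counter that reports one
# "sample_id,counts,count_time_s" line per measurement over RS-232C.
# Port name, protocol, efficiency, and smear area are invented.
import serial  # pyserial

EFFICIENCY = 0.30          # assumed counting efficiency (counts per decay)
SMEAR_AREA_CM2 = 100.0     # assumed wiped area per smear

def surface_density_bq_per_cm2(counts, count_time_s):
    """Convert raw counts to an activity surface density in Bq/cm^2."""
    return counts / count_time_s / EFFICIENCY / SMEAR_AREA_CM2

with serial.Serial("/dev/ttyS0", 9600, timeout=5) as port:
    for _ in range(10):  # read ten smear results
        line = port.readline().decode("ascii").strip()
        if not line:
            continue
        sample_id, counts, secs = line.split(",")
        density = surface_density_bq_per_cm2(float(counts), float(secs))
        print(f"{sample_id}: {density:.4f} Bq/cm^2")
```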
Icing simulation: A survey of computer models and experimental facilities
NASA Technical Reports Server (NTRS)
Potapczuk, M. G.; Reinmann, J. J.
1991-01-01
A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.
Icing simulation: A survey of computer models and experimental facilities
NASA Technical Reports Server (NTRS)
Potapczuk, M. G.; Reinmann, J. J.
1991-01-01
A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.
Energy consumption and load profiling at major airports. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, J.
1998-12-01
This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.
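On the power factor point, the standard sizing arithmetic for the correction equipment the audit mentions is sketched below; the plant load and power factors are hypothetical.

```python
# Standard sizing arithmetic for power factor correction: the capacitor
# bank kVAR needed to move an inductive load from one power factor to
# another. The load and power factors below are invented.
import math

def correction_kvar(real_power_kw, pf_initial, pf_target):
    """kVAR of capacitance to move a load from pf_initial to pf_target."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return real_power_kw * (math.tan(phi1) - math.tan(phi2))

# e.g., a 2 MW central plant at 0.78 lagging corrected to 0.95
print(f"{correction_kvar(2000, 0.78, 0.95):.0f} kVAR")
```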
ERIC Educational Resources Information Center
WITMER, DAVID R.
Wisconsin State Universities have been using the computer as a management tool to study physical facilities inventories, space utilization, and enrollment and plant projections. Examples are shown graphically and described for different types of analysis, showing the card format, coding systems, and printout. Equations are provided for determining…
Artificial intelligence issues related to automated computing operations
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1989-01-01
Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.
120. Back side technical facilities S.R. radar transmitter & computer ...
120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
119. Back side technical facilities S.R. radar transmitter & computer ...
119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and radiological surveys. The radiological inventory for all the components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: A computer application that is able to estimate the radiological inventory from the radiological measurements or the characterization program has been developed. This computer application includes the statistical functions needed for the estimation of central tendency and variability, e.g. mean, median, variance, confidence intervals, variance coefficients, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
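A minimal sketch of the statistical summaries such an application computes, applied to one invented set of activity measurements, follows; the t-based confidence interval is the standard small-sample formula.

```python
# Central-tendency and variability summaries for one set of activity
# measurements. The values are invented; the 95% confidence interval uses
# the standard Student-t formula for small samples.
import numpy as np
from scipy import stats

measurements = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4])  # e.g., Bq/g

n = measurements.size
mean = measurements.mean()
median = np.median(measurements)
var = measurements.var(ddof=1)                  # sample variance
cv = measurements.std(ddof=1) / mean            # coefficient of variation
sem = stats.sem(measurements)                   # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)

print(f"mean={mean:.3f}  median={median:.3f}  variance={var:.3f}")
print(f"CV={cv:.2%}  95% CI=({ci_low:.3f}, {ci_high:.3f})")
```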
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
The Education Value of Cloud Computing
ERIC Educational Resources Information Center
Katzan, Harry, Jr.
2010-01-01
Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…
Writing Apprehension, Computer Anxiety and Telecomputing: A Pilot Study.
ERIC Educational Resources Information Center
Harris, Judith; Grandgenett, Neal
1992-01-01
A study measured graduate students' writing apprehension and computer anxiety levels before and after using electronic mail, computer conferencing, and remote database searching facilities during an educational technology course. Results indicated postcourse computer anxiety levels were significantly related to usage statistics. Precourse writing…
Los Alamos Plutonium Facility Waste Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.; Montoya, A.; Wieneke, R.
1997-02-01
This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system whose data are stored in a Sybase database and accessed through a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process.
NASA Technical Reports Server (NTRS)
1979-01-01
A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.
Taylor, Michael J; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah
2017-01-01
Recent advances in communication technologies enable the potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes towards, and access to, computer technologies in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a Chest Clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility.
Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.
Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton
2007-09-15
Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples from near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at the electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline of the automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day, with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.
Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah
2015-01-01
Recent advances in communication technologies enable the potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes towards, and access to, computer technologies in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a Chest Clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility. PMID:28239187
Providing security for automated process control systems at hydropower engineering facilities
NASA Astrophysics Data System (ADS)
Vasiliev, Y. S.; Zegzhda, P. D.; Zegzhda, D. P.
2016-12-01
This article suggests the concept of a cyberphysical system to manage computer security of automated process control systems at hydropower engineering facilities. According to the authors, this system consists of a set of information processing tools and computer-controlled physical devices. Examples of cyber attacks on power engineering facilities are provided, and a strategy of improving cybersecurity of hydropower engineering systems is suggested. The architecture of the multilevel protection of the automated process control system (APCS) of power engineering facilities is given, including security systems, control systems, access control, encryption, a secure virtual private network, and subsystems for monitoring and analysis of security events. The distinctive aspect of the approach is consideration of the interrelations and cyber threats arising when SCADA is integrated with the unified enterprise information system.
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).
IMPLEMENTATION OF USEPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL (MFFP2T) - 2003
To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rates of solid and liquid waste generation and air emissions. This progr...
NASA Technical Reports Server (NTRS)
Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.
1971-01-01
The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.
An Electronic Pressure Profile Display system for aeronautic test facilities
NASA Technical Reports Server (NTRS)
Woike, Mark R.
1990-01-01
The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
An electronic pressure profile display system for aeronautic test facilities
NASA Technical Reports Server (NTRS)
Woike, Mark R.
1990-01-01
The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
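The two data-handling steps both abstracts name, converting raw transmitter readings to engineering units and displaying each channel as a bar, can be sketched as follows. The calibration constants and channel values are assumptions, and a terminal bar stands in for the high resolution graphics display.

```python
# Sketch of the display pipeline: linear conversion of raw DPT readings to
# engineering units (psi), then one bar per channel. Full-scale constants
# and readings are invented; the real system drove graphics monitors.
RAW_FULL_SCALE = 4095       # assumed 12-bit reading from the DPT unit
PSI_FULL_SCALE = 50.0       # assumed transducer range in psi

def to_psi(raw):
    """Linear conversion of a raw reading to engineering units (psi)."""
    return raw / RAW_FULL_SCALE * PSI_FULL_SCALE

def bar(psi, width=40):
    filled = int(psi / PSI_FULL_SCALE * width)
    return "#" * filled + "." * (width - filled)

raw_readings = [512, 1024, 2048, 3072, 4095, 300, 2800, 1600]  # 8 of up to 64 channels
for ch, raw in enumerate(raw_readings, start=1):
    psi = to_psi(raw)
    print(f"ch{ch:02d} {psi:6.2f} psi |{bar(psi)}|")
```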
ERIC Educational Resources Information Center
Zamora, Ramon M.
Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…
Race, Wealth, and Solid Waste Facilities in North Carolina
Norton, Jennifer M.; Wing, Steve; Lipscomb, Hester J.; Kaufman, Jay S.; Marshall, Stephen W.; Cravey, Altha J.
2007-01-01
Background Concern has been expressed in North Carolina that solid waste facilities may be disproportionately located in poor communities and in communities of color, that this represents an environmental injustice, and that solid waste facilities negatively impact the health of host communities. Objective Our goal in this study was to conduct a statewide analysis of the location of solid waste facilities in relation to community race and wealth. Methods We used census block groups to obtain racial and economic characteristics, and information on solid waste facilities was abstracted from solid waste facility permit records. We used logistic regression to compute prevalence odds ratios for 2003, and Cox regression to compute hazard ratios of facilities issued permits between 1990 and 2003. Results The adjusted prevalence odds of a solid waste facility was 2.8 times greater in block groups with ≥50% people of color compared with block groups with <10% people of color, and 1.5 times greater in block groups with median house values <$60,000 compared with block groups with median house values ≥$100,000. Among block groups that did not have a previously permitted solid waste facility, the adjusted hazard of a new permitted facility was 2.7 times higher in block groups with ≥50% people of color compared with block groups with <10% people of color. Conclusion Solid waste facilities present numerous public health concerns. In North Carolina solid waste facilities are disproportionately located in communities of color and low wealth. In the absence of action to promote environmental justice, the continued need for new facilities could exacerbate this environmental injustice. PMID:17805426
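For the mechanics of the prevalence odds ratios reported above, a logistic regression yields them as exponentiated coefficients. The sketch below uses a fabricated toy data set purely to show the computation, not to reproduce the study's estimates.

```python
# Sketch of the prevalence odds ratio computation: logistic regression of
# facility presence on block-group covariates, with odds ratios read off
# as exponentiated coefficients. All data points are fabricated.
import numpy as np
import statsmodels.api as sm

# columns: proportion people of color (0-1), median house value ($10k)
X = np.array([[0.60,  5.0], [0.70,  9.0], [0.10, 12.0], [0.55,  6.0],
              [0.25,  6.0], [0.20,  9.0], [0.80, 10.0], [0.15,  5.5]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # solid waste facility present?

X = sm.add_constant(X)                     # intercept term
result = sm.Logit(y, X).fit(disp=False)
odds_ratios = np.exp(result.params)        # OR per unit change in each covariate
print(odds_ratios)
```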
Space technology test facilities at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Rodrigues, Annette T.
1990-01-01
The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and the Space Exploration Test Facilities. The paper discusses selected facilities within each of the five categories and discusses some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.
NASA Technical Reports Server (NTRS)
Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.
1991-01-01
A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.
Computers in Schools: White Boys Only?
ERIC Educational Resources Information Center
Hammett, Roberta F.
1997-01-01
Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)
20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...
20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Launch Site Computer Simulation and its Application to Processes
NASA Technical Reports Server (NTRS)
Sham, Michael D.
1995-01-01
This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio ...
Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Marine Barracks, Intersection of Tower Drive & Morse Street, Makaha, Honolulu County, HI
Logistics in the Computer Lab.
ERIC Educational Resources Information Center
Cowles, Jim
1989-01-01
Discusses ways to provide good computer laboratory facilities for elementary and secondary schools. Topics discussed include establishing the computer lab and selecting hardware; types of software; physical layout of the room; printers; networking possibilities; considerations relating to the physical environment; and scheduling methods. (LRW)
Computer-Aided Engineering Education at the K.U. Leuven.
ERIC Educational Resources Information Center
Snoeys, R.; Gobin, R.
1987-01-01
Describes some recent initiatives and developments in the computer-aided design program in the engineering faculty of the Katholieke Universiteit Leuven (Belgium). Provides a survey of the engineering curriculum, the computer facilities, and the main software packages available. (TW)
76 FR 59803 - Children's Online Privacy Protection Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Influence of Computer-Aided Detection on Performance of Screening Mammography
Fenton, Joshua J.; Taplin, Stephen H.; Carney, Patricia A.; Abraham, Linn; Sickles, Edward A.; D'Orsi, Carl; Berns, Eric A.; Cutter, Gary; Hendrick, R. Edward; Barlow, William E.; Elmore, Joann G.
2011-01-01
Background Computer-aided detection identifies suspicious findings on mammograms to assist radiologists. Since the Food and Drug Administration approved the technology in 1998, it has been disseminated into practice, but its effect on the accuracy of interpretation is unclear. Methods We determined the association between the use of computer-aided detection at mammography facilities and the performance of screening mammography from 1998 through 2002 at 43 facilities in three states. We had complete data for 222,135 women (a total of 429,345 mammograms), including 2351 women who received a diagnosis of breast cancer within 1 year after screening. We calculated the specificity, sensitivity, and positive predictive value of screening mammography with and without computer-aided detection, as well as the rates of biopsy and breast-cancer detection and the overall accuracy, measured as the area under the receiver-operating-characteristic (ROC) curve. Results Seven facilities (16%) implemented computer-aided detection during the study period. Diagnostic specificity decreased from 90.2% before implementation to 87.2% after implementation (P<0.001), the positive predictive value decreased from 4.1% to 3.2% (P = 0.01), and the rate of biopsy increased by 19.7% (P<0.001). The increase in sensitivity from 80.4% before implementation of computer-aided detection to 84.0% after implementation was not significant (P = 0.32). The change in the cancer-detection rate (including invasive breast cancers and ductal carcinomas in situ) was not significant (4.15 cases per 1000 screening mammograms before implementation and 4.20 cases after implementation, P = 0.90). Analyses of data from all 43 facilities showed that the use of computer-aided detection was associated with significantly lower overall accuracy than was nonuse (area under the ROC curve, 0.871 vs. 0.919; P = 0.005). Conclusions The use of computer-aided detection is associated with reduced accuracy of interpretation of screening mammograms. The increased rate of biopsy with the use of computer-aided detection is not clearly associated with improved detection of invasive breast cancer. PMID:17409321
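As a hedged illustration of the accuracy measures named above, the sketch below computes sensitivity, specificity, positive predictive value, and the area under the ROC curve on synthetic screening data; the prevalence and score model are assumptions, not the study's data.

```python
# Screening-accuracy measures on synthetic labels and suspicion scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
cancer = rng.binomial(1, 0.005, 100_000)   # roughly 5 cancers per 1000 screens
score = rng.normal(cancer * 1.5, 1.0)      # reader suspicion score
positive = score > 1.0                     # recalled interpretations

tp = np.sum(positive & (cancer == 1))
fp = np.sum(positive & (cancer == 0))
fn = np.sum(~positive & (cancer == 1))
tn = np.sum(~positive & (cancer == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
auc = roc_auc_score(cancer, score)         # overall accuracy, as in the study
print(sensitivity, specificity, ppv, auc)
```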
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1996-01-01
A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamics Facilities Complex wind tunnels.
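As a rough sketch of the kind of real-gas relation such an algorithm evaluates, the code below implements a density-form truncated virial equation of state, p = rho*R*T*(1 + B*rho + C*rho^2); the coefficient values are placeholders for illustration, not the facility's calibrated data, and the full algorithm would couple this with the isentropic-flow relations.

```python
# Truncated virial equation of state: p = rho*R*T*(1 + B*rho + C*rho**2).
R_N2 = 296.8  # specific gas constant of N2, J/(kg*K)

def virial_pressure(rho, T, B, C, R=R_N2):
    """Pressure in Pa from density (kg/m^3) and temperature (K)."""
    return rho * R * T * (1.0 + B * rho + C * rho**2)

# At low density the corrections vanish and the ideal-gas law is recovered.
print(virial_pressure(1.0, 300.0, B=-1.5e-3, C=2.0e-6))   # near rho*R*T
print(virial_pressure(50.0, 300.0, B=-1.5e-3, C=2.0e-6))  # real-gas departure
```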
Core commands across airway facilities systems.
DOT National Transportation Integrated Search
2003-05-01
This study takes a high-level approach to evaluate computer systems without regard to the specific method of interaction. This document analyzes the commands that Airway Facilities (AF) use across different systems and the meanings attributed to ...
Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, AD; Page, Christina; Lytle, Bob
The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
1980-06-05
N-231 High Reynolds Number Channel Facility (An example of a Versatile Wind Tunnel). Tunnel 1 is a blowdown facility that utilizes interchangeable test sections and nozzles. The facility provides experimental support for fluid mechanics research, including experimental verification of aerodynamic computer codes and boundary-layer and airfoil studies that require high Reynolds number simulation. (Tunnel 1)
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.
2016-04-01
Various structural variants of low-emission burner facilities intended for burning char gas in an operating TP-101 boiler at the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes necessitates its co-combustion. This created the need to develop a burner facility of a given capacity that burns char gas effectively while meeting reliability and environmental requirements. To this end, the burner design was based on staged fuel combustion with gas recirculation. From a preliminary analysis of possible design variants, three types of previously well-proven burner facilities were chosen: a vortex burner with recirculation gases supplied into the secondary air, a vortex burner with recirculation gases supplied as a baffle flow between the primary and secondary air, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined by numerical experiments, carried out with the ANSYS CFX computational fluid dynamics package and simulating the mixing, ignition, and combustion of char gas. For each type of burner facility, these experiments determined the structural and operating parameters that yield effective char gas combustion and meet the required environmental standard on nitrogen oxide emissions. Based on the computational results, a burner facility for char gas combustion with a pilot diffusion burner in its central part was developed. Preliminary full-scale verification tests on the TP-101 boiler showed that the actual content of nitrogen oxides in char gas burner flames did not exceed the declared concentration of 150 ppm (200 mg/m³).
Evaluation of Visual Computer Simulator for Computer Architecture Education
ERIC Educational Resources Information Center
Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio
2013-01-01
This paper presents a trial evaluation, conducted from 2009 to 2011, of a visual computer simulator that was developed to serve simultaneously as both an instructional facility and a learning tool. It illustrates an example of computer architecture education for university students and the use of an e-learning tool for assembly programming in order to…
Hybrid Computation at Louisiana State University.
ERIC Educational Resources Information Center
Corripio, Armando B.
Hybrid computation facilities have been in operation at Louisiana State University since the spring of 1969. In part, they consist of an Electronics Associates, Inc. (EAI) Model 680 analog computer, an EAI Model 693 interface, and a Xerox Data Systems (XDS) Sigma 5 digital computer. The hybrid laboratory is used in a course on hybrid computation…
Computer Augmented Video Education.
ERIC Educational Resources Information Center
Sousa, M. B.
1979-01-01
Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
Hydrocode simulations of air and water shocks for facility vulnerability assessments.
Clutter, J Keith; Stahl, Michael
2004-01-02
Hydrocodes are widely used in the study of explosive systems but their use in routine facility vulnerability assessments has been limited due to the computational resources typically required. These requirements are due to the fact that the majority of hydrocodes have been developed primarily for the simulation of weapon-scale phenomena. It is not practical to use these same numerical frameworks on the large domains found in facility vulnerability studies. Here, a hydrocode formulated specifically for facility vulnerability assessments is reviewed. Techniques used to accurately represent the explosive source while maintaining computational efficiency are described. Submodels for addressing other issues found in typical terrorist attack scenarios are presented. In terrorist attack scenarios, loads produced by shocks play an important role in vulnerability. Because of differences in the material properties of water and air and because of interface phenomena, wave propagation differs significantly between these two media. These physical variations also require that special attention be paid to the mathematical and numerical models used in the hydrocodes. Simulations for a variety of air and water shock scenarios are presented to validate the computational models used in the hydrocode and highlight the phenomenological issues.
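The air/water contrast the abstract points to can be made concrete with the textbook acoustic-impedance relations for a plane interface; these are standard relations, not the hydrocode's internal models.

```python
# Pressure reflection/transmission coefficients at a plane interface,
# from acoustic impedances Z = rho * c.
def interface_coeffs(rho1, c1, rho2, c2):
    Z1, Z2 = rho1 * c1, rho2 * c2
    R = (Z2 - Z1) / (Z2 + Z1)   # reflected pressure amplitude ratio
    T = 2.0 * Z2 / (Z2 + Z1)    # transmitted pressure amplitude ratio
    return R, T

print(interface_coeffs(1.2, 340.0, 1000.0, 1480.0))   # air to water: R near +1
print(interface_coeffs(1000.0, 1480.0, 1.2, 340.0))   # water to air: R near -1
```

The near-total reflection in both directions is one reason air shocks and water shocks demand separate numerical treatment.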
Emerging CAE technologies and their role in Future Ambient Intelligence Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2011-03-01
Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks and pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.
Facilities | Computational Science | NREL
Advances technology innovation by providing scientists and engineers with the ability to tackle energy challenges and to take full advantage of advanced computing hardware and software resources.
Sandia National Laboratories: Locations: Kauai Test Facility
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
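HURAA's retrieval of documents from natural language queries can be illustrated, in a generic way, with a TF-IDF and cosine-similarity sketch; this is an assumption-laden stand-in, not HURAA's actual retrieval engine, and the document snippets are invented.

```python
# Generic natural-language document retrieval via TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Informed consent requirements for human subjects research",
    "Institutional review board responsibilities and procedures",
    "Privacy and confidentiality of participant records",
]
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

query = "what must an institutional review board examine before approving a study"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
best = scores.argmax()
print(documents[best], scores[best])   # highest-scoring document for the query
```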
Oak Ridge Leadership Computing Facility Position Paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oral, H Sarp; Hill, Jason J; Thach, Kevin G
This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.
Astronaut Thomas Jones anchored to bunk facility while working on computer
1994-04-14
STS059-10-011 (9-20 April 1994) --- Astronaut Thomas D. Jones appears to have climbed out of bed right into his work in this onboard 35mm frame. Actually, Jones had anchored himself in the bunk facility while working on one of the onboard computers, which transferred data to the ground via modem. The mission specialist was joined in space by five other NASA astronauts for a week and a half of support to the Space Radar Laboratory (SRL-1)/STS-59 mission.
77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-12
.... Facilities update. ESnet-5. Early Career technical talks. Co-design. Innovative and Novel Computational Impact on Theory and Experiment (INCITE). Public Comment (10-minute rule). Public Participation: The...
Argonne's Magellan Cloud Computing Research Project
Beckman, Pete
2017-12-11
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
Argonne's Magellan Cloud Computing Research Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Pete
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html
A Plan for Community College Instructional Computing.
ERIC Educational Resources Information Center
Howard, Alan; And Others
This document presents a comprehensive plan for future growth in instructional computing in the Washington community colleges. Two chapters define the curriculum objectives and content recommended for instructional courses in the community colleges which require access to computing facilities. The courses described include data processing…
Computer simulation: A modern day crystal ball?
NASA Technical Reports Server (NTRS)
Sham, Michael; Siprelle, Andrew
1994-01-01
It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an iconic-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
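The flavor of such icon-driven process models can be conveyed with a few lines of discrete-event simulation. The sketch below, in Python with the SimPy library, is not the Extend model itself; the durations and the single shared pad are assumptions for illustration.

```python
# Toy discrete-event model of shuttle processing contending for one launch pad.
import simpy

def orbiter_flow(env, name, pad, launches):
    yield env.timeout(30)              # days of standalone orbiter processing
    with pad.request() as req:         # queue for the shared launch pad
        yield req
        yield env.timeout(10)          # days on the pad before launch
        launches.append((name, env.now))

env = simpy.Environment()
pad = simpy.Resource(env, capacity=1)
launches = []
for i in range(8):
    env.process(orbiter_flow(env, f"flight-{i + 1}", pad, launches))
env.run()
print(launches)                        # launch dates reveal the pad bottleneck
```

Statistics such as facility utilization and wait times fall out of the same event log.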
Users Guide for the National Transonic Facility Research Data System
NASA Technical Reports Server (NTRS)
Foster, Jean M.; Adcock, Jerry B.
1996-01-01
The National Transonic Facility is a complex cryogenic wind tunnel facility. This report briefly describes the facility, the data systems, and the instrumentation used to acquire research data. The computational methods and equations are discussed in detail and many references are listed for those who need additional technical information. This report is intended to be a user's guide, not a programmer's guide; therefore, the data reduction code itself is not documented. The purpose of this report is to assist personnel involved in conducting a test in the National Transonic Facility.
Autonomous Electrothermal Facility for Oil Recovery Intensification Fed by Wind Driven Power Unit
NASA Astrophysics Data System (ADS)
Belsky, Aleksey A.; Dobush, Vasiliy S.
2017-10-01
This paper describes the structure of an autonomous facility, fed by a wind-driven power unit, for intensifying the recovery of viscous and heavy crude oil by means of heat impact on the productive strata. Computer-based simulation of this facility was performed. Operational energy characteristics were obtained for various operating modes of the facility. The optimal resistance of the downhole heater's heating element was determined for maximum operating efficiency of the wind power unit.
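The optimal-resistance question has a classical core: for a source of internal resistance r, the power delivered to a heating element of resistance R is P = V^2 * R / (R + r)^2, which peaks at R = r (maximum power transfer). The sketch below illustrates this with assumed values, not the paper's data.

```python
# Maximum power transfer: sweep heater resistances for a fixed source.
def heater_power(V, r, R):
    """Power (W) delivered to load R (ohm) from source V (V), internal r (ohm)."""
    return V**2 * R / (R + r) ** 2

V, r = 400.0, 2.0
candidates = [0.5, 1.0, 2.0, 4.0, 8.0]
best = max(candidates, key=lambda R: heater_power(V, r, R))
print(best, heater_power(V, r, best))  # peak power where R equals r
```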
Ten Commandments for Microcomputer Facility Planners.
ERIC Educational Resources Information Center
Espinosa, Leonard J.
1991-01-01
Presents factors involved in designing a microcomputer facility, including how computers will be used in the instructional program; educational specifications; planning committees; user input; quality of purchases; visual supervision considerations; location; workstation design; turnkey systems; electrical requirements; local area networks;…
Supporting NASA Facilities Through GIS
NASA Technical Reports Server (NTRS)
Ingham, Mary E.
2000-01-01
The NASA GIS Team supports NASA facilities and partners in the analysis of spatial data. A Geographic Information System (GIS) is an integration of computer hardware, software, and personnel linking topographic, demographic, utility, facility, image, and other geo-referenced data. The system provides a graphic interface to relational databases and supports decision-making processes such as planning, design, maintenance and repair, and emergency response.
Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute
NASA Astrophysics Data System (ADS)
Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.
2004-02-01
The complexity of space fission systems and the stringency of requirements on minimizing weight and dimensions, along with the desire to decrease development expenditures, demand experimental work whose results shall be used in designing, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and "ironclad" substantiation of criticality-safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor were created at the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience shall be analyzed in terms of the methodological approach to developing future space nuclear systems (this analysis is beyond the scope of this paper). Because these facilities are available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying the multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
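A minimal sample average approximation sketch, on invented data, shows the idea: sample failure scenarios, then score each first-stage decision by its average recourse cost (brute-force enumeration here stands in for the paper's Benders decomposition).

```python
# SAA for a toy two-stage problem with site-specific failure probabilities.
import itertools
import random

random.seed(0)
open_cost = [10.0, 12.0, 8.0]
serve_cost = [[4, 7, 9], [6, 3, 8], [9, 6, 2], [5, 5, 5]]  # demand x facility
fail_prob = [0.10, 0.30, 0.05]
penalty = 50.0                   # cost when every opened facility has failed
N = 1000                         # sampled scenarios
scenarios = [[random.random() < p for p in fail_prob] for _ in range(N)]

def recourse(open_set, failed):
    alive = [f for f in open_set if not failed[f]]
    return sum(min((serve_cost[d][f] for f in alive), default=penalty)
               for d in range(len(serve_cost)))

best = None
for k in range(1, 4):
    for open_set in itertools.combinations(range(3), k):
        cost = (sum(open_cost[f] for f in open_set)
                + sum(recourse(open_set, s) for s in scenarios) / N)
        if best is None or cost < best[1]:
            best = (open_set, cost)
print(best)   # SAA-optimal facilities to open and estimated expected cost
```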
Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds
Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...
2016-02-18
In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.
Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn
In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Allcock, William; Beggio, Chris
2014-10-17
U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.
The NASA Ames 16-Inch Shock Tunnel Nozzle Simulations and Experimental Comparison
NASA Technical Reports Server (NTRS)
Tokarcik-Polsky, S.; Papadopoulos, P.; Venkatapathy, E.; Delwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
The 16-Inch Shock Tunnel at NASA Ames Research Center is a unique test facility used for hypersonic propulsion testing. To provide information necessary to understand the hypersonic testing of the combustor model, computational simulations of the facility nozzle were performed and results are compared with available experimental data, namely static pressure along the nozzle walls and pitot pressure at the exit of the nozzle section. Both quasi-one-dimensional and axisymmetric approaches were used to study the numerous modeling issues involved. The facility nozzle flow was examined for three hypersonic test conditions, and the computational results are presented in detail. The effects of variations in reservoir conditions, boundary layer growth, and parameters of numerical modeling are explored.
Ground Software Maintenance Facility (GSMF) system manual
NASA Technical Reports Server (NTRS)
Derrig, D.; Griffith, G.
1986-01-01
The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (Host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, Perkin Elmer host, and MITRA ATE.
Some Computer-Based Developments in Sociology.
ERIC Educational Resources Information Center
Heise, David R.; Simmons, Roberta G.
1985-01-01
Discusses several ways in which computers are being used in sociology and how they continue to change this discipline. Areas considered include data collection, data analysis, simulations of social processes based on mathematical models, and problem areas (including standardization concerns, training, and the financing of computing facilities).…
Code of Federal Regulations, 2014 CFR
2014-01-01
... terrestrial technology having the capacity to provide transmission facilities that enable subscribers of the...) Computer Access Points and wireless access, that is used for the purposes of providing free access to and..., and after normal working hours and on Saturdays or Sunday. Computer Access Point means a new computer...
Reliable Facility Location Problem with Facility Protection
Tang, Luohao; Zhu, Cheng; Lin, Zaili; Shi, Jianmai; Zhang, Weiming
2016-01-01
This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed. PMID:27583542
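The role of backup facilities can be written out directly: if a demand's facilities are ordered nearest-first and fail independently with site-specific probabilities q_f, the level-r facility serves the demand only when all closer ones have failed. The sketch below evaluates that expected cost on invented numbers, not the Hunan case data.

```python
# Expected service cost with an ordered backup list and independent failures.
def expected_cost(order, q, cost, penalty):
    """order: facilities nearest-first; q[f]: failure prob; cost[f]: service cost."""
    total, p_all_closer_failed = 0.0, 1.0
    for f in order:
        total += p_all_closer_failed * (1.0 - q[f]) * cost[f]
        p_all_closer_failed *= q[f]
    return total + p_all_closer_failed * penalty   # no facility available

q = {0: 0.10, 1: 0.20, 2: 0.05}
cost = {0: 4.0, 1: 6.0, 2: 9.0}
print(expected_cost([0, 1, 2], q, cost, penalty=50.0))
```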
NASA Technical Reports Server (NTRS)
Duke, E. L.; Regenie, V. A.; Deets, D. A.
1986-01-01
The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.
A rapid prototyping facility for flight research in advanced systems concepts
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Brumbaugh, Randal W.; Disbrow, James D.
1989-01-01
The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real-time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.
Soviet Cybernetics Review. Volume 2, Number 5,
prize; Aeroflot’s sirena system turned on; Computer system controls 2500 construction sites; Automation of aircraft languages; Diagnosis by teletype; ALGEM-1 and ALGEM-2 languages; Nuclear institute’s computer facilities.
INTERIOR; VIEW OF ENTRY HALL, LOOKING SOUTH. Naval Computer ...
INTERIOR; VIEW OF ENTRY HALL, LOOKING SOUTH. - Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Marine Barracks, Intersection of Tower Drive & Morse Street, Makaha, Honolulu County, HI
Lean coding machine. Facilities target productivity and job satisfaction with coding automation.
Rollins, Genna
2010-07-01
Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.
Berkeley Lab - Materials Sciences Division
MSD facilities include the Center for Computational Study of Excited-State Phenomena in Energy Materials and the Center for X-ray Optics (Patrick Naulleau, Director).
Simplifying Facility and Event Scheduling: Saving Time and Money.
ERIC Educational Resources Information Center
Raasch, Kevin
2003-01-01
Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides examples of school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)
Designing Communication and Learning Environments.
ERIC Educational Resources Information Center
Gayeski, Diane M., Ed.
Designing and remodeling educational facilities are becoming more complex with options that include computer-based collaboration, classrooms with multimedia podiums, conference centers, and workplaces with desktop communication systems. This book provides a collection of articles that address educational facility design categorized in the…
45 CFR 1614.3 - Range of activities.
Code of Federal Regulations, 2013 CFR
2013-10-01
... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...
45 CFR 1614.3 - Range of activities.
Code of Federal Regulations, 2014 CFR
2014-10-01
... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...
45 CFR 1614.3 - Range of activities.
Code of Federal Regulations, 2012 CFR
2012-10-01
... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...
2006-06-23
KENNEDY SPACE CENTER, FLA. - An overview of the new Firing Room 4 shows the expanse of computer stations and the various operations the facility will be able to manage. FR 4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis
Administration of Computer Resources.
ERIC Educational Resources Information Center
Franklin, Gene F.
Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...
How You Can Protect Public Access Computers "and" Their Users
ERIC Educational Resources Information Center
Huang, Phil
2007-01-01
By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…
The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case
NASA Astrophysics Data System (ADS)
Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.
2017-10-01
The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the facilities that provide them has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.
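The elastic-provisioning idea can be caricatured in a few lines: burst to the cloud only for demand the local facility cannot absorb, and only while the spot price stays below a cost ceiling. This is a hypothetical sketch, not HEPCloud's actual Decision Engine, and all names and numbers are invented.

```python
# Toy cost-aware bursting decision in the spirit of elastic provisioning.
def cores_to_provision(queued_jobs, cores_per_job, local_free_cores,
                       spot_price, max_price):
    """Return the number of cloud cores to request (0 if too costly or unneeded)."""
    needed = queued_jobs * cores_per_job - local_free_cores
    if needed <= 0 or spot_price > max_price:
        return 0
    return needed

print(cores_to_provision(queued_jobs=20_000, cores_per_job=8,
                         local_free_cores=150_000,
                         spot_price=0.12, max_price=0.25))  # 10000 cloud cores
```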
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
Klimentov, A.; Buncic, P.; De, K.; ...
2015-05-22
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Performance Predictions for Proposed ILS Facilities at St. Louis Municipal Airport
DOT National Transportation Integrated Search
1978-01-01
The results of computer simulations of performance of proposed ILS facilities on Runway 12L/30R at St. Louis Municipal Airport (Lambert Field) are reported. These simulations indicate that an existing industrial complex located near the runway is com...
Biotechnology Facility (BTF) for ISS
NASA Technical Reports Server (NTRS)
1998-01-01
Engineering mockup shows the general arrangement of the planned Biotechnology Facility inside an EXPRESS rack aboard the International Space Station. This layout includes a gas supply module (bottom left), control computer and laptop interface (bottom right), two rotating wall vessels (top right), and support systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peavler, J.
1979-06-01
This publication gives details about the hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. Languages, compilers, libraries, and applications packages available are described. 17 tables. (RWR)
Advanced ballistic range technology
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1994-01-01
The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.
NASA Astrophysics Data System (ADS)
Ovsiannikov, Mikhail; Ovsiannikov, Sergei
2017-01-01
The paper presents a combined approach to noise mapping and visualization of industrial facility sound pollution using a forward ray tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones based on a ray tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
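As an illustration of the interpolation step, the sketch below fits a thin-plate spline to scattered sound-level measurements using SciPy's RBFInterpolator. The coordinates and decibel values are invented stand-ins, not the paper's measured data.

```python
# Sketch: interpolate sound-pressure levels measured on a zone's perimeter
# and interior onto a regular grid with a thin-plate spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(30, 2))                  # measurement points (m)
spl = 60 + 10 * np.sin(pts[:, 0] / 20) + rng.normal(0, 1, 30)  # measured dB

tps = RBFInterpolator(pts, spl, kernel="thin_plate_spline")

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
levels = tps(grid).reshape(gx.shape)                     # dB map over the zone
print(f"peak interpolated level: {levels.max():.1f} dB")
```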
Matrix computations in MACSYMA
NASA Technical Reports Server (NTRS)
Wang, P. S.
1977-01-01
Facilities built into MACSYMA for manipulating matrices with numeric or symbolic entries are described. Computations are done exactly, keeping symbols as symbols. Topics discussed include how to form a matrix and create other matrices by transforming existing matrices within MACSYMA; arithmetic and other computation with matrices; and user control of computational processes through the use of optional variables. Two algorithms designed for sparse matrices are given. The computing times of several different ways to compute the determinant of a matrix are compared.
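The abstract shows no MACSYMA syntax, but the same style of exact symbolic matrix computation can be sketched with a rough modern analogue, SymPy; this illustrates the idea only and is not MACSYMA code.

```python
# Rough modern analogue of the MACSYMA facilities described above: exact
# symbolic matrix arithmetic in SymPy, with symbols kept as symbols.
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
M = sp.Matrix([[a, b], [c, d]])

print(M.det())    # a*d - b*c, computed exactly
print(M * M)      # matrix arithmetic on symbolic entries
print(M.inv())    # symbolic inverse, entries divided by the determinant
```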
Flow Characterization Studies of the 10-MW TP3 Arc-Jet Facility: Probe Sweeps
NASA Technical Reports Server (NTRS)
Goekcen, Tahir; Alunni, Antonella I.
2016-01-01
This paper reports computational simulations and analysis in support of calibration and flow characterization tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted in the NASA Ames 10-MW TP3 facility using flat-faced stagnation calorimeters at six conditions corresponding to the steps of a simulated flight heating profile. Data were obtained using a conical nozzle test configuration in which the models were placed in a free jet downstream of the nozzle. Experimental surveys of the arc-jet test flow with pitot pressure and heat flux probes were also performed at these arc-heater conditions, providing an assessment of the flow uniformity and valuable data for the flow characterization. Two different sets of pitot pressure and heat flux probes were used: 9.1-mm sphere-cone probes (nose radius of 4.57 mm or 0.18 in) with null-point heat flux gages, and 15.9-mm (0.625 in) diameter hemisphere probes with Gardon gages. The probe survey data clearly show that the test flow in the TP3 facility is not uniform at most conditions (not even axisymmetric at some conditions), and the extent of non-uniformity is highly dependent on various arc-jet parameters such as arc current, mass flow rate, and the amount of cold-gas injection at the arc-heater plenum. The present analysis comprises computational fluid dynamics simulations of the nonequilibrium flowfield in the facility nozzle and test box, including the models tested. Comparisons of computations with the experimental measurements show reasonably good agreement except at the extremely low pressure conditions of the facility envelope.
Automation of electromagnetic compatibility (EMC) test facilities
NASA Technical Reports Server (NTRS)
Harrison, C. A.
1986-01-01
Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
A test matrix sequencer for research test facility automation
NASA Technical Reports Server (NTRS)
Mccartney, Timothy P.; Emery, Edward F.
1990-01-01
The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
Real-time data-intensive computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander
2016-07-27
Today users visit synchrotrons as sources of understanding and discovery, not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and, increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.
The iPad and EFL Digital Literacy
NASA Astrophysics Data System (ADS)
Meurant, Robert C.
In future, the uses of English by non-native speakers will predominantly be online, using English language digital resources, and in computer-mediated communication with other non-native speakers of English. Thus for Korea to be competitive in the global economy, its EFL should develop L2 Digital Literacy in English. With its fast Internet connections, Korea is the most wired nation on Earth; but ICT facilities in educational institutions need reorganization. Opportunities for computer-mediated second language learning need to be increased, providing multimedia-capable, mobile web solutions that put the Internet into the hands of all students and teachers. Wi-Fi networked campuses allow any campus space to act as a wireless classroom. Every classroom should have a teacher's computer console. All students should be provided with adequate computing facilities, that are available anywhere, anytime. Ubiquitous computing has now become feasible by providing every student on enrollment with a tablet: a Wi-Fi+3G enabled Apple iPad.
36 CFR Appendix A to Part 1234 - Minimum Security Standards for Level III Federal Facilities
Code of Federal Regulations, 2014 CFR
2014-07-01
... technology and blast standards. Immediate review of ongoing projects may generate savings in the... critical systems (alarm systems, radio communications, computer facilities, etc.) Required. Occupant... all exterior windows (shatter protection) Recommended. Review current projects for blast standards...
36 CFR Appendix A to Part 1234 - Minimum Security Standards for Level III Federal Facilities
Code of Federal Regulations, 2013 CFR
2013-07-01
... construction projects should be reviewed if possible, to incorporate current technology and blast standards... critical systems (alarm systems, radio communications, computer facilities, etc.) Required. Occupant... all exterior windows (shatter protection) Recommended. Review current projects for blast standards...
36 CFR Appendix A to Part 1234 - Minimum Security Standards for Level III Federal Facilities
Code of Federal Regulations, 2012 CFR
2012-07-01
... technology and blast standards. Immediate review of ongoing projects may generate savings in the... critical systems (alarm systems, radio communications, computer facilities, etc.) Required. Occupant... all exterior windows (shatter protection) Recommended. Review current projects for blast standards...
CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Skokova, Kristina A.
2017-01-01
This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Panel test articles included a metallic separation bolt imbedded in the compression-pad and heat shield materials, resulting in a circular protuberance over a flat plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the non-equilibrium flow field in the facility nozzle, test box, and flow field over test articles, and comparisons with the measured calibration data.
Voting with Their Seats: Computer Laboratory Design and the Casual User
ERIC Educational Resources Information Center
Spennemann, Dirk H. R.; Atkinson, John; Cornforth, David
2007-01-01
Student computer laboratories are provided by most teaching institutions around the world; however, what is the most effective layout for such facilities? The log-in data files from computer laboratories at a regional university in Australia were analysed to determine whether there was a pattern in student seating. In particular, it was…
A Functional Specification for a Programming Language for Computer Aided Learning Applications.
ERIC Educational Resources Information Center
National Research Council of Canada, Ottawa (Ontario).
In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…
WIRELESS Computing in Schools: Reach Out and Touch the World.
ERIC Educational Resources Information Center
Null, Linda; Teschner, Randy
Many elementary and secondary schools tie in with local colleges and universities and use modems to access the computing power available at these higher education facilities. To help alleviate the financial burden of long-distance phone charges, work has begun on using the airwaves instead of phone lines for computer communication. An interest in…
Payload/orbiter contamination control requirement study: Computer interface
NASA Technical Reports Server (NTRS)
Bareiss, L. E.; Hooper, V. W.; Ress, E. B.
1976-01-01
The MSFC computer facilities and future plans for them are described relative to the characteristics of the various computers as to availability and suitability for processing the contamination program. A listing of the CDC 6000 series and UNIVAC 1108 characteristics is presented so that programming requirements can be compared directly and differences noted.
Computers and Play in Early Childhood: Affordances and Limitations
ERIC Educational Resources Information Center
Verenikina, Irina; Herrington, Jan; Peterson, Rob; Mantei, Jessica
2010-01-01
The widespread proliferation of computer games for children as young as six months of age, merits a reexamination of their manner of use and a review of their facility to provide opportunities for developmental play. This article describes a research study conducted to explore the use of computer games by young children, specifically to…
Space Age Multi-CPU Computer Network Is Just for Fun and Education, Too.
ERIC Educational Resources Information Center
Technological Horizons in Education, 1980
1980-01-01
Describes the Computer Gallery at Sesame Place, the first commercial permanent educational play park: 56 Apple II computers linked by three Nestar Cluster/One Model A hard disc systems. Programs for this hands-on indoor/outdoor park as well as a description of the facility are given. (JN)
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
10 CFR 1703.112 - Computation of time.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Computation of time. 1703.112 Section 1703.112 Energy DEFENSE NUCLEAR FACILITIES SAFETY BOARD PUBLIC INFORMATION AND REQUESTS § 1703.112 Computation of time. In... until the end of the next working day. Whenever a person has the right or is required to take some...
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facility capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turnaround when processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We present how we enabled highly fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.
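One concrete piece of the fault tolerance that spot-market computing demands is reacting to interruption notices. The sketch below polls the standard EC2 instance-metadata endpoint for a pending spot reclamation; the checkpoint hook is hypothetical and not part of HySDS itself.

```python
# Sketch: poll the EC2 metadata service for a pending spot interruption and
# checkpoint before the instance is reclaimed. The checkpoint hook is a
# hypothetical stand-in for application-specific state saving.
import time
import requests

SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending() -> bool:
    """True if EC2 has scheduled this spot instance for reclamation."""
    try:
        r = requests.get(SPOT_ACTION_URL, timeout=1)
        return r.status_code == 200        # 404 means nothing is scheduled
    except requests.RequestException:
        return False                       # not on EC2, or metadata unreachable

def checkpoint_and_drain():
    # hypothetical hook: persist partial products, stop accepting new work
    print("interruption notice received; checkpointing")

for _ in range(720):                       # poll every 5 s for up to an hour
    if interruption_pending():
        checkpoint_and_drain()
        break
    time.sleep(5)
```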
Information Presentation and Control in a Modern Air Traffic Control Tower Simulator
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton
1996-01-01
The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.
Support System Effects on the NASA Common Research Model
NASA Technical Reports Server (NTRS)
Rivers, S. Melissa B.; Hunter, Craig A.
2012-01-01
An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experimental and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.
Surgical resource utilization in urban terrorist bombing: a computer simulation.
Hirshberg, A; Stein, M; Walden, R
1999-09-01
The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
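A discrete-event model of this kind can be sketched compactly with the SimPy library: casualties compete for a fixed number of shock rooms. The arrival rate, treatment time, and room count below are invented for illustration, not the paper's calibrated incident data.

```python
# Minimal discrete-event sketch of the modeled bottleneck: casualties queue
# for a fixed number of shock rooms. All counts and times are illustrative.
import random
import simpy

random.seed(1)

def casualty(env, name, shock_rooms, log):
    arrive = env.now
    with shock_rooms.request() as req:
        yield req                                      # queue for a free room
        log.append((name, env.now - arrive))           # record the wait
        yield env.timeout(random.expovariate(1 / 25))  # ~25 min treatment

def arrivals(env, shock_rooms, log):
    for i in range(30):                                # surge of 30 casualties
        env.process(casualty(env, f"casualty-{i}", shock_rooms, log))
        yield env.timeout(random.expovariate(1 / 2))   # ~1 arrival per 2 min

env = simpy.Environment()
shock_rooms = simpy.Resource(env, capacity=2)
waits = []
env.process(arrivals(env, shock_rooms, waits))
env.run()
mean_wait = sum(w for _, w in waits) / len(waits)
print(f"mean wait for a shock room: {mean_wait:.1f} min")
```

Varying the capacity parameter is the experiment the abstract describes: sweeping staff and facility counts to find where the bottleneck moves.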
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating data sets large enough that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
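The abstract gives no Tigres code, but the "template" idea, a reusable computation pattern applied to arbitrary tasks, can be sketched generically in Python. The names below are hypothetical illustrations and are not the actual Tigres API.

```python
# Illustrative only: a toy "template" in the spirit described above, not the
# Tigres API. A parallel-map pattern is captured once and reused over any
# task function and data set.
from concurrent.futures import ProcessPoolExecutor

def parallel_template(task, inputs, workers=4):
    """Reusable pattern: apply `task` to every input in parallel."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, inputs))

def analyze(chunk):                        # stand-in analysis task
    return sum(chunk) / len(chunk)

if __name__ == "__main__":
    data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(parallel_template(analyze, data))   # [2.0, 5.0, 8.0]
```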
Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system
NASA Technical Reports Server (NTRS)
1974-01-01
A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)
NASA Astrophysics Data System (ADS)
Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.
2014-06-01
The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented on multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure is therefore described, based on GPFS and Xrootd, used both for the SRM data repository and for interactive POSIX access. Such a common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also serves a national computing facility used by the INFN theoretical community, enabling synergistic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility to provide resources for all intermediate-level HPC computing needs of the national INFN theoretical community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
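The checkpoint-and-restart idea in an MPI setting can be sketched briefly with mpi4py. The file names and the work loop below are illustrative stand-ins, not the framework's actual interface.

```python
# Sketch of MPI checkpointing: each rank periodically saves its state so a
# killed run can restart where it left off. Illustrative only.
import os
import pickle
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
ckpt = f"checkpoint_rank{rank}.pkl"

# Restart from a saved checkpoint if one exists.
if os.path.exists(ckpt):
    with open(ckpt, "rb") as f:
        state = pickle.load(f)
else:
    state = {"next_event": 0, "tally": 0.0}

for event in range(state["next_event"], 1000):
    state["tally"] += 1.0 / (event + 1)    # stand-in for particle transport
    if event % 100 == 0:                   # checkpoint every 100 histories
        state["next_event"] = event + 1
        with open(ckpt, "wb") as f:
            pickle.dump(state, f)

total = comm.reduce(state["tally"], op=MPI.SUM, root=0)
if rank == 0:
    print(f"aggregated tally across ranks: {total:.3f}")
```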
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted version will be the most efficient and economical one. However, in modern market conditions, and given the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to use the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the complexity of designing. Methods of structural and parametric optimization have been adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes are constructed that illustrate and explain the introduction of structural optimization into the traditional design of steel frames. To solve the problem of synthesis and comparison of design solutions for steel frames, it is proposed to develop computing facilities that significantly reduce the complexity of search designing, based on the use of methods of structural and parametric optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurie, Carol
2017-02-01
This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.
History of a Building Automation System.
ERIC Educational Resources Information Center
Martin, Anthony A.
1984-01-01
Having successfully used computer control in the solar-heated and cooled Terraset School, the Fairfax County, VA, Public Schools are now computerizing all their facilities. This article discusses the configuration and use of a countywide control system, reasons for the project's success, and problems of facility automation. (MCG)
Numerical aerodynamic simulation facility preliminary study: Executive study
NASA Technical Reports Server (NTRS)
1977-01-01
A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.
Numerical Investigation of Double-Cone Flows with High Enthalpy Effects
NASA Astrophysics Data System (ADS)
Nompelis, I.; Candler, G. V.
2009-01-01
A numerical study of shock/shock and shock/boundary layer interactions generated by a double-cone model that is placed in a hypersonic free-stream is presented. Computational results are compared with the experimental measurements made at the CUBRC LENS facility for nitrogen flows at high enthalpy conditions. The CFD predictions agree well with surface pressure and heat-flux measurements for all but one of the double-cone cases that have been studied by the authors. Unsteadiness is observed in computations of one of the LENS cases; however, for this case the experimental measurements show that the flowfield is steady. To understand this discrepancy, several double-cone experiments performed in two different facilities with both air and nitrogen as the working gas are examined in the present study. Computational results agree well with measurements made in both the AEDC Tunnel 9 and the CUBRC LENS facility for double-cone flows at low free-stream Reynolds numbers where the flow is steady. It is shown that at higher free-stream pressures the double-cone simulations develop instabilities that result in an unsteady separation.
Data Transfer Study HPSS Archiving
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn
2015-01-01
The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies that purge old files to make room for new computation and analysis results. Users at the Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, so the time associated with data movement for archiving is something all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions reflecting OLCF user data. This data will be used to help users of Titan and other Cray supercomputers plan their workflows and data transfers so that they are most efficient for their projects. We also discuss best practices for maintaining data at shared user facilities.
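One widely cited best practice in this area is bundling many small files into a single archive before moving them to tape-backed storage, since per-file overhead dominates small transfers. The sketch below does this with Python's tarfile module; the paths are hypothetical placeholders, and sites typically provide dedicated tools such as hsi/htar for the actual HPSS transfer.

```python
# Sketch: bundle many small result files into one tar archive before moving
# data to tape-backed storage such as HPSS. Paths are placeholders.
import tarfile
from pathlib import Path

src = Path("/lustre/project/run42")        # hypothetical Lustre scratch dir
bundle = Path("run42_results.tar")

with tarfile.open(bundle, "w") as tar:     # no compression: tape-friendly
    for f in sorted(src.glob("*.dat")):
        tar.add(f, arcname=f.name)

print(f"bundled {bundle.stat().st_size / 1e9:.2f} GB; ready for archiving")
```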
Computational Analysis of Arc-Jet Wedge Tests Including Ablation and Shape Change
NASA Technical Reports Server (NTRS)
Goekcen, Tahir; Chen, Yih-Kanq; Skokova, Kristina A.; Milos, Frank S.
2010-01-01
Coupled fluid-material response analyses of arc-jet wedge ablation tests conducted in a NASA Ames arc-jet facility are considered. These tests were conducted using blunt wedge models placed in a free jet downstream of the 6-inch diameter conical nozzle in the Ames 60-MW Interaction Heating Facility. The fluid analysis includes computational Navier-Stokes simulations of the nonequilibrium flowfield in the facility nozzle and test box as well as the flowfield over the models. The material response analysis includes simulation of two-dimensional surface ablation and internal heat conduction, thermal decomposition, and pyrolysis gas flow. For ablating test articles undergoing shape change, the material response and fluid analyses are coupled in order to calculate the time dependent surface heating and pressure distributions that result from shape change. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator. Effects of the test article shape change on fluid and material response simulations are demonstrated, and computational predictions of surface recession, shape change, and in-depth temperatures are compared with the experimental measurements.
47 CFR 87.143 - Transmitter control requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 87.143 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO..., the control point for an automatically controlled enroute station is the computer facility which controls the transmitter. Any computer controlled transmitter must be equipped to automatically shut down...
47 CFR 87.143 - Transmitter control requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 87.143 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO..., the control point for an automatically controlled enroute station is the computer facility which controls the transmitter. Any computer controlled transmitter must be equipped to automatically shut down...
47 CFR 87.143 - Transmitter control requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Section 87.143 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO..., the control point for an automatically controlled enroute station is the computer facility which controls the transmitter. Any computer controlled transmitter must be equipped to automatically shut down...
47 CFR 87.143 - Transmitter control requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Section 87.143 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO..., the control point for an automatically controlled enroute station is the computer facility which controls the transmitter. Any computer controlled transmitter must be equipped to automatically shut down...
47 CFR 87.143 - Transmitter control requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Section 87.143 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO..., the control point for an automatically controlled enroute station is the computer facility which controls the transmitter. Any computer controlled transmitter must be equipped to automatically shut down...
WORLDWIDE COLLECTION AND EVALUATION OF EARTHQUAKE DATA
period, the hypocenter and magnitude programs were tested and then used to process January 1964 data at the computer facilities of the Environmental Science Services Administration (ESSA), Suitland, Maryland, using the CDC 6600 computer. Results of this processing are shown.
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
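The shape of such a cost analysis can be sketched with a toy model: the amortized cost of a dedicated cluster per used core-hour against an on-demand cloud price. Every number below is a made-up placeholder, not RACF's actual cost data.

```python
# Toy cost comparison: dedicated cluster (amortized hardware + operations)
# versus on-demand cloud at a given utilization. All figures are placeholders.
def dedicated_cost_per_used_core_hour(capex, lifetime_yr, opex_per_yr,
                                      cores, utilization):
    annual_cost = capex / lifetime_yr + opex_per_yr   # hardware + operations
    used_core_hours = cores * 8760 * utilization      # 8760 hours per year
    return annual_cost / used_core_hours

dedicated = dedicated_cost_per_used_core_hour(
    capex=4e6, lifetime_yr=4, opex_per_yr=1e6, cores=10_000, utilization=0.85)
cloud_price = 0.03                                    # assumed $/core-hour

print(f"dedicated: ${dedicated:.3f} per used core-hour")
print(f"cloud:     ${cloud_price:.3f} per used core-hour")
# With these numbers the well-utilized dedicated cluster is cheaper; at low
# utilization the fixed costs dominate and the comparison can flip.
```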
Computational Electromagnetics (CEM) Laboratory: Simulation Planning Guide
NASA Technical Reports Server (NTRS)
Khayat, Michael A.
2011-01-01
The simulation process, milestones and inputs are unknowns to first-time users of the CEM Laboratory. The Simulation Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
NASA Astrophysics Data System (ADS)
James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.
2018-03-01
Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
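One of the established relations such a state-to-state framework chains together is the isentropic total-to-static relation for a calorically perfect gas, T0/T = 1 + (gamma - 1)/2 * M^2 and p0/p = (T0/T)^(gamma/(gamma - 1)). The short function below evaluates it; it is a generic textbook building block, not the paper's full facility model.

```python
# Isentropic total-to-static ratios for a calorically perfect gas. A full
# expansion-tube model chains relations like this (plus shock and
# unsteady-expansion relations) section by section through the facility.
def isentropic_ratios(mach: float, gamma: float = 1.4) -> tuple[float, float]:
    """Return (T0/T, p0/p) at the given Mach number."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

print(isentropic_ratios(2.0))   # (1.8, ~7.82) for gamma = 1.4
```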
Evolving technologies for Space Station Freedom computer-based workstations
NASA Technical Reports Server (NTRS)
Jensen, Dean G.; Rudisill, Marianne
1990-01-01
Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.
1991-09-01
System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of...front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by...run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical
Rapid Prototyping of Computer-Based Presentations Using NEAT, Version 1.1.
ERIC Educational Resources Information Center
Muldner, Tomasz
NEAT (iNtegrated Environment for Authoring in ToolBook) provides templates and various facilities for the rapid prototyping of computer-based presentations, a capability that is lacking in current authoring systems. NEAT is a specialized authoring system that can be used by authors who have a limited knowledge of computer systems and no…
Undergraduate Student Task Group Approach to Complex Problem Solving Employing Computer Programming.
ERIC Educational Resources Information Center
Brooks, LeRoy D.
A project formulated a computer simulation game for use as an instructional device to improve financial decision making. The author constructed a hypothetical firm, specifying its environment, variables, and a maximization problem. Students, assisted by a professor and computer consultants and having access to B5500 and B6700 facilities, held 16…
NASA Technical Reports Server (NTRS)
Gerber, C. R.
1972-01-01
The development of uniform computer program standards and conventions for the modular space station is discussed. The accomplishments analyzed are: (1) development of computer program specification hierarchy, (2) definition of computer program development plan, and (3) recommendations for utilization of all operating on-board space station related data processing facilities.
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.
ERIC Educational Resources Information Center
Lippert, Henry T.; Harris, Edward V.
The diverse requirements for computing facilities in education place heavy demands upon available resources. Although multiple or very large computers can supply such diverse needs, their cost makes them impractical for many institutions. Small computers which serve a few specific needs may be an economical answer. However, to serve operationally…
Circus: A Replicated Procedure Call Facility
1984-08-01
Computer Science Laboratory, Xerox PARC, July 1982. [24] Bruce Jay Nelson. Remote Procedure Call. Ph.D. dissertation, Computer Science Department... Ph.D. dissertation, Computer Science Division, University of California, Berkeley, Xerox PARC report number CSIF 82-7, December 1982. [30] Tandem Computers Inc. GUARDIAN Operating System Programming Manual, Volumes 1 and 2. Cupertino, California, 1982. [31] R. H. Thomas. A majority
2004-04-15
The Wake Shield Facility (WSF) is a free-flying research and development facility that is designed to use the pure vacuum of space to conduct scientific research in the development of new materials. The thin film materials technology developed by the WSF could some day lead to applications such as faster electronics components for computers.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
...) disposal facilities. The workshop has been developed to facilitate communication among Federal and State... and conceptual models, and (3) the selection of computer codes. Information gathered from invited.... NRC Public Meeting The purpose of this public meeting is to facilitate communication and gather...
Variable gravity research facility
NASA Technical Reports Server (NTRS)
Allan, Sean; Ancheta, Stan; Beine, Donna; Cink, Brian; Eagon, Mark; Eckstein, Brett; Luhman, Dan; Mccowan, Daniel; Nations, James; Nordtvedt, Todd
1988-01-01
Spin and despin requirements; sequence of activities required to assemble the Variable Gravity Research Facility (VGRF); power systems technology; life support; thermal control systems; emergencies; communication systems; space station applications; experimental activities; computer modeling and simulation of tether vibration; cost analysis; configuration of the crew compartments; and tether lengths and rotation speeds are discussed.
DEVELOPMENT OF THE U.S. EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL
Metal finishing processes are a type of chemical process and can be modeled using Computer Aided Process Engineering (CAPE). Currently, the U.S. EPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a pollution prevention software tool for the meta...
Post-Secondary Institutions Facilities Inventory Operating Manual.
ERIC Educational Resources Information Center
British Columbia Dept. of Education, Victoria.
This manual presents the operations of British Columbia's computerized facilities inventory for post-secondary institutions. A brief summary describes the kinds of code tables used, the forms used to feed data into the computer, the types of printout reports available, and the responsibilities of institutions using the system. More detailed…
ERIC Educational Resources Information Center
Blodgett, Teresa; Repman, Judi
1995-01-01
Addresses the necessity of incorporating new computer technologies into school library resource centers and notes some administrative challenges. An extensive checklist is provided for assessing equipment and furniture needs, physical facilities, and rewiring needs. A glossary of 20 terms and 11 additional resources is included. (AEF)
Computer visualizations in engineering applications
NASA Astrophysics Data System (ADS)
Bills, K. C.
The use of computerized simulations of various robotic tasks via IGRIP software is reported. The projects include underwater activities demonstrating clean up of a quarry; time study of methods to store waste drums inside a facility; design walk-through of a new facility; plant layout flyover; and conceptual development and layout of new mechanisms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
... 0938-AP87 Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing... Payment System and Consolidated Billing for Skilled Nursing Facilities for FY 2011.'' DATES: Effective... illustrate the skilled nursing facility (SNF) prospective payment system (PPS) payment rate computations for...
Accommodating Technology in the Visual Literacy Classroom.
ERIC Educational Resources Information Center
Lloyd, Carla V.; Barnhurst, Kevin G.
The development of a visual literacy facility, the Creative Visual Lab, at the S. I. Newhouse School of Public Communications at Syracuse University (New York) is described. The facility was designed to provide students with the instruction that would develop their computer proficiency and visual sensitivity without being, in itself, completely…
Teaching ergonomics to nursing facility managers using computer-based instruction.
Harrington, Susan S; Walker, Bonnie L
2006-01-01
This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned into a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24%. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
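The flavor of such a-priori prediction can be sketched as a regression from scheduler-visible job features to mean power draw. The features and data below are hypothetical; the paper's actual models and features may differ.

```python
# Sketch: predict a job's mean power draw from scheduler-visible features
# with a random forest. Synthetic data stands in for real job history.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.integers(1, 300, n),        # nodes requested
    rng.uniform(0.1, 24.0, n),      # wall time requested (hours)
    rng.integers(0, 5, n),          # application family (encoded)
])
watts = 180 * X[:, 0] + 40 * X[:, 2] * X[:, 0] + rng.normal(0, 500, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, watts, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out jobs: {model.score(X_te, y_te):.2f}")
```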
Separating Added Value from Hype: Some Experiences and Prognostications
NASA Astrophysics Data System (ADS)
Reed, Dan
2004-03-01
These are exciting times for the interplay of science and computing technology. As new data archives, instruments and computing facilities are connected nationally and internationally, a new model of distributed scientific collaboration is emerging. However, any new technology brings both opportunities and challenges -- Grids are no exception. In this talk, we will discuss some of the experiences deploying Grid software in production environments, illustrated with experiences from the NSF PACI Alliance, the NSF Extensible Terascale Facility (ETF) and other Grid projects. From these experiences, we derive some guidelines for deployment and some suggestions for community engagement, software development and infrastructure
NASA Technical Reports Server (NTRS)
Ramsey, J. W., Jr.; Taylor, J. T.; Wilson, J. F.; Gray, C. E., Jr.; Leatherman, A. D.; Rooker, J. R.; Allred, J. W.
1976-01-01
The results of extensive computer (finite element, finite difference and numerical integration), thermal, fatigue, and special analyses of critical portions of a large pressurized, cryogenic wind tunnel (National Transonic Facility) are presented. The computer models, loading and boundary conditions are described. Graphic capability was used to display model geometry, section properties, and stress results. A stress criterion is presented for evaluating the results of the analyses. Thermal analyses were performed for major critical and typical areas. Fatigue analyses of the entire tunnel circuit are presented.
Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M.; Archer, B.; Hendrickson, B.
2015-08-27
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.
Digital tape unit test facility software
NASA Technical Reports Server (NTRS)
Jackson, J. T.
1971-01-01
Two computer programs are described which are used for the collection and analysis of data from the digital tape unit test facility (DTUTF). The data are the recorded results of skew tests made on magnetic digital tapes which are used on computers as input/output media. The results of each tape test are keypunched onto an 80 column computer card. The format of the card is checked and the card image is stored on a master summary tape via the DTUTF card checking and tape updating system. The master summary tape containing the results of all the tape tests is then used for analysis as input to the DTUTF histogram generating system which produces a histogram of skew vs. date for selected data, followed by some statistical analysis of the data.
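A minimal modern sketch of the histogram-generating step is shown below. The 80-column card layout (tape ID, test date, skew reading) is a hypothetical stand-in for the DTUTF format, which the abstract does not specify.

```python
from collections import defaultdict

# Hypothetical fixed-column layout for an 80-column card image:
# cols 0-5 tape ID, cols 6-11 test date (YYMMDD), cols 12-16 skew reading.
def parse_card(card: str):
    if len(card) != 80:
        raise ValueError("card image must be exactly 80 columns")
    return card[0:6].strip(), card[6:12], float(card[12:17])

def skew_histogram(cards):
    """Accumulate skew readings per test date, as the histogram system did."""
    by_date = defaultdict(list)
    for card in cards:
        _, date, skew = parse_card(card)
        by_date[date].append(skew)
    # For each date: (number of tests, mean skew) -- a simple summary statistic.
    return {d: (len(v), sum(v) / len(v)) for d, v in sorted(by_date.items())}

cards = [
    ("T00001" + "710315" + " 1.25").ljust(80),
    ("T00001" + "710315" + " 1.40").ljust(80),
    ("T00002" + "710316" + " 0.95").ljust(80),
]
print(skew_histogram(cards))  # {'710315': (2, 1.325), '710316': (1, 0.95)}
```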
Technician Program Uses Advanced Instruments.
ERIC Educational Resources Information Center
Stinson, Stephen
1981-01-01
Describes various aspects of a newly-developed computer-assisted drafting/computer-assisted manufacture (CAD/CAM) facility in the chemical engineering technology department at Broome Community College, Binghamton, New York. Stresses the use of new instruments such as microcomputers and microprocessor-equipped instruments. (CS)
FAA computer security : concerns remain due to personnel and other continuing weaknesses
DOT National Transportation Integrated Search
2000-08-01
FAA has a history of computer security weaknesses in a number of areas, including its physical security management at facilities that house air traffic control (ATC) systems, systems security for both operational and future systems, management struct...
42 CFR 441.182 - Maintenance of effort: Computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...
Considering High-Tech Exhibits?
ERIC Educational Resources Information Center
Routman, Emily
1994-01-01
Discusses a variety of high-tech exhibit media used in The Living World, an educational facility operated by The Saint Louis Zoo. Considers the strengths and weaknesses of holograms, video, animatronics, video-equipped microscopes, and computer interactives. Computer interactives are treated with special attention. (LZ)
Helms with computers at HRF rack in Destiny module
2001-05-18
ISS002-E-6288 (18 May 2001) --- Susan J. Helms, Expedition Two flight engineer, works with three laptop computers at the Human Research Facility (HRF) in the U.S. Laboratory. The image was taken with a digital still camera.
Helms with computers at HRF rack in Destiny module
2001-05-18
ISS002-E-6294 (18 May 2001) --- Susan J. Helms, Expedition Two flight engineer, works with three laptop computers at the Human Research Facility (HRF) in the U.S. Laboratory. The image was taken with a digital still camera.
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... legal and accounting services, including the allocation of payroll and overhead expenditures; (4..., ground services and facilities made available to media personnel, including records relating to how costs... explaining the computer system's software capabilities, such as user guides, technical manuals, formats...
The Center for Nanophase Materials Sciences
NASA Astrophysics Data System (ADS)
Lowndes, Douglas
2005-03-01
The Center for Nanophase Materials Sciences (CNMS) located at Oak Ridge National Laboratory (ORNL) will be the first DOE Nanoscale Science Research Center to begin operation, with construction to be completed in April 2005 and initial operations in October 2005. The CNMS' scientific program has been developed through workshops with the national community, with the goal of creating a highly collaborative research environment to accelerate discovery and drive technological advances. Research at the CNMS is organized under seven Scientific Themes selected to address challenges to understanding and to exploit particular ORNL strengths (see http://cnms.ornl.gov). These include extensive synthesis and characterization capabilities for soft, hard, nanostructured, magnetic, and catalytic materials and their composites; neutron scattering at the Spallation Neutron Source and High Flux Isotope Reactor; computational nanoscience in the CNMS' Nanomaterials Theory Institute, utilizing facilities and expertise of the Center for Computational Sciences and the new Leadership Scientific Computing Facility at ORNL; a new CNMS Nanofabrication Research Laboratory; and a suite of unique and state-of-the-art instruments to be made reliably available to the national community for imaging, manipulation, and properties measurements on nanoscale materials in controlled environments. The new research facilities will be described together with the planned operation of the user research program, the latter illustrated by the current "jump start" user program that utilizes existing ORNL/CNMS facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.
1995-04-01
This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.
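The arithmetic behind such airborne source terms is often expressed with the standard DOE five-factor formula (source term = MAR x DR x ARF x RF x LPF). The sketch below shows that formula under invented input values; it is offered as context, not as the WM PEIS computational framework itself.

```python
def source_term(mar, dr, arf, rf, lpf):
    """
    Standard DOE five-factor airborne source term (DOE-HDBK-3010 style):
      MAR - material at risk (e.g., grams or curies)
      DR  - damage ratio (fraction of MAR affected by the accident)
      ARF - airborne release fraction
      RF  - respirable fraction
      LPF - leak path factor (fraction escaping the facility)
    """
    return mar * dr * arf * rf * lpf

# Illustrative values only, not taken from the WM PEIS analyses.
print(source_term(mar=1.0e3, dr=0.1, arf=1.0e-3, rf=0.5, lpf=0.1))  # 0.005
```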
Local storage federation through XRootD architecture for interactive distributed analysis
NASA Astrophysics Data System (ADS)
Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.
2015-12-01
A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites with the aim to create a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data intensive analyses, is the main challenge of these facilities. One of the crucial aspects of the user-driven analysis execution is data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome this limitation a federated infrastructure, which provides full access to all the data belonging to the federation independently of the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies in order to provide a dynamic, easy-to-use and well-performing solution for data handling. It should allow users to store files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.
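As a sketch of what federated access looks like from the user side, the snippet below copies a file through a federation redirector with the standard xrdcp client; the redirector hostname and file path are hypothetical.

```python
import subprocess

# Hypothetical federation redirector host; the real endpoint depends on the
# deployment described in the paper.
REDIRECTOR = "vaf-redirector.example.it"

def fetch(lfn: str, dest: str) -> None:
    """Copy a file through the XRootD federation using the xrdcp client.

    The redirector resolves the logical file name to whichever federated
    site actually holds (or caches) a replica, so the caller never needs
    to know where the data physically lives.
    """
    url = f"root://{REDIRECTOR}/{lfn}"  # lfn is an absolute path, e.g. /alice/...
    subprocess.run(["xrdcp", url, dest], check=True)

fetch("/alice/data/2015/run123/AliAOD.root", "/tmp/AliAOD.root")
```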
ERA 1103 UNIVAC 2 Calculating Machine
1955-09-21
The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high-tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch-card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer, the lab's first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE), which converted test data to binary-coded decimal numbers, and the Digital Automated Multiple Pressure Recorder (DAMPR), which recorded test pressures automatically. The systems primarily served the 10-by 10 but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the original 1103 design for the Navy in the late 1940s; in 1952 the company introduced a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and one of the first commercially successful scientific computers.
Abstracts of Research, July 1975-June 1976.
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Computer and Information Science Research Center.
Abstracts of research papers in computer and information science are given for 62 papers in the areas of information storage and retrieval; computer facilities; information analysis; linguistic analysis; artificial intelligence; information processes in physical, biological, and social systems; mathematical techniques; systems programming;…
2016-05-05
Following a naming dedication ceremony May 5, 2016 - the 55th anniversary of Alan Shepard's historic rocket launch - NASA Langley Research Center's newest building is known as the Katherine G. Johnson Computational Research Facility, honoring the "human computer" who successfully calculated the trajectories for America's first space flights.
Multi-year Content Analysis of User Facility Related Publications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Robert M; Stahl, Christopher G; Hines, Jayson
2013-01-01
Scientific user facilities provide resources and support that enable scientists to conduct experiments or simulations pertinent to their respective research. Consequently, it is critical to have an informed understanding of the impact and contributions that these facilities have on scientific discoveries. Leveraging insight into scientific publications that acknowledge the use of these facilities enables more informed decisions by facility management and sponsors in regard to policy, resource allocation, and the direction of science, and supports a more effective understanding of the impact of a scientific user facility. This work discusses preliminary results of mining scientific publications that utilized resources at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL). These results show promise in identifying and leveraging multi-year trends and providing a higher resolution view of the impact that a scientific user facility may have on scientific discoveries.
The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuess, S.; Garzoglio, G.; Holzman, B.
The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the "elastic" provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores, a 25 percent increase, in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.
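A toy version of the kind of cost-aware decision the Decision Engine automates might look like the following; the pool names, prices, and threshold are invented, and the real component weighs many more factors (interruption risk, data locality, budgets).

```python
# Invented spot pools; a real decision engine would also weigh interruption
# risk, data locality, and budget caps.
SPOT_POOLS = [
    {"name": "us-east-1a.m4.4xlarge",  "cores": 16, "price_per_hour": 0.14},
    {"name": "us-east-1b.c4.8xlarge",  "cores": 36, "price_per_hour": 0.38},
    {"name": "us-west-2a.m4.10xlarge", "cores": 40, "price_per_hour": 0.52},
]

def provision(cores_needed: int, max_price_per_core_hour: float) -> list:
    """Rent instances from the cheapest eligible spot pool until demand is met."""
    eligible = [p for p in SPOT_POOLS
                if p["price_per_hour"] / p["cores"] <= max_price_per_core_hour]
    if not eligible:
        return []  # too expensive right now; leave the demand in the local queue
    best = min(eligible, key=lambda p: p["price_per_hour"] / p["cores"])
    n_instances = -(-cores_needed // best["cores"])  # ceiling division
    return [best["name"]] * n_instances

plan = provision(cores_needed=7300, max_price_per_core_hour=0.012)
print(f"{len(plan)} instances of {plan[0]}" if plan else "stay local")
```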
Challenges in scaling NLO generators to leadership computers
NASA Astrophysics Data System (ADS)
Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.
2017-10-01
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
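The basic rank-partitioning idea behind such scaling can be sketched with mpi4py as below; the generate() stub stands in for a call into an actual generator such as Sherpa, whose real integration interface is not shown in the abstract.

```python
# Sketch of rank-partitioned event generation: each MPI rank produces an
# independent slice of the requested events with its own random seed.
from mpi4py import MPI

def generate(n_events: int, seed: int) -> int:
    """Stand-in for a generator run; returns the number of events produced."""
    return n_events  # a real implementation would invoke the generator here

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_EVENTS = 1_000_000
# Spread the total evenly; the first (TOTAL % size) ranks take one extra event.
per_rank = TOTAL_EVENTS // size + (1 if rank < TOTAL_EVENTS % size else 0)
produced = generate(per_rank, seed=42 + rank)  # unique seed per rank

total = comm.reduce(produced, op=MPI.SUM, root=0)
if rank == 0:
    print(f"generated {total} events across {size} ranks")
```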
Workflow Management Systems for Molecular Dynamics on Leadership Computers
NASA Astrophysics Data System (ADS)
Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu
Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
Multi-objective reverse logistics model for integrated computer waste management.
Ahluwalia, Poonam Khanijo; Nema, Arvind K
2006-12-01
This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle, and disposal) and for allocating waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrative example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly with a marginal increase in cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste, which are presently being managed using rudimentary methods of reuse, recovery, and disposal by various small-scale vendors.
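A heavily simplified version of the cost/risk allocation model can be written as a weighted-sum linear program, for example with the PuLP library; the sources, facilities, coefficients, and the continuous (rather than integer) variables are illustrative simplifications of the authors' integer model.

```python
# Minimal weighted-sum sketch of the cost/risk tradeoff; all data invented.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

sources = {"S1": 120, "S2": 80}                 # tonnes of computer waste
facilities = {"recycle": 150, "landfill": 100}  # capacities in tonnes
cost = {("S1", "recycle"): 40, ("S1", "landfill"): 25,
        ("S2", "recycle"): 35, ("S2", "landfill"): 30}
risk = {("S1", "recycle"): 1.0, ("S1", "landfill"): 4.0,
        ("S2", "recycle"): 1.5, ("S2", "landfill"): 3.5}
w = 10.0  # weight converting risk units into cost units (the tradeoff knob)

x = {(s, f): LpVariable(f"x_{s}_{f}", lowBound=0)
     for s in sources for f in facilities}

prob = LpProblem("computer_waste", LpMinimize)
prob += lpSum((cost[k] + w * risk[k]) * x[k] for k in x)  # combined objective
for s, qty in sources.items():
    prob += lpSum(x[s, f] for f in facilities) == qty  # all waste allocated
for f, cap in facilities.items():
    prob += lpSum(x[s, f] for s in sources) <= cap     # capacity limits
prob.solve()
print({k: value(v) for k, v in x.items()})
```

Sweeping the weight w and re-solving traces out the cost-versus-risk tradeoff curve the abstract describes.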
ERIC Educational Resources Information Center
Roach, Ronald
2005-01-01
The Joint Educational Facilities Inc. (JEF) computer science program has as its goal to acquaint minority and socially disadvantaged K-12 students with computer science basics and the innovative subdisciplines within the field, and to reinforce the college ambitions of participants or help them consider college as an option. A non-profit…
Generation and physical characteristics of the ERTS MSS system corrected computer compatible tapes
NASA Technical Reports Server (NTRS)
Thomas, V. L.
1973-01-01
The generation and format are discussed of the ERTS system corrected multispectral scanner computer compatible tapes. The discussion includes spacecraft sensors, scene characteristics, data transmission, and conversion of data to computer compatible tapes at the NASA Data Processing Facility. Geometeric and radiometric corrections, tape formats, and the physical characteristics of the tapes are also included.
Ways of achieving continuous service from computers
NASA Technical Reports Server (NTRS)
Quinn, M. J., Jr.
1974-01-01
This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.
A free-piston Stirling engine/linear alternator controls and load interaction test facility
NASA Technical Reports Server (NTRS)
Rauch, Jeffrey S.; Kankam, M. David; Santiago, Walter; Madi, Frank J.
1992-01-01
A test facility at LeRC was assembled for evaluating free-piston Stirling engine/linear alternator control options, and interaction with various electrical loads. This facility is based on a 'SPIKE' engine/alternator. The engine/alternator, a multi-purpose load system, a digital computer based load and facility control, and a data acquisition system with both steady-periodic and transient capability are described. Preliminary steady-periodic results are included for several operating modes of a digital AC parasitic load control. Preliminary results on the transient response to switching a resistive AC user load are discussed.
Chong, Mei Chan; Francis, Karen; Cooper, Simon; Abdullah, Khatijah Lim; Hmwe, Nant Thin Thin; Sohod, Salina
2016-01-01
Continuous nursing education (CNE) courses delivered through e-learning are believed to be an effective mode of learning for nurses. Implementation of e-learning modules requires pre-assessment of infrastructure and learners' characteristics. Understanding learners' needs and perspectives facilitates effective e-learning delivery by addressing underlying issues and providing necessary support to learners. The aim of this study was to examine access to computer and Internet facilities, interest in and preferences regarding e-learning, and attitudes toward e-learning among nurses in Peninsular Malaysia. The study utilized a cross-sectional descriptive survey of 300 registered nurses at government hospitals and community clinics in four main regions of Peninsular Malaysia. Data were collected using questionnaires, which consisted of demographic and background items and questions on access to computer and Internet facilities, interest and preferences in e-learning, and attitudes toward e-learning. Descriptive analysis and a chi-squared test were used to identify associations between variables. Most Malaysian nurses had access to a personal or home computer (85.3%, n=256) and computer access at work (85.3%, n=256). The majority had Internet access at home (84%, n=252) and at work (71.8%, n=215); however, average hours of weekly computer use were low. Most nurses (83%, n=249) had no e-learning experience but were interested in e-learning activities. Most nurses displayed positive attitudes toward e-learning. Average weekly computer use and interest in e-learning were positively associated with attitudes toward e-learning. Study findings suggest that organizational support is needed to promote accessibility of information and communications technology (ICT) facilities for Malaysian nurses to motivate their involvement in e-learning. Copyright © 2015. Published by Elsevier Ltd.
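The chi-squared test of association mentioned above can be reproduced on an invented contingency table with scipy, as in the sketch below; the cell counts are made up and do not come from the study.

```python
# Chi-squared test of association between weekly computer use and attitude
# toward e-learning, on an invented 2x2 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

#                 positive  negative   (attitude toward e-learning)
table = np.array([[130,      40],      # high weekly computer use
                  [ 80,      50]])     # low weekly computer use

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # p < 0.05 => association
```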
The Practical Obstacles of Data Transfer: Why researchers still love scp
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, or else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low enough to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth-generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites, which is amenable to multiple data transfer streams and high throughput, in practice researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of multi-stream tools such as GridFTP and bbcp, and they contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE-supported computing facilities, including both leadership computing facilities, connected through ESnet. We present observed transfer rates, suggest optimizations, and discuss the obstacles the tools must overcome to receive widespread adoption over scp.
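A back-of-the-envelope model shows why multi-stream tools beat scp: a single TCP stream is often limited to a small fraction of a site's share of a 100 Gbps backbone. All rates in the sketch below are illustrative assumptions, not measurements from the study.

```python
# Toy throughput model: effective rate grows with the number of parallel
# streams until it saturates the site's cap. Rates are assumptions.
def transfer_hours(size_tb: float, streams: int,
                   per_stream_gbps: float = 0.5,
                   site_cap_gbps: float = 10.0) -> float:
    rate = min(streams * per_stream_gbps, site_cap_gbps)  # Gbps
    return size_tb * 8e3 / rate / 3600                    # TB -> Gb, then hours

print(f"scp  (1 stream):  {transfer_hours(10, 1):6.1f} h")   # ~44 h
print(f"bbcp (8 streams): {transfer_hours(10, 8):6.1f} h")   # ~ 6 h
```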
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-23
... Federal share) IMD and other mental health facility DSH expenditures applicable to the State's FY 1995 DSH... State's total computable DSH expenditures attributable to the FY 1995 DSH allotment for mental health... DSH expenditures (mental health facility plus inpatient hospital) applicable to the FY 1995 DSH...
Grids and clouds in the Czech NGI
NASA Astrophysics Data System (ADS)
Kundrát, Jan; Adam, Martin; Adamová, Dagmar; Chudoba, Jiří; Kouba, Tomáš; Lokajíček, Miloš; Mikula, Alexandr; Říkal, Václav; Švec, Jan; Vohnout, Rudolf
2016-09-01
There are several infrastructure operators within the Czech Republic NGI (National Grid Initiative) which provide users with access to high-performance computing facilities over a grid and cloud interface. This article focuses on those where the primary author has personal first-hand experience. We cover some operational issues as well as the history of these facilities.
NASA Marshall Space Flight Center solar observatory report, January - June 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1993-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during January-June 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, July - October 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during June-October 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, March - May 1994
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during March-May 1994. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
Power source evaluation capabilities at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doughty, D.H.; Butler, P.C.
1996-04-01
Sandia National Laboratories maintains one of the most comprehensive power source characterization facilities in the U.S. National Laboratory system. This paper describes the capabilities for evaluation of fuel cell technologies. The facility has a rechargeable battery test laboratory and a test area for performing nondestructive and functional computer-controlled testing of cells and batteries.
BIBLIO: A Computer System Designed to Support the Near-Library User Model of Information Retrieval.
ERIC Educational Resources Information Center
Belew, Richard K.; Holland, Maurita Peterson
1988-01-01
Description of the development of the Information Exchange Facility, a prototype microcomputer-based personal bibliographic facility, covers software selection, user selection, overview of the system, and evaluation. The plan for an integrated system, BIBLIO, and the future role of libraries are discussed. (eight references) (MES)
National remote computational flight research facility
NASA Technical Reports Server (NTRS)
Rediess, Herman A.
1989-01-01
The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.
DOT National Transportation Integrated Search
1978-05-01
The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...
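A toy Monte Carlo in the spirit of the UDCM is sketched below: sample whether an outage occurs, sample its duration, and convert the resulting backed-up operations into a delay cost. All rates and costs are invented and the queueing model is deliberately crude.

```python
# Toy Monte Carlo of daily user delay cost from facility outages.
import random

def simulate_day(outage_prob=0.05, mean_outage_min=45.0,
                 delayed_ops_per_min=0.8, delay_cost_per_min=1.5) -> float:
    """Return one sampled day's total user delay cost (arbitrary units)."""
    if random.random() > outage_prob:
        return 0.0  # no outage today
    duration = random.expovariate(1.0 / mean_outage_min)
    # Crude queueing assumption: operations back up linearly during the
    # outage, each waiting on average half the outage duration.
    delayed_minutes = delayed_ops_per_min * duration * duration / 2
    return delayed_minutes * delay_cost_per_min

random.seed(1)
days = 10_000
print(sum(simulate_day() for _ in range(days)) / days)  # mean daily cost
```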
Experimental investigation of nozzle/plume aerodynamics at hypersonic speeds
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Cambier, Jean-Luc; Papadopoulos, Perikles
1994-01-01
Much of the work involved the Ames 16-Inch Shock Tunnel facility. The facility was reactivated and upgraded, a data acquisition system was configured and upgraded several times, several facility calibrations were performed, and test entries were performed with a wedge model and with a full scramjet combustor model, both with hydrogen injection. Extensive CFD modeling of the flow in the facility was done. This included modeling of the unsteady flow in the driver and driven tubes and steady flow modeling of the nozzle flow. Other modeling efforts included simulations of non-equilibrium flows and turbulence, plasmas, light gas guns, and the use of non-ideal gas equations of state. New experimental techniques to improve the performance of gas guns, shock tubes and tunnels, and scramjet combustors were conceived and studied computationally. Ways to improve scramjet engine performance using steady and pulsed detonation waves were also studied computationally. A number of studies were performed on the operation of the ram accelerator, including investigations of in-tube gasdynamic heating and the use of high explosives to raise the velocity capability of the device.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Cabell, Karen F.; Passe, Bradley J.; Baurle, Robert A.
2017-01-01
Computational fluid dynamics analyses and experimental data are presented for the Mach 6 facility nozzle used in the Arc-Heated Scramjet Test Facility for the Enhanced Injection and Mixing Project (EIMP). This project, conducted at the NASA Langley Research Center, aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics relevant to flight Mach numbers greater than 8. The EIMP experiments use a two-dimensional Mach 6 facility nozzle to provide the high-speed air simulating the combustor entrance flow of a scramjet engine. Of interest are the physical extent and the thermodynamic properties of the core flow at the nozzle exit plane. The detailed characterization of this flow is obtained from three-dimensional, viscous, Reynolds-averaged simulations. Thermodynamic nonequilibrium effects are also investigated. The simulations are compared with the available experimental data, which includes wall static pressures as well as in-stream static pressure, pitot pressure and total temperature obtained via in-stream probes positioned just downstream of the nozzle exit plane.
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Thermal-mechanical fatigue test apparatus for metal matrix composites and joint attachments
NASA Technical Reports Server (NTRS)
Westfall, L. J.; Petrasek, D. W.
1985-01-01
Two thermal-mechanical fatigue (TMF) test facilities were designed and developed, one to test tungsten fiber reinforced metal matrix composite specimens at temperatures up to 1430C (2600F) and another to test composite/metal attachment bond joints at temperatures up to 760C (1400F). The TMF facility designed for testing tungsten fiber reinforced metal matrix composites permits test specimen temperature excursions from room temperature to 1430C (2600F) with controlled heating and loading rates. A strain-measuring device measures the strain in the test section of the specimen during each heating and cooling cycle with superimposed loads. Data are collected and recorded by a computer. The second facility is designed to test composite/metal attachment bond joints and to permit heating to a maximum temperature of 760C (1400F) within 10 min and cooling to 150C (300F) within 3 min. A computer controls specimen temperature and load cycling.
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.
Computational study of radiation doses at UNLV accelerator facility
NASA Astrophysics Data System (ADS)
Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel
2017-09-01
A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.
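For orientation, a first-order point-source estimate of the kind such Monte Carlo studies refine is sketched below: inverse-square falloff plus exponential attenuation through a shield wall. The source strength, attenuation coefficient, and geometry are assumptions, and the sketch deliberately ignores buildup and scatter, which is precisely why codes like MCNPX are used instead.

```python
# First-order shielded dose-rate estimate; all inputs are assumptions.
import math

def dose_rate(s_usv_h_at_1m: float, distance_m: float,
              mu_per_cm: float, wall_cm: float) -> float:
    """Dose rate (uSv/h) behind a shield, ignoring buildup and scatter."""
    return s_usv_h_at_1m / distance_m**2 * math.exp(-mu_per_cm * wall_cm)

# e.g. a source giving 5e4 uSv/h at 1 m, viewed from 6 m behind 60 cm of
# concrete with an assumed effective attenuation coefficient of 0.04 /cm.
print(dose_rate(5e4, 6.0, 0.04, 60.0))  # ~126 uSv/h
```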
NASA Technical Reports Server (NTRS)
Macdonald, G.
1983-01-01
A prototype Air Traffic Control facility and a multiman flight simulator facility were designed, and one of the component simulators was fabricated as a proof of concept. The facility was designed to provide a number of independent simple simulator cabs that would have the capability of some local, stand-alone processing that would in turn interface with a larger host computer. The system can accommodate up to eight flight simulators (commercially available instrument trainers), which could be operated stand-alone if no graphics were required or could operate in a common simulated airspace if connected to the host computer. A proposed addition to the original design is the capability of inputting pilot inputs and quantities displayed on the flight and navigation instruments to the microcomputer when the simulator operates in the stand-alone mode, to allow independent use of these commercially available instrument trainers for research. The conceptual design of the system and progress made to date on its implementation are described.
Thermal-mechanical fatigue test apparatus for metal matrix composites and joint attachments
NASA Technical Reports Server (NTRS)
Westfall, Leonard J.; Petrasek, Donald W.
1988-01-01
Two thermal-mechanical fatigue (TMF) test facilities were designed and developed, one to test tungsten fiber reinforced metal matrix composite specimens at temperatures up to 1430C (2600F) and another to test composite/metal attachment bond joints at temperatures up to 760C (1400F). The TMF facility designed for testing tungsten fiber reinforced metal matrix composites permits test specimen temperature excursions from room temperature to 1430C (2600F) with controlled heating and loading rates. A strain-measuring device measures the strain in the test section of the specimen during each heating and cooling cycle with superimposed loads. Data are collected and recorded by a computer. The second facility is designed to test composite/metal attachment bond joints and to permit heating to a maximum temperature of 760C (1400F) within 10 min and cooling to 150C (300F) within 3 min. A computer controls specimen temperature and load cycling.
Arjomandi, Mehrdad; Seward, James; Gotway, Michael B.; Nishimura, Stephen; Fulton, George P.; Thundiyil, Josef; King, Talmadge E.; Harber, Philip; Balmes, John R.
2012-01-01
Objective: To study the prevalence of beryllium sensitization (BeS) and chronic beryllium disease (CBD) in a cohort of workers from a nuclear weapons research and development facility. Methods: We evaluated 50 workers with BeS with medical and occupational histories, physical examination, chest imaging with high-resolution computed tomography (N = 49), and pulmonary function testing. Forty of these workers also underwent bronchoscopy for bronchoalveolar lavage and transbronchial biopsies. Results: The mean duration of employment at the facility was 18 years and the mean latency (from first possible exposure) to time of evaluation was 32 years. Five of the workers had CBD at the time of evaluation (based on histology or high-resolution computed tomography); three others had evidence of probable CBD. Conclusions: These workers with BeS, characterized by a long duration of potential Be exposure and a long latency, had a low prevalence of CBD. PMID:20523233
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
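The lightweight MPI wrapper idea can be sketched with mpi4py as below: each rank runs one single-threaded payload so that a multi-core batch allocation is filled with independent jobs. The payload command and file naming are placeholders, not PanDA's actual pilot interface.

```python
# Minimal MPI wrapper: one single-threaded payload per rank.
import subprocess
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

# Each rank processes its own input slice and writes its own output file;
# the script name and flags are hypothetical stand-ins.
cmd = ["./run_payload.sh", f"--input=events_{rank:04d}.dat",
       f"--output=out_{rank:04d}.root"]
subprocess.run(cmd, check=False)  # one payload failing should not kill the job

MPI.COMM_WORLD.Barrier()  # wait for every payload before staging data out
if rank == 0:
    print("all payloads finished; ready for stage-out")
```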
Microcosm to Cosmos: The Growth of a Divisional Computer Network
Johannes, R.S.; Kahane, Stephen N.
1987-01-01
In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems Omninet®. Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment ethernet project. A rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after the hospital development began under the direction of the Operational and Clinical Systems Division (OCS), the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, Hospital Information Systems & IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design, and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities, and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors), and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2010 CFR
2010-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2011 CFR
2011-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2012 CFR
2012-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
Computer Self-Efficacy among Health Information Students
ERIC Educational Resources Information Center
Hendrix, Dorothy Marie
2011-01-01
Roles and functions of health information professionals are evolving due to the mandated electronic health record adoption process for healthcare facilities. A knowledgeable workforce with computer information technology skill sets is required for the successful collection of quality patient-care data, improvement of productivity, and…
Computer program determines performance efficiency of remote measuring systems
NASA Technical Reports Server (NTRS)
Merewether, E. K.
1966-01-01
Computer programs control and evaluate instrumentation system performance for numerous rocket engine test facilities and prescribe calibration and maintenance techniques to maintain the systems within process specifications. Similar programs can be written for other test equipment in an industry such as the petrochemical industry.
Electromagnetic Induction: A Computer-Assisted Experiment
ERIC Educational Resources Information Center
Fredrickson, J. E.; Moreland, L.
1972-01-01
By using minimal equipment it is possible to demonstrate Faraday's Law. An electronic desk calculator enables sophomore students to solve a difficult mathematical expression for the induced EMF. Polaroid pictures of the plot of induced EMF, together with the computer facility, enable students to make comparisons. (PS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim
2007-04-15
X-ray buildup factors of lead in broad-beam geometry for energies from 15 to 150 keV are determined using the general-purpose Monte Carlo N-Particle radiation transport computer code (MCNP4C). The buildup factor data obtained are fitted to a modified three-parameter Archer et al. model for ease of computing broad-beam transmission at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad-beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad-beam transmission is compared to data derived from the literature, showing good agreement. Therefore, the combination of the buildup factor data as determined and a mathematical model to generate x-ray spectra provides a computationally based solution to broad-beam transmission for lead barriers in shielding x-ray facilities.
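For reference, the three-parameter Archer et al. transmission model is commonly written in the form below, where alpha, beta, and gamma are the parameters fitted for a given tube potential and filtration and x is the barrier thickness; this is the standard form of the model, quoted for context rather than from this paper.

```latex
% Broad-beam transmission in the three-parameter Archer model:
T(x) = \left[\left(1 + \frac{\beta}{\alpha}\right) e^{\alpha \gamma x}
       - \frac{\beta}{\alpha}\right]^{-1/\gamma}
```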
ASC FY17 Implementation Plan, Rev. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, P. G.
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and ultimately spacecraft weight.
POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.
Dayringer, H E; Sammons, S A
1991-04-01
Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.
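One representative operation such a simulated-cloning tool must support, locating restriction sites in a construct, is sketched below; the enzyme table is abbreviated and the function is illustrative, not POLLUX's actual interface.

```python
# Locate restriction-enzyme recognition sites in a DNA construct.
ENZYMES = {"EcoRI": "GAATTC", "BamHI": "GGATCC"}  # abbreviated table

def find_sites(sequence: str, enzyme: str):
    """Return 0-based positions where the enzyme's recognition site occurs."""
    site, positions, start = ENZYMES[enzyme], [], 0
    while (i := sequence.find(site, start)) != -1:
        positions.append(i)
        start = i + 1  # allow overlapping occurrences
    return positions

construct = "TTGAATTCAAGGATCCTTGAATTC"
print(find_sites(construct, "EcoRI"))   # [2, 18]
print(find_sites(construct, "BamHI"))   # [10]
```

Simulated cloning then amounts to cutting the sequence at these positions and re-joining fragments, with each step recorded in the construct database.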
Computational model of gamma irradiation room at ININ
NASA Astrophysics Data System (ADS)
Rodríguez-Romo, Suemi; Patlan-Cardoso, Fernando; Ibáñez-Orozco, Oscar; Vergara Martínez, Francisco Javier
2018-03-01
In this paper, we present a model of the gamma irradiation room at the National Institute of Nuclear Research (ININ is its acronym in Spanish) in Mexico to improve the use of physics in dosimetry for human protection. We deal with air-filled ionization chambers and in-house scientific computing, framed both in the GEANT4 scheme and in our analytical approach, to characterize the irradiation room. This room is the only secondary dosimetry facility in Mexico. Our aim is to optimize its experimental designs, facilities, and industrial applications of physical radiation. The computational results provided by our model are supported by all the known experimental data regarding the performance of the ININ gamma irradiation room and allow us to predict the values of the main variables related to this fully enclosed space within an acceptable margin of error.
Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
A facility for training Space Station astronauts
NASA Technical Reports Server (NTRS)
Hajare, Ankur R.; Schmidt, James R.
1992-01-01
The Space Station Training Facility (SSTF) will be the primary facility for training the Space Station Freedom astronauts and the Space Station Control Center ground support personnel. Conceptually, the SSTF will consist of two parts: a Student Environment and an Author Environment. The Student Environment will contain trainers, instructor stations, computers and other equipment necessary for training. The Author Environment will contain the systems that will be used to manage, develop, integrate, test and verify, operate and maintain the equipment and software in the Student Environment.
Control System Upgrade for a Mass Property Measurement Facility
NASA Technical Reports Server (NTRS)
Chambers, William; Hinkle, R. Kenneth (Technical Monitor)
2002-01-01
The Mass Property Measurement Facility (MPMF) at the Goddard Space Flight Center has undergone modifications to ensure the safety of flight payloads and the measurement facility. The MPMF has been technically updated to improve reliability and increase the accuracy of the measurements. Modifications include the replacement of outdated electronics with a computer-based software control system, the addition of a secondary gas supply in case of a catastrophic failure of the primary gas supply, and a motor-controlled emergency stopping feature instead of a hard stop.
NASA Technical Reports Server (NTRS)
Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.
1988-01-01
The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands from a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI Explorer LX and two Gould SEL 32/27s.
Description of the Spacecraft Control Laboratory Experiment (SCOLE) facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.; Rallo, Rosemary A.
1987-01-01
A laboratory facility for the study of control laws for large flexible spacecraft is described. The facility fulfills the requirements of the Spacecraft Control Laboratory Experiment (SCOLE) design challenge for a laboratory experiment, which will allow slew maneuvers and pointing operations. The structural apparatus is described in detail sufficient for modelling purposes. The sensor and actuator types and characteristics are described so that identification and control algorithms may be designed. The control implementation computer and real-time subroutines are also described.
Operation of the 25kW NASA Lewis Research Center Solar Regenerative Fuel Cell Testbed Facility
NASA Technical Reports Server (NTRS)
Moore, S. H.; Voecks, G. E.
1997-01-01
Assembly of the NASA Lewis Research Center (LeRC) Solar Regenerative Fuel Cell (RFC) Testbed Facility has been completed, and system testing has proceeded. This facility integrates two 25kW photovoltaic solar cell arrays, a 25kW proton exchange membrane (PEM) electrolysis unit, four 5kW PEM fuel cells, high-pressure hydrogen and oxygen storage vessels, high-purity water storage containers, and computer monitoring, control, and data acquisition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doss, E.D.; Sikes, W.C.
1992-09-01
This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena that may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.
Scaling and entropy in p-median facility location along a line
NASA Astrophysics Data System (ADS)
Gastner, Michael T.
2011-09-01
The p-median problem is a common model for optimal facility location. The task is to place p facilities (e.g., warehouses or schools) in a heterogeneously populated space such that the average distance from a person's home to the nearest facility is minimized. Here we study the special case where the population lives along a line (e.g., a road or a river). If facilities are optimally placed, the length of the line segment served by a facility is inversely proportional to the square root of the population density. This scaling law is derived analytically and confirmed for concrete numerical examples of three US interstate highways and the Mississippi River. If facility locations are permitted to deviate from the optimum, the number of possible solutions increases dramatically. Using Monte Carlo simulations, we compute how scaling is affected by an increase in the average distance to the nearest facility. We find that the scaling exponents change and are most sensitive near the optimum facility distribution.
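The square-root law quoted above follows from a short continuum argument; the sketch below is a plausible reconstruction consistent with the abstract, not the paper's own derivation. If facilities have local spacing $s(x)$ along a line with population density $\rho(x)$, a resident of a segment of length $s$ is on average $s/4$ from its central facility, so one minimizes

```latex
\bar d \;=\; \frac{1}{P}\int \rho(x)\,\frac{s(x)}{4}\,dx
\qquad \text{subject to} \qquad
\int \frac{dx}{s(x)} \;=\; p
```

where $P$ is the total population and $p$ the number of facilities. Stationarity of the Lagrangian $\int [\rho s/4 + \lambda/s]\,dx$ gives $\rho/4 - \lambda/s^2 = 0$, i.e. $s(x) = \sqrt{4\lambda/\rho(x)} \propto \rho(x)^{-1/2}$, which is the reported inverse-square-root scaling.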
Automated Help System For A Supercomputer
NASA Technical Reports Server (NTRS)
Callas, George P.; Schulbach, Catherine H.; Younkin, Michael
1994-01-01
Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Cloud computing can simplify HIT infrastructure management.
Glaser, John
2011-08-01
Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.
NASA Technical Reports Server (NTRS)
Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.
1992-01-01
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
ERIC Educational Resources Information Center
Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.
2006-01-01
Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…
ERIC Educational Resources Information Center
Lavender, Julie
2013-01-01
Military health care facilities make extensive use of computer-based training (CBT) for both clinical and non-clinical staff. Despite evidence identifying various factors that may impact CBT, the problem is unclear as to what factors specifically influence employee participation in computer-based training. The purpose of this mixed method case…
EBR-II high-ramp transients under computer control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrester, R.J.; Larson, H.A.; Christensen, L.J.
1983-01-01
During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients.
The Influence of Large-Scale Computing on Aircraft Structural Design.
1986-04-01
…the customer in the most cost-effective manner. Computer facility organizations became computer resource power brokers. A good data processing… capabilities generated on other processors can be easily used. This approach is easily implementable and provides a good strategy for using existing… assistance to member nations for the purpose of increasing their scientific and technical potential; recommending effective ways for the member nations to…
Challenges facing developers of CAD/CAM models that seek to predict human working postures
NASA Astrophysics Data System (ADS)
Wiker, Steven F.
2005-11-01
This paper outlines the need for development of human posture prediction models for Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) design applications in product, facility and work design. Challenges facing developers of posture prediction algorithms are presented and discussed.
Integrating Computational Chemistry into the Physical Chemistry Curriculum
ERIC Educational Resources Information Center
Johnson, Lewis E.; Engel, Thomas
2011-01-01
Relatively few undergraduate physical chemistry programs integrate molecular modeling into their quantum mechanics curriculum owing to concerns about limited access to computational facilities, the cost of software, and concerns about increasing the course material. However, modeling exercises can be integrated into an undergraduate course at a…
Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugent, Peter E.; Simonson, J. Michael
2011-10-24
This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery,” held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities. Their charge was to identify current and anticipated issues in the acquisition, analysis, communication and storage of experimental data that could impact the progress of scientific discovery, to ascertain what knowledge, methods and tools are needed to mitigate present and projected shortcomings, and to create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10 TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency to develop new strategies and methods in order to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management (Experiment to Science): identifying and managing the data path from experiment to publication. Theory and Algorithms: recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content. It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: theory and analysis components should be integrated seamlessly within the experimental workflow; new algorithms for data analysis should be developed based on common data formats and toolsets; and analysis should be moved closer to the experiment.
Moving the analysis closer to the experiment enables real-time (in-situ) streaming capabilities, live visualization of the experiment, and an increase in overall experimental efficiency. Data management access and capabilities should be matched with advancements in detectors and sources: remove bottlenecks, provide interoperability across different facilities/beamlines, and apply forefront mathematical techniques to more efficiently extract science from the experiments. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program, in which integrated teams of engineers, scientists and computer scientists are engaged to tackle a complete end-to-end workflow solution at one or more beamlines, to ascertain what challenges will need to be addressed in order to handle future increases in data.
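One minimal way to picture the "move the analysis closer to the experiment" recommendation is a streaming pipeline in which frames are reduced as they arrive rather than stored for later batch analysis. The sketch below is purely illustrative; the detector stream and reduction are invented, and this is not a tool from the workshop:

```python
# Minimal sketch of in-situ streaming analysis: frames are reduced as they
# arrive from the detector instead of being written out and analyzed later.
# The frame source and reduction below are illustrative assumptions.
import numpy as np

def frame_source(n_frames: int, shape=(512, 512)):
    """Stand-in for a detector stream; yields one frame at a time."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        yield rng.poisson(lam=5.0, size=shape)

def reduce_frame(frame: np.ndarray) -> dict:
    """Cheap per-frame reduction done near the experiment."""
    return {"total": int(frame.sum()), "max": int(frame.max())}

def monitor(stream):
    """Live feedback loop: only reduced quantities leave the beamline node."""
    for i, frame in enumerate(stream):
        stats = reduce_frame(frame)
        if stats["max"] > 30:  # e.g., flag a hot pixel or a hit in real time
            print(f"frame {i}: candidate event, {stats}")

if __name__ == "__main__":
    monitor(frame_source(100))
```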
LamLum : a tool for evaluating the financial feasibility of laminated lumber plants
E.M. (Ted) Bilek; John F. Hunt
2006-01-01
A spreadsheet-based computer program called LamLum was created to analyze the economics of value-added laminated lumber manufacturing facilities. Such facilities manufacture laminations, typically from lower grades of structural lumber, then glue these laminations together to make various types of higher-value laminated lumber products. This report provides the…
NASA Marshall Space Flight Center Solar Observatory Report, July to December 1992
NASA Technical Reports Server (NTRS)
Smith, J. E.
1993-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during July-December 1992. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, January - June 1992
NASA Technical Reports Server (NTRS)
Smith, James E.
1992-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during January to June 1992. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
42 CFR 456.657 - Computation of reductions in FFP.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the total number of recipients in that facility at the level of care in question. (2) The fraction... reductions in FFP. (a) For each level of care specified in a provider agreement, and for each quarter for...) For each level of care, the number of recipients who received services in facilities that did not meet...
MaRIE: A facility for time-dependent materials science at the mesoscale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Cris William; Kippen, Karen Elizabeth
To meet new and emerging national security issues the Laboratory is stepping up to meet another grand challenge—transitioning from observing to controlling a material’s performance. This challenge requires the best of experiment, modeling, simulation, and computational tools. MaRIE is the Laboratory’s proposed flagship experimental facility intended to meet the challenge.
2016-11-18
…Healthy Eating Index (HEI) scores were computed. Descriptive and independent t-test analyses were performed pre- to post-HPP implementation (α = 0.05; 80% power). Appendices include HPP THOR3 point-of-service label examples, a demographic and lifestyle survey, and a dining facility satisfaction survey.
Computer applications in remote sensing education
NASA Technical Reports Server (NTRS)
Danielson, R. L.
1980-01-01
Computer applications to instruction in any field may be divided into two broad generic classes: computer-managed instruction and computer-assisted instruction. The division is based on how frequently the computer affects the instructional process and how active a role the computer takes in actually providing instruction. There are no inherent characteristics of remote sensing education to preclude the use of one or both of these techniques, depending on the computer facilities available to the instructor. The characteristics of the two classes are summarized, potential applications to remote sensing education are discussed, and the advantages and disadvantages of computer applications to the instructional process are considered.
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details.
[The Computer Competency of Nurses in Long-Term Care Facilities and Related Factors].
Chang, Ya-Ping; Kuo, Huai-Ting; Li, I-Chuan
2016-12-01
It is important for nurses who work in long-term care facilities (LTCFs) to have an adequate level of computer competency due to the multidisciplinary and comprehensive nature of long-term care services. Thus, it is important to understand the current computer competency of nursing staff in LTCFs and the factors that relate to this competency. The aims were to explore the computer competency of LTCF nurses and to identify the demographic and computer-usage characteristics that relate significantly to computer competency in the LTCF environment. A cross-sectional research design and a self-report questionnaire were used to collect data from 185 nurses working at LTCFs in Taipei. The results found that the variables of the frequency of computer use (β = .33), age (β = -.30), type(s) of software used at work (β = .28), hours of on-the-job training (β = -.14), prior work experience at other LTCFs (β = -.14), and Internet use at home (β = .12) explain 58.0% of the variance in the computer competency of participants. The results of the present study suggest that the following measures may help increase the computer competency of LTCF nurses. (1) Nurses should be encouraged to use electronic nursing records rather than handwritten records. (2) On-the-job training programs should emphasize participant competency in the Excel software package in order to maintain efficient, good-quality LTC services after implementation of the LTC insurance policy.
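For readers unfamiliar with how such standardized (β) coefficients and explained variance are obtained, a minimal sketch follows. The data are synthetic; the predictors loosely mirror the study's variables, but the numbers do not:

```python
# Minimal sketch: standardized (beta) coefficients and R^2 from a multiple
# linear regression, the analysis style reported above. Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 185  # sample size matching the study; predictor values are invented
X = rng.normal(size=(n, 3))           # e.g., frequency of use, age, training hours
y = 0.5 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.8, size=n)

# Standardize, then fit by ordinary least squares: the coefficients are betas.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

y_hat = Xs @ beta
r2 = 1.0 - np.sum((ys - y_hat) ** 2) / np.sum(ys ** 2)
print("standardized betas:", np.round(beta, 2), " R^2:", round(r2, 2))
```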
31 CFR 356.11 - How are bids submitted in an auction?
Code of Federal Regulations, 2012 CFR
2012-07-01
..., our computer time stamp will establish the receipt time. You are bound by your bids after the closing... failures or disruptions of equipment or communications facilities used for participating in Treasury auctions. (4) Submitters are responsible for bids submitted using computer equipment on their premises...
31 CFR 356.11 - How are bids submitted in an auction?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., our computer time stamp will establish the receipt time. You are bound by your bids after the closing... failures or disruptions of equipment or communications facilities used for participating in Treasury auctions. (4) Submitters are responsible for bids submitted using computer equipment on their premises...
Computer-Aided Design Speeds Development of Safe, Affordable, and Efficient
(Photo captions: the Energy Systems Integration Facility's 3-D visualization room, and researchers from industry, academia, national laboratories, and other research institutions; photos by Dennis Schroeder, NREL.) Bringing CAEBAT to the Next Level: CAEBAT teams are now working to…
Final-Approach-Spacing Subsystem For Air Traffic
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Erzberger, Heinz; Bergeron, Hugh
1992-01-01
Automation subsystem of computers, computer workstations, communication equipment, and radar helps air-traffic controllers in terminal radar approach-control (TRACON) facility manage sequence and spacing of arriving aircraft for both efficiency and safety. Called FAST (Final Approach Spacing Tool), subsystem enables controllers to choose among various levels of automation.
Lifelong Learning for the 21st Century.
ERIC Educational Resources Information Center
Goodnight, Ron
The Lifelong Learning Center for the 21st Century was proposed to provide personal renewal and technical training for employees at a major United States automotive manufacturing company when it implemented a new, computer-based Computer Numerical Controlled (CNC) machining, robotics, and high technology facility. The employees needed training for…
34 CFR 607.10 - What activities may and may not be carried out under a grant?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., including the integration of computer technology into institutional facilities to create smart buildings... academic programs or methodology, including computer-assisted instruction, that strengthen the academic... new technology or methodology to increase student success and retention or to retain accreditation; or...
Evaluation of Mobile Authoring and Tutoring in Medical Issues
ERIC Educational Resources Information Center
Alepis, Efthymios; Virvou, Maria
2010-01-01
Mobile computing facilities may provide many assets to the educational process. Mobile technology provides software access from anywhere and at any time, as well as computer equipment independence. The need for time and place independence is even greater for medical instructors and medical students. Medical instructors are usually doctors that…
ERIC Educational Resources Information Center
Swanson, Dewey A.; Phillips, Julie A.
At the Purdue University School of Technology (PST) at Columbus, Indiana, the Total Quality Management (TQM) philosophy was used in the computer laboratories to better meet student needs. A customer satisfaction survey was conducted to gather data on lab facilities, lab assistants, and hardware/software; other sections of the survey included…
Word Processing for Technical Writers and Teachers.
ERIC Educational Resources Information Center
Mullins, Carolyn J.; West, Thomas W.
This discussion of the computing network and word processing facilities available to professionals on the Indiana University campuses identifies the word and text processing needs of technical writers and faculty, describes the current computing network, and outlines both long- and short-range objectives, policies, and plans for meeting these…
Development of an Intelligent Instruction System for Mathematical Computation
ERIC Educational Resources Information Center
Kim, Du Gyu; Lee, Jaemu
2013-01-01
In this paper, we propose the development of a web-based, intelligent instruction system to help elementary school students with mathematical computation. We concentrate on the intelligence facilities which support diagnosis and advice. The existing web-based instruction systems merely give information on whether the learners' replies are…
Directions for Education Building Planning Guidelines. Facility Services Section.
ERIC Educational Resources Information Center
Guenther, Peter
A major problem of accommodating computer technology in today's classrooms is space availability and the general design and construction of most traditional classrooms. This document addresses the types of classroom architectural and interior considerations believed necessary in order to create a more amenable environment for classroom computers.…
34 CFR 607.10 - What activities may and may not be carried out under a grant?
Code of Federal Regulations, 2012 CFR
2012-07-01
..., including the integration of computer technology into institutional facilities to create smart buildings... academic programs or methodology, including computer-assisted instruction, that strengthen the academic... new technology or methodology to increase student success and retention or to retain accreditation; or...
Space lab system analysis: Advanced Solid Rocket Motor (ASRM) communications networks analysis
NASA Technical Reports Server (NTRS)
Ingels, Frank M.; Moorhead, Robert J., II; Moorhead, Jane N.; Shearin, C. Mark; Thompson, Dale R.
1990-01-01
A synopsis of research on computer viruses and computer security is presented. A review of seven technical meetings attended is compiled. A technical discussion on the communication plans for the ASRM facility is presented, with a brief tutorial on the potential local area network media and protocols.
The Role of Wireless Computing Technology in the Design of Schools.
ERIC Educational Resources Information Center
Nair, Prakash
2003-01-01
After briefly describing the educational advantages of wireless networks using mobile computers, discusses the technical, operational, financial aspects of wireless local area networks (WLAN). Provides examples of school facilities designed for the use of WLAN. Includes a glossary of WLAN-related terms. (Contains 12 references.)
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption; this latter problem has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
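The assignment idea can be pictured as a greedy, energy-aware scheduler; the sketch below is a toy illustration under assumed power and capacity figures, not the paper's algorithm:

```python
# Toy sketch of heterogeneity-aware workload assignment: low-demand tasks are
# placed on idle low-power WSN nodes before falling back to the data center.
# Power and capacity figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    watts_per_unit: float      # marginal energy cost per unit of load
    capacity: int              # units of load the node can absorb
    load: int = 0

def assign(tasks: list[int], nodes: list[Node]) -> float:
    """Greedily place each task on the cheapest node with spare capacity."""
    total_energy = 0.0
    for demand in tasks:
        candidates = [n for n in nodes if n.load + demand <= n.capacity]
        best = min(candidates, key=lambda n: n.watts_per_unit)
        best.load += demand
        total_energy += demand * best.watts_per_unit
    return total_energy

nodes = [
    Node("wsn-gateway", watts_per_unit=2.0, capacity=4),
    Node("edge-server", watts_per_unit=10.0, capacity=20),
    Node("datacenter", watts_per_unit=50.0, capacity=10**6),
]
print("energy:", assign([1, 1, 2, 5, 3], nodes))
```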
Langley Aerospace Research Summer Scholars. Part 2
NASA Technical Reports Server (NTRS)
Schwan, Rafaela (Compiler)
1995-01-01
The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, material science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.
Technical Reports: Langley Aerospace Research Summer Scholars. Part 1
NASA Technical Reports Server (NTRS)
Schwan, Rafaela (Compiler)
1995-01-01
The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, material science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
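To convey the flavor of the component/dataflow style described here, the following is a generic sketch in the same spirit; it is not Pyre's actual API, and the component names are invented:

```python
# Generic sketch of a dataflow framework: processing steps are interchangeable
# components wired into a graph, in the spirit of (but not identical to) Pyre.
from typing import Callable, Iterable

class Component:
    """A named processing stage wrapping a function."""
    def __init__(self, name: str, fn: Callable):
        self.name, self.fn = name, fn

    def __call__(self, data):
        print(f"[{self.name}]")
        return self.fn(data)

def pipeline(stages: Iterable[Component], data):
    """Run data through a linear flow graph; stages are swappable objects."""
    for stage in stages:
        data = stage(data)
    return data

# Example: a mock radar chain -- range compression, then detection threshold.
raw = [0.1, 0.9, 0.4, 1.2]
flow = [
    Component("compress", lambda xs: [x * 2 for x in xs]),
    Component("detect", lambda xs: [x for x in xs if x > 1.0]),
]
print(pipeline(flow, raw))
```

Because each stage is an object rather than a hard-coded call, the same graph can later be deployed with stages running as remote services, which is the extension described in the abstract.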
Using high-performance networks to enable computational aerosciences applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1992-01-01
One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Meisner, D. E. (Principal Investigator)
1980-01-01
An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process which require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users can interact with the data at the field office level or in the field itself. General image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.
Review of blunt body wake flows at hypersonic low density conditions
NASA Technical Reports Server (NTRS)
Moss, J. N.; Price, J. M.
1996-01-01
Recent results of experimental and computational studies concerning hypersonic flows about blunted cones, including their near wakes, are reviewed. Attention is focused on conditions where rarefaction effects are present, particularly in the wake. The experiments have been performed for a common model configuration (70 deg spherically-blunted cone) in five hypersonic facilities that encompass a significant range of rarefaction and nonequilibrium effects. Computational studies using direct simulation Monte Carlo (DSMC) and Navier-Stokes solvers have been applied to selected experiments performed in each of the facilities. In addition, computations have been made for typical flight conditions in both Earth and Mars atmospheres, hence for more energetic flows than produced in the ground-based tests. Also, comparisons of DSMC calculations and forebody measurements made for the Japanese Orbital Reentry Experiment (OREX) vehicle (a 50 deg spherically-blunted cone) are presented to bridge the spectrum of ground to flight conditions.
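For readers outside the field, the degree of rarefaction discussed here is conventionally measured by the Knudsen number; this standard definition is added for context and is not a quantity reported in the abstract:

```latex
\mathrm{Kn} \;=\; \frac{\lambda}{L}
```

where $\lambda$ is the molecular mean free path and $L$ a characteristic body length (e.g., the cone base diameter). Continuum (Navier-Stokes) modeling is generally trusted for $\mathrm{Kn} \lesssim 0.01$, while DSMC remains valid through the transitional regime, which is why the two approaches are compared across these facilities.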
The International Conference on Vector and Parallel Computing (2nd)
1989-01-17
Contents include "Computation of the SVD of Bidiagonal Matrices" and "Lattice QCD as a Large Scale Scientific Computation." The Lattice QCD code was vectorized for the IBM 3090 Vector Facility; in addition, elapsed times were reduced by using 3090… Lattice QCD was benchmarked on a large number of computers, including the Cray X-MP and Cray 2 (vector…), with much of the time coming from the wavefront solver routine.
Generation and physical characteristics of the Landsat 1 and 2 MSS computer compatible tapes
NASA Technical Reports Server (NTRS)
Thomas, V. L.
1975-01-01
The generation and format of the Landsat 1 and 2 system-corrected multispectral scanner (MSS) computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tape are also described.
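As a concrete flavor of the radiometric correction step mentioned above, detector counts are typically converted to radiance with a per-band linear model; the sketch below is generic, and the gain and offset values are placeholders, not Landsat MSS calibration constants:

```python
# Generic sketch of radiometric calibration: convert quantized detector counts
# (digital numbers, DN) to at-sensor radiance with a per-band linear model.
# Gain/offset below are placeholders, not actual Landsat MSS constants.
import numpy as np

def dn_to_radiance(dn: np.ndarray, gain: float, offset: float) -> np.ndarray:
    """L = gain * DN + offset, applied per band."""
    return gain * dn.astype(np.float64) + offset

band = np.array([[12, 57], [101, 63]], dtype=np.uint8)  # tiny mock MSS tile
print(dn_to_radiance(band, gain=0.6, offset=1.2))
```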
Device 2E6 (ACMS) Air Combat Maneuvering Simulator Instructor Console Review.
1983-12-01
While the device provides some new features which support training, such as a debrief facility and a computer-based instructor training module, the… (Naval Training Equipment Center, Orlando, FL; in printing). Figure 1: general arrangement (2E6), showing projectors and computer systems. Listed subsystems include: d. instructor stations; e. computer systems; f. target model subsystem; g. debrief subsystem; h…
Human-Computer Interaction and Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler)
1995-01-01
The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing, and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG, and it will process, simulate, and store up to 10% of the total data obtained from the ALICE, ATLAS, and LHCb experiments. In addition, Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of supercomputing resources to LHC computing will notably increase total capacity. In 2014, development of a portal combining the Tier-1 and a supercomputer at Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences, such as biology (genome sequencing analysis) and astrophysics (cosmic ray analysis, antimatter and dark matter searches).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine
The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research, provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27-29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.
Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E.; Blank, Antje
2014-01-01
Background The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. Objective To report an assessment of health providers’ computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. Design A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. Results A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p<0.01). Most (95.3%) had positive attitudes towards computers, with an average score (±SD) of 37.2 (±4.9). Females had significantly lower scores than males. Interviews and group discussions showed that although most were lacking computer knowledge and experience, they were optimistic about overcoming challenges associated with the introduction of computers in their workplace. Conclusions Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology. PMID:25361721
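The associations reported above come from standard chi-squared and one-way ANOVA tests; a minimal sketch of that style of analysis follows (synthetic numbers, not the QUALMAT data):

```python
# Minimal sketch: chi-squared test of independence and one-way ANOVA,
# the two association tests named above. All data are synthetic placeholders.
from scipy import stats

# Chi-squared: computer knowledge level (rows) vs. prior training (columns).
table = [[30, 10],   # illiterate/beginner: no training, training
         [13, 21]]   # intermediate+:       no training, training
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# One-way ANOVA: attitude scores across three education levels.
certificate = [33, 35, 36, 31, 34]
diploma     = [36, 38, 37, 39, 35]
degree      = [40, 41, 38, 42, 39]
f_stat, p_anova = stats.f_oneway(certificate, diploma, degree)
print(f"F={f_stat:.2f}, p={p_anova:.4f}")
```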
NASA Technical Reports Server (NTRS)
Bennett, Jerome (Technical Monitor)
2002-01-01
The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.
NASA Technical Reports Server (NTRS)
Kirk, Lindsay C.; Lillard, Randolph P.; Olejniczak, Joseph; Tanno, Hideyuki
2015-01-01
Computational assessments were performed to size boundary layer trips for a scaled Apollo capsule model in the High Enthalpy Shock Tunnel (HIEST) facility at the JAXA Kakuda Space Center in Japan. For stagnation conditions between 2 MJ/kg and 20 MJ/kg and between 10 MPa and 60 MPa, the appropriate trips were determined to be between 0.2 mm and 1.3 mm high, which provided k/δ values (trip height over boundary layer thickness) on the heatshield from 0.15 to 2.25. The tripped configuration consisted of an insert with a series of diamond-shaped trips along the heatshield downstream of the stagnation point. Surface heat flux measurements were obtained on a 250 mm diameter, 6.4% scale capsule model, and pressure measurements were taken at axial stations along the nozzle walls. At low enthalpy conditions, the computational predictions agree favorably with the test data along the heatshield centerline. However, agreement becomes less favorable as the enthalpy of the test condition increases; the measured surface heat flux on the heatshield in the HIEST facility was under-predicted by the computations in these cases. Both smooth and tripped configurations were tested for comparison, and a post-test computational analysis showed that k/δ values based on the as-measured stagnation conditions ranged between 0.5 and 1.2. Tripped configurations with both 0.6 mm and 0.8 mm trip heights were able to effectively trip the flow to fully turbulent for a range of freestream conditions.
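The sizing logic reduces to keeping the ratio of trip height k to the locally predicted boundary layer thickness δ in a target band; the trivial sketch below illustrates this with invented numbers, not HIEST values:

```python
# Trivial sketch of trip sizing: choose the smallest available trip height k
# whose ratio to the predicted boundary layer thickness delta falls within a
# target band. Heights and deltas below are illustrative, not HIEST values.

def pick_trip_height(delta_mm: float, heights_mm, band=(0.5, 1.2)):
    """Return the smallest k with band[0] <= k/delta <= band[1], else None."""
    lo, hi = band
    for k in sorted(heights_mm):
        if lo <= k / delta_mm <= hi:
            return k
    return None

available = [0.2, 0.4, 0.6, 0.8, 1.0, 1.3]   # candidate trip heights, mm
for delta in (0.6, 1.0, 1.6):                # predicted delta at the trip site
    print(f"delta={delta} mm -> k={pick_trip_height(delta, available)} mm")
```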
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device Connection Machine processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, D.F.; Schroeder, P.R.; Engler, R.M.
This technical note describes procedures for determining the mean hydraulic retention time and efficiency of a confined disposal facility (CDF) from a dye tracer slug test. These parameters are required to properly design a CDF for solids retention and for effluent quality considerations. Detailed information on the conduct and analysis of dye tracer studies can be found in Engineer Manual 1110-2-5027, Confined Dredged Material Disposal. This technical note also documents the DYECON computer program, part of the Automated Dredging and Disposal Alternatives Management System (ADDAMS), which facilitates the analysis of dye tracer concentration data and computes the hydraulic efficiency of a CDF.
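For context, the mean hydraulic retention time is the first moment of the effluent dye concentration curve, and hydraulic efficiency compares it with the theoretical (volume divided by flow rate) retention time. The sketch below illustrates that computation on a synthetic curve; it is not the DYECON program:

```python
# Minimal sketch: mean hydraulic retention time and hydraulic efficiency from
# a dye-tracer concentration time series. Synthetic data; not the DYECON code.
import numpy as np

t = np.linspace(0, 96, 97)                      # hours since dye slug release
c = t * np.exp(-t / 12.0)                       # mock effluent dye curve

mean_hrt = np.trapz(t * c, t) / np.trapz(c, t)  # first moment of the curve
theoretical_hrt = 36.0                          # volume / flow rate, assumed
efficiency = mean_hrt / theoretical_hrt         # 1.0 would be ideal plug flow

print(f"mean HRT = {mean_hrt:.1f} h, hydraulic efficiency = {efficiency:.2f}")
```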
DNET: A communications facility for distributed heterogeneous computing
NASA Technical Reports Server (NTRS)
Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.
1989-01-01
This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides variable-length datagram service with optional return receipts.
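The stream service described is analogous to what a plain TCP socket provides today; the sketch below is offered only as a point of reference in ordinary Python networking, and it is not DNET's interface:

```python
# Point-of-reference sketch: reliable streaming delivery over TCP sockets,
# the class of service DNET provided across dissimilar networks circa 1989.
# This is ordinary Python networking, not DNET's API.
import socket
import threading
import time

def receiver(host: str = "127.0.0.1", port: int = 5050) -> None:
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while chunk := conn.recv(4096):      # stream until sender closes
                print("received:", chunk.decode())

t = threading.Thread(target=receiver, daemon=True)
t.start()
time.sleep(0.2)                                  # let the listener come up

with socket.create_connection(("127.0.0.1", 5050)) as s:
    s.sendall(b"protocol-transparent payload")   # reliable, ordered stream
t.join(timeout=2.0)
```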
Department of Defense In-House RDT and E Activities
1976-10-30
Ballistic tests; facility available for tests of special electronic fire control equipment and related systems and components. Installation: Medical Bioengineering R&D Laboratory… Analysis of chemical and metallographic effects, microbiological effects, and climatic environmental effects; test and evaluation of warheads and special… Communication system, instrumented drop zones, engineering test facility, instrumentation calibration facility, scientific computer center, environmental test…
NASA Marshall Space Flight Center Solar Observatory report, January - June 1990
NASA Technical Reports Server (NTRS)
Smith, James E.
1990-01-01
A description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility is presented and a summary of its observations and data reduction is given. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code. The data are represented by longitudinal contours with azimuth plots.
NASA Marshall Space Flight Center solar observatory
NASA Technical Reports Server (NTRS)
Smith, James E.
1988-01-01
A description is provided of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and a summary is given of its observations and data reduction during Jan. to Mar. 1988. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer center. The data are represented by longitudinal contours with azimuth plots.
CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad Separation Bolt Wedge Tests
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Skokova, Kristina A.
2017-01-01
This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Each panel test article included a metallic separation bolt embedded in Orion compression-pad and heatshield materials, resulting in a circular protuberance over a flat plate. The protuberances produce complex model flowfields, containing shock-shock and shock-boundary layer interactions, and multiple augmented heating regions on the test plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the non-equilibrium flow field in the facility nozzle, test box, and flow field over test articles, and comparisons with the measured calibration data.
CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests
NASA Technical Reports Server (NTRS)
Goekcen, Tahir; Skokova, Kristina A.
2017-01-01
This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Each panel test article included a metallic separation bolt embedded in Orion compression-pad and heatshield materials, resulting in a circular protuberance over a flat plate. The protuberances produce complex model flowfields, containing shock-shock and shock-boundary layer interactions, and multiple augmented heating regions on the test plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles, and comparisons with the measured calibration data.
NASA Technical Reports Server (NTRS)
Redonnet, Stephane; Lockard, David P.; Khorrami, Mehdi R.; Choudhari, Meelan M.
2011-01-01
This paper presents a numerical assessment of acoustic installation effects in the tandem cylinder (TC) experiments conducted in the NASA Langley Quiet Flow Facility (QFF), an open-jet, anechoic wind tunnel. Calculations that couple the Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) of the TC configuration within the QFF are conducted using the CFD simulation results previously obtained at NASA LaRC. The coupled simulations enable the assessment of installation effects associated with several specific features in the QFF facility that may have impacted the measured acoustic signature during the experiment. The CFD-CAA coupling is based on CFD data along a suitably chosen surface, and employs a technique that was recently improved to account for installed configurations involving acoustic backscatter into the CFD domain. First, a CFD-CAA calculation is conducted for an isolated TC configuration to assess the coupling approach, as well as to generate a reference solution for subsequent assessments of QFF installation effects. Direct comparisons between the CFD-CAA calculations associated with the various installed configurations allow the assessment of the effects of each component (nozzle, collector, etc.) or feature (confined vs. free jet flow, etc.) characterizing the NASA LaRC QFF facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, Kevin; Oral, H. Sarp; Atchley, Scott
The Oak Ridge and Argonne Leadership Computing Facilities are both receiving new systems under the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) program. Because they are both part of the INCITE program, applications need to be portable between these two facilities. However, the Summit and Aurora systems will be vastly different architectures, including their I/O subsystems. While both systems will have POSIX-compliant parallel file systems, their Burst Buffer technologies will be different. This difference may pose challenges to application portability between facilities. Application developers need to pay attention to specific burst buffer implementations to maximize code portability.
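One common way to keep such codes portable is to hide the burst buffer behind a thin staging interface and supply one implementation per facility. A hypothetical sketch of that pattern (the class and method names below are invented, not CORAL or vendor APIs):

```python
import shutil
from abc import ABC, abstractmethod

class StagingLayer(ABC):
    """Facility-neutral staging interface the application codes against."""

    @abstractmethod
    def stage_in(self, src: str, dst: str) -> None:
        """Copy input data from the parallel file system into fast storage."""

    @abstractmethod
    def stage_out(self, src: str, dst: str) -> None:
        """Drain results from fast storage back to the parallel file system."""

class PosixCopyStaging(StagingLayer):
    """Fallback that works anywhere a POSIX file system is mounted."""

    def stage_in(self, src, dst):
        shutil.copy(src, dst)

    def stage_out(self, src, dst):
        shutil.copy(src, dst)
```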
Future experimental needs to support applied aerodynamics - A transonic perspective
NASA Technical Reports Server (NTRS)
Gloss, Blair B.
1992-01-01
Advancements in facilities, test techniques, and instrumentation are needed to provide data required for the development of advanced aircraft and to verify computational methods. An industry survey of major users of wind tunnel facilities at Langley Research Center (LaRC) was recently carried out to determine future facility requirements, test techniques, and instrumentation requirements; results from this survey are reflected in this paper. In addition, areas related to transonic testing at LaRC which are either currently being developed or are recognized as needing improvements are discussed.
2012-02-17
Industrial Area Construction: Located 5 miles south of Launch Complex 39, construction of the main buildings -- Operations and Checkout Building, Headquarters Building, and Central Instrumentation Facility -- began in 1963. In 1992, the Space Station Processing Facility was designed and constructed for the pre-launch processing of International Space Station hardware that was flown on the space shuttle. Along with other facilities, the industrial area provides spacecraft assembly and checkout, crew training, computer and instrumentation equipment, hardware preflight testing and preparations, as well as administrative offices. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
The F-18 systems research aircraft facility
NASA Technical Reports Server (NTRS)
Sitz, Joel R.
1992-01-01
To help ensure that new aerospace initiatives rapidly transition to competitive U.S. technologies, NASA Dryden Flight Research Facility has dedicated a systems research aircraft facility. The primary goal is to accelerate the transition of new aerospace technologies to commercial, military, and space vehicles. Key technologies include more-electric aircraft concepts, fly-by-light systems, flush airdata systems, and advanced computer architectures. Future aircraft that will benefit are the high-speed civil transport and the National AeroSpace Plane. This paper describes the systems research aircraft flight research vehicle and outlines near-term programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.
Oak Ridge National Laboratory's (ORNL's) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director's Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution CyberShake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility's leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Kohler, M. D.; Massari, A.; Heaton, T. H.; Guy, R.; Chandy, M.; Bunn, J.; Strand, L.
2014-12-01
The CSN is now in its 3rd year of operation and has expanded to 400 stations in the Los Angeles region. The goal of the network is to produce a map of strong shaking immediately following a major earthquake as a proxy for damage and a guide for first responders. We have also instrumented a number of buildings with the goal of determining the state of health of these structures before and after they have been shaken. In one 15-story structure, our sensors are distributed two per floor and show body waves propagating in the structure after a moderate local earthquake (M4.4 in Encino, CA). Sensors in a 52-story structure, which we plan to instrument with two sensors per floor as well, show the modes of the building (see Figure) down to the fundamental mode at 5 sec, due to a M5.1 earthquake in La Habra, CA. The CSN utilizes a number of technologies that will likely be important in building robust low-cost networks. These include: distributed computing - the sensors themselves are smart sensors that perform the basic detection and size estimation in their onboard computers and send the results immediately (without packetization latency) to the central facility; cloud computing - the central facility is housed in the cloud, which makes it more robust than a local site and gives it expandable computing resources, so that it can operate with minimal resources during quiet times but still exploit a very large computing facility during an earthquake; and low-cost/low-maintenance sensors - the MEMS sensors are capable of staying on scale to +/- 2 g, and can measure events in the Los Angeles Basin as low as magnitude 3.
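The onboard detection alluded to above is typically a short-term-average/long-term-average (STA/LTA) trigger run against the accelerometer stream. A minimal sketch of that style of trigger (illustrative, not the CSN firmware; the window lengths and threshold are assumptions):

```python
import numpy as np

def sta_lta_triggers(accel, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Return sample indices where short-term signal energy jumps well
    above the long-term background level."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    energy = np.asarray(accel, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # guard against division by zero
    return np.flatnonzero(ratio > threshold)
```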
Reducing cooling energy consumption in data centres and critical facilities
NASA Astrophysics Data System (ADS)
Cross, Gareth
Given the rise of our everyday reliance on computers in all walks of life, from checking train times to paying credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home personal computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational operations every second. It is these data centres that allow us to continue our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing processing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more and more data centres being constructed, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat output and therefore requires more cooling. Cooling data centres requires a sizeable energy input, often amounting to many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring that the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations will be undertaken where necessary to determine the level of energy saving, if any, from each proposal. To enable comparison between proposals, any design calculations within this report are undertaken against a notional data facility, nominally considered to require 1000 kW. Refer to Section 2.1, 'Outline of Notional Data Facility for Calculation Purposes', for details of the design conditions and constraints of the energy consumption calculations.
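A convenient yardstick for the kind of savings the report targets is Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. A minimal sketch against the notional 1000 kW facility (the PUE figures here are illustrative assumptions, not values from the report):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

# Against a notional 1000 kW IT load (overhead splits assumed for illustration):
before = pue(total_facility_kw=1800.0, it_load_kw=1000.0)   # PUE 1.8
after = pue(total_facility_kw=1400.0, it_load_kw=1000.0)    # PUE 1.4
overhead_saved_kw = 1800.0 - 1400.0   # 400 kW less cooling/distribution load
```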
Langley Ground Facilities and Testing in the 21st Century
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Kegelman, Jerome T.; Kilgore, William A.
2010-01-01
A strategic approach for retaining and more efficiently operating the essential Langley ground testing facilities in the 21st century is presented. This effort takes advantage of previously completed and ongoing studies at the Agency and national levels. The integrated approach takes into consideration the overall decline in the test business base within the nation and the reduced utilization of each of the Langley facilities with capabilities to test in the subsonic, transonic, supersonic, and hypersonic speed regimes. The strategy accounts for the capability needs to meet the Agency's programmatic requirements and strategic goals and to execute test activities in the most efficient and flexible facility operating structure. The structure currently being implemented at Langley offers the agility to right-size capability and capacity from a national perspective, to accommodate the dynamic nature of testing needs, and to address the influence of existing and emerging analytical tools for design. The paradigm for testing in the retained facilities is to efficiently and reliably provide more accurate, high-quality test results at an affordable cost, to support design information needs for flight regimes where computational capability is not adequate, and to verify and validate existing and emerging computational tools. Each of these goals is planned to be achieved while keeping in mind the increasing small-industry customer base engaged in developing unpiloted aerial vehicles and commercial space transportation systems.
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
The Quadratic Assignment Problem (QAP) is classified as an NP-hard problem. It has been used to model many problems in areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem in which 8 facilities need to be assigned to 8 locations. Hence we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized, where flow is the movement from one facility to another and distance is the distance between facility locations. For this problem, the objective of the QAP is to minimize the total walking (flow) of lecturers from one destination to another (distance).
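The objective and a bare-bones ant colony for it fit in a few lines; the sketch below is a generic illustration of the formulation in the abstract, not the authors' implementation (the parameter values are assumptions):

```python
import random

def qap_cost(perm, flow, dist):
    """Objective: sum over all pairs of flow(i, j) * dist(perm[i], perm[j])."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def aco_qap(flow, dist, n_ants=20, n_iters=200, rho=0.1, seed=0):
    """Minimal ACO: tau[i][k] is the learned desirability of putting
    facility i at location k; ants sample assignments from tau."""
    rng = random.Random(seed)
    n = len(flow)
    tau = [[1.0] * n for _ in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            free = list(range(n))            # locations still available
            perm = []
            for i in range(n):
                pick = rng.choices(range(len(free)),
                                   weights=[tau[i][k] for k in free])[0]
                perm.append(free.pop(pick))
            cost = qap_cost(perm, flow, dist)
            if cost < best_cost:
                best, best_cost = perm, cost
        for i in range(n):                   # evaporate, then reinforce the best
            for k in range(n):
                tau[i][k] *= 1.0 - rho
            tau[i][best[i]] += 1.0 / best_cost
    return best, best_cost
```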
NASA Technical Reports Server (NTRS)
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies that are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
School Data Processing Services in Texas. A Cooperative Approach. [Revised.
ERIC Educational Resources Information Center
Texas Education Agency, Austin. Management Information Center.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
Interdisciplinary Facilities that Support Collaborative Teaching and Learning
ERIC Educational Resources Information Center
Asoodeh, Mike; Bonnette, Roy
2006-01-01
It has become widely accepted that the computer is an indispensable tool in the study of science and technology. Thus, in recent years curricular programs such as Industrial Technology and associated scientific disciplines have been adopting and adapting the computer as a tool in new and innovative ways to support teaching, learning, and research.…
ERIC Educational Resources Information Center
Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda
2010-01-01
At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops allowed the students a strong introduction to easily accessible software and how to use this software to begin…
ODU-CAUSE: Computer Based Learning Lab.
ERIC Educational Resources Information Center
Sachon, Michael W.; Copeland, Gary E.
This paper describes the Computer Based Learning Lab (CBLL) at Old Dominion University (ODU) as a component of the ODU-Comprehensive Assistance to Undergraduate Science Education (CAUSE) Project. Emphasis is directed to the structure and management of the facility and to the software under development by the staff. Serving the ODU-CAUSE User Group…
Makerspaces: The Next Iteration for Educational Technology in K-12 Schools
ERIC Educational Resources Information Center
Strycker, Jesse
2015-01-01
With the continually growing number of computers and mobile devices available in K-12 schools, the need is dwindling for dedicated computer labs and media centers. Some schools are starting to repurpose those facilities into different kinds of exploratory learning environments known as "makerspaces". This article discusses this next…
1993-06-01
administering contractual support for lab-wide or multiple buys of ADP systems, software, and services. Computer systems located in the Central Computing Facility...
New Parallel computing framework for radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
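The division of labor such a framework automates can be pictured in a few lines of mpi4py: schedule batches of histories across ranks, gather the tallies, and checkpoint the merged state so a later run resumes where this one stopped. This is an illustrative Python stand-in only (the framework itself is a C++ module, and none of these names come from it):

```python
import os
import pickle
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
CKPT = "tally.ckpt"

def run_batch(seed):
    """Stand-in for transporting one batch of particles with a given seed."""
    return {"seed": seed, "histories": 1000}

# Rank 0 restores the previous checkpoint (if any) and shares it with all ranks.
done = {}
if rank == 0 and os.path.exists(CKPT):
    with open(CKPT, "rb") as f:
        done = pickle.load(f)
done = comm.bcast(done, root=0)

# Static round-robin schedule over whatever work the checkpoint left undone.
my_results = [run_batch(s) for s in range(64)
              if s not in done and s % size == rank]

gathered = comm.gather(my_results, root=0)
if rank == 0:
    for batch in gathered:
        for result in batch:
            done[result["seed"]] = result
    with open(CKPT, "wb") as f:   # a restart resumes from this merged state
        pickle.dump(done, f)
```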
Further Investigation of the Support System Effects and Wing Twist on the NASA Common Research Model
NASA Technical Reports Server (NTRS)
Rivers, Melissa B.; Hunter, Craig A.; Campbell, Richard L.
2012-01-01
An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experiment and computational data from the 4th Drag Prediction Workshop. This difference led to a computational assessment to investigate model support system interference effects on the Common Research Model. The results from this investigation showed that the addition of the support system to the computational cases did increase the pitching moment so that it more closely matched the experimental results, but there was still a large discrepancy in pitching moment. This large discrepancy led to an investigation into the shape of the as-built model, which in turn led to a change in the computational grids and re-running of all the previous support system cases. The results of these cases are the focus of this paper.
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work which is more and more often delegated not to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when a particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.
Computational biomedicine: a challenge for the twenty-first century.
Coveney, Peter V; Shublaq, Nour W
2012-01-01
With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.
Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z
2017-02-03
The studies of biological, disease, and pharmacological networks are facilitated by systems-level investigations using computational tools. In particular, network descriptors developed in other disciplines have found increasing application in the study of protein, gene regulatory, metabolic, disease, and drug-targeted networks. Public web servers provide facilities for computing network descriptors, but many descriptors, including those used or useful for biological studies, are not covered. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi to compute up to 329 network descriptors and protein-protein interaction descriptors. The PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by literature-reported studies of the biological networks derived from genome, interactome, transcriptome, metabolome, and diseasome profiles.
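A few of the simpler descriptor families mentioned above can be computed with off-the-shelf graph tooling; a sketch for a toy edge-weighted network (illustrative only, and far short of PROFEAT's 329 descriptors):

```python
import networkx as nx

# Toy edge-weighted interaction network; weights stand in for binding constants.
g = nx.Graph()
g.add_weighted_edges_from([("A", "B", 2.0), ("B", "C", 0.5),
                           ("A", "C", 1.0), ("C", "D", 3.0)])

descriptors = {
    "nodes": g.number_of_nodes(),
    "edges": g.number_of_edges(),
    "density": nx.density(g),
    "weighted_clustering": nx.average_clustering(g, weight="weight"),
    "characteristic_path_length":
        nx.average_shortest_path_length(g, weight="weight"),
}
```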
Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele
2012-12-01
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
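The throughput-versus-client-count behavior noted at the end is straightforward to probe: fan out concurrent readers and divide total bytes by the wall-clock span. A minimal harness sketch (illustrative; not the benchmark suite used in the paper):

```python
import time
from multiprocessing import Pool

def read_file(path, block=4 * 1024 * 1024):
    """Stream one file sequentially; return (bytes_read, elapsed_seconds)."""
    start, total = time.time(), 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total, time.time() - start

def aggregate_read_mbps(paths, n_clients):
    """Aggregate MB/s observed by n_clients concurrent reader processes."""
    with Pool(n_clients) as pool:
        results = pool.map(read_file, paths[:n_clients])
    span = max(elapsed for _, elapsed in results)   # slowest reader bounds the run
    return sum(nbytes for nbytes, _ in results) / span / 1e6
```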
Distributed computing testbed for a remote experimental environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butner, D.N.; Casper, T.A.; Howard, B.C.
1995-09-18
Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.
Investigation of storage options for scientific computing on Grid and Cloud facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
NASA Technical Reports Server (NTRS)
1993-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Saving Water at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Andy
Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory's national security mission and is one of the institution's larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.
An SSH key management system: easing the pain of managing key/user/account associations
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Betts, W.; Lauret, J.; Shiryaev, A.
2008-07-01
Cyber security requirements for secure access to computing facilities often call for access controls via gatekeepers and the use of two-factor authentication. Using SSH keys to satisfy the two factor authentication requirement has introduced a potentially challenging task of managing the keys and their associations with individual users and user accounts. Approaches for a facility with the simple model of one remote user corresponding to one local user would not work at facilities that require a many-to-many mapping between users and accounts on multiple systems. We will present an SSH key management system we developed, tested and deployed to address the many-to-many dilemma in the environment of the STAR experiment. We will explain its use in an online computing context and explain how it makes possible the management and tracing of group account access spread over many sub-system components (data acquisition, slow controls, trigger, detector instrumentation, etc.) without the use of shared passwords for remote logins.
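At its core, the many-to-many problem is a bookkeeping one: each grant ties a public key to a (host, account) pair, and per-account authorized_keys files are rendered from that table. A schematic sketch of the data model (hypothetical; this is not the STAR system's schema, and the paths are invented):

```python
# Each grant associates one user's public key with one account on one host.
grants = [
    ("keys/alice.pub", "daq01", "staroper"),
    ("keys/alice.pub", "trg01", "staroper"),
    ("keys/bob.pub",   "daq01", "staroper"),
]

def render_authorized_keys(grants, host, account):
    """Build the authorized_keys content for one (host, account) pair,
    so group-account access stays traceable to individual keys."""
    lines = []
    for key_path, h, a in grants:
        if (h, a) == (host, account):
            with open(key_path) as f:
                lines.append(f.read().strip())
    return "\n".join(lines)

def accounts_for_key(grants, key_path):
    """Audit helper: every (host, account) a given key can reach."""
    return [(h, a) for k, h, a in grants if k == key_path]
```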
Saving Water at Los Alamos National Laboratory
Erickson, Andy
2018-01-16
Los Alamos National Laboratory decreased its water usage by 26 percent in 2014, with about one-third of the reduction attributable to using reclaimed water to cool a supercomputing center. The Laboratory's goal during 2014 was to use only re-purposed water to support the mission at the Strategic Computing Complex. Using reclaimed water from the Sanitary Effluent Reclamation Facility, or SERF, substantially decreased water usage and supported the overall mission. SERF collects industrial wastewater and treats it for reuse. The reclamation facility contributed more than 27 million gallons of re-purposed water to the Laboratory's computing center, a secured supercomputing facility that supports the Laboratory's national security mission and is one of the institution's larger water users. In addition to the strategic water reuse program at SERF, the Laboratory reduced water use in 2014 by focusing conservation efforts on areas that use the most water, upgrading to water-conserving fixtures, and repairing leaks identified in a biennial survey.
The Research on Application of Information Technology in sports Stadiums
NASA Astrophysics Data System (ADS)
Can, Han; Lu, Ma; Gan, Luying
With the smooth progress of the Olympic Glory and National Fitness programs in China, public interest in sport continues to grow, as does the desire for better physical health, and the country has launched a modern technological construction of sports facilities. Information technology is applied in sports venues over an increasingly wide range: modern venues and facilities include not only intelligent office automation systems, intelligent sports facility systems, communication systems for event management, ticketing and access control systems, contest information systems, television systems, and command and control systems, but also computer technology used in training, including image analysis, computer-aided athlete training, sports training systems, and related data entry and decision support systems. Using the documentary data method, this paper focuses on the application of information technology in sports stadiums and tries to explore its future trends, with a view to promoting the growth of China's national economy, improving students' quality, and advancing the cause of Chinese sport.
Aerothermodynamic testing requirements for future space transportation systems
NASA Technical Reports Server (NTRS)
Paulson, John W., Jr.; Miller, Charles G., III
1995-01-01
Aerothermodynamics, encompassing aerodynamics, aeroheating, and fluid dynamic and physical processes, is the genesis for the design and development of advanced space transportation vehicles. It provides crucial information to other disciplines involved in the development process such as structures, materials, propulsion, and avionics. Sources of aerothermodynamic information include ground-based facilities, computational fluid dynamic (CFD) and engineering computer codes, and flight experiments. Utilization of this triad is required to provide the optimum requirements while reducing undue design conservatism, risk, and cost. This paper discusses the role of ground-based facilities in the design of future space transportation system concepts. Testing methodology is addressed, including the iterative approach often required for the assessment and optimization of configurations from an aerothermodynamic perspective. The influence of vehicle shape and the transition from parametric studies for optimization to benchmark studies for final design and establishment of the flight data book is discussed. Future aerothermodynamic testing requirements including the need for new facilities are also presented.
MIT Laboratory for Computer Science Progress Report, July 1984-June 1985
1985-06-01
larger (up to several thousand machines) multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense...
Engineering computer graphics in gas turbine engine design, analysis and manufacture
NASA Technical Reports Server (NTRS)
Lopatka, R. S.
1975-01-01
A time-sharing and computer graphics facility designed to provide effective interactive tools to a large number of engineering users with varied requirements is described. The application of computer graphics displays at several levels of hardware complexity and capability is discussed, with examples of graphics systems tracing gas turbine product development from preliminary design through manufacture. Highlights of an operating system stylized for interactive engineering graphics are described.
(abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm, as well as a comparison of the model predictions and the actual performance of this facility, will be presented.
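The circuit-analog approach amounts to lumped nodes exchanging heat through conductive couplings and radiative couplings; a spreadsheet iterates exactly the explicit update sketched below. A minimal Python transcription (illustrative only; this is not the CTTF model, and all coupling values would come from the Dewar geometry):

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def cooldown(T0, heat_cap, G, rad, t_end, dt=1.0):
    """Explicitly integrate a lumped-node circuit-analog thermal model.

    T0       -- initial node temperatures, K
    heat_cap -- node heat capacities, J/K
    G        -- conductive couplings G[i][j], W/K
    rad      -- radiative couplings (emissivity * area * view factor), m^2
    """
    T = np.array(T0, dtype=float)
    C = np.asarray(heat_cap, dtype=float)
    n = len(T)
    for _ in range(int(t_end / dt)):
        q = np.zeros(n)
        for i in range(n):
            for j in range(n):
                q[i] += G[i][j] * (T[j] - T[i])                   # conduction
                q[i] += SIGMA * rad[i][j] * (T[j]**4 - T[i]**4)   # radiation
        T += dt * q / C
    return T
```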
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, Alfred
1995-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm, as well as a comparison between the models' predictions and the actual performance of this facility, will be presented.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Experience in using commercial clouds in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, L.; Bockelman, B.; Dykstra, D.
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
NACA Computer Operates an IBM Telereader
1952-02-21
A staff member from the Computing Section at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory operates an International Business Machines (IBM) telereader at the 8- by 6-Foot Supersonic Wind Tunnel. The telereader was used to measure recorded data from motion picture film or oscillographs. The machine could perform 50 measurements per minute. The component to her right is a telerecordex that was used to convert the telereader measurements into decimal form and record the data on computer punch cards. During test runs in the 8- by 6-foot tunnel, or the other large test facilities, pressure sensors on the test article were connected to mercury-filled manometer tubes located below the test section. The mercury would rise or fall in relation to the pressure fluctuations in the test section. Initially, female staff members, known as “computers,” transcribed all the measurements by hand. The process became automated with the introduction of the telereader and other data reduction equipment in the early 1950s. The Computing Section staff members were still needed to operate the machines. The Computing Section was introduced during World War II to relieve short-handed research engineers of some of the tedious work. The computers made the initial computations and plotted the data graphically. The researcher then analyzed the data and either summarized the findings in a report or made modifications and ran the test again. The computers and analysts were located in the Altitude Wind Tunnel Shop and Office Building office wing during the 1940s. They were transferred to the new facility when the 8- by 6-Foot tunnel began operations in 1948.
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of ethical computing that starts from the assumption that in the near future energy costs will be related to environmental pollution).
Moyers, M F
2014-06-01
Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote standardization between facilities. Although it was not possible from these experiments to determine which conversion function is most appropriate, the variation between facilities suggests that the margins used in some facilities to account for the uncertainty in converting XCTNs to RLSPs may be too small.
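Once the calibration pairs from the standard phantom are in hand, the conversion itself reduces to a piecewise-linear lookup. A schematic sketch (the anchor values below are invented placeholders, not a clinically usable curve):

```python
import numpy as np

# Calibration anchors: CT numbers for known materials vs. their relative
# linear stopping powers (placeholder values spanning air to aluminum-like).
xctn_anchors = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0])
rlsp_anchors = np.array([0.001, 0.30, 1.00, 1.15, 1.95])

def xctn_to_rlsp(xctn):
    """Piecewise-linear conversion from CT number to relative stopping power."""
    return np.interp(xctn, xctn_anchors, rlsp_anchors)
```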
Computers as learning resources in the health sciences: impact and issues.
Ellis, L B; Hannigan, G G
1986-01-01
Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843
NASA Technical Reports Server (NTRS)
Blotzer, Michael J.; Woods, Jody L.
2009-01-01
This viewgraph presentation reviews computational fluid dynamics as a tool for modelling the dispersion of carbon monoxide at the Stennis Space Center's A3 Test Stand. The contents include: 1) Constellation Program; 2) Constellation Launch Vehicles; 3) J2X Engine; 4) A-3 Test Stand; 5) Chemical Steam Generators; 6) Emission Estimates; 7) Located in Existing Test Complex; 8) Computational Fluid Dynamics; 9) Computational Tools; 10) CO Modeling; 11) CO Model results; and 12) Next steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kippen, Karen Elizabeth
Physics Flash is the newsletter for the Physics Division at Los Alamos National Laboratory. This newsletter is for August 2016. The following topics are covered: "Accomplishments in the Trident Laser Facility", "David Meyerhofer elected as chair-elect APS Nominating Committee", "HAWC searches for gamma rays from dark matter", "Proton Radiography Facility commissions electromagnetic magnifier", and "Cosmic ray muon computed tomography of spent nuclear fuel in dry storage casks."
NASA Marshall Space Flight Center Solar Observatory report, October - December 1990
NASA Technical Reports Server (NTRS)
Smith, James E.
1991-01-01
A description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility is provided, and a summary of its observations and data reduction during Oct. - Dec. 1990 is presented. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code. The data are represented by longitudinal contours with azimuth plots.