How the Theory of Computing Can Help in Space Exploration
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Longpre, Luc
1997-01-01
The opening of the NASA Pan American Center for Environmental and Earth Sciences (PACES) at the University of Texas at El Paso made it possible to organize the student Center for Theoretical Research and its Applications in Computer Science (TRACS). In this abstract, we briefly describe the main NASA-related research directions of the TRACS center, and give an overview of the preliminary results of student research.
2003-09-03
KENNEDY SPACE CENTER, FLA. - Boeing workers perform a 3D digital scan of the actuator on the table. At left is Dan Clark. At right are Alden Pitard (seated at computer) and John Macke, from Boeing, St. Louis. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.
Computational Fluid Dynamics [numerical methods and algorithm development]
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It gives an overview of CFD activities at NASA Lewis Research Center, where the main thrust of computational work is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are also discussed, along with examples of results obtained with the most recent algorithm developments.
Eastern Space and Missile Center (ESMC) Capability.
1983-09-16
Fig. 4, ETR Tracking Telescopes: a unique feature at the ETR is the ability to compute ... The Contraves Model 151 includes a TV camera and a wideband ... main objective lens. The Contraves wideband transmitter sends video signals from either the main objective TV or the DAGE wide-angle TV system to the ... plus the time of day to 0.1 second. To use the ESMC precise 2400 b/s acquisition data system, the Contraves computer system ...
Administration of Computer Resources.
ERIC Educational Resources Information Center
Franklin, Gene F.
Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…
The Molecular and Cellular Characterization of Screen‐Detected Lesions ‐ Coordinating Center and Data Management Group will provide support for the participating studies responding to RFA CA14‐10. The coordinating center supports three main domains: network coordination; statistical support and computational analysis; and protocol development and database support. Support for
Pulkovo IVS Analysis Center (PUL) 2012 Annual Report
NASA Technical Reports Server (NTRS)
Malkin, Zinovy; Sokolova, Julia
2013-01-01
This report briefly presents the PUL IVS Analysis Center activities during 2012 and plans for the coming year. The main topics of the investigations of PUL staff in that period were ICRF related studies, computation and analysis of EOP series, celestial pole offset (CPO) modeling, and VLBI2010 related issues.
Data communication network at the ASRM facility
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne D.; Nirgudkar, Ravi; Zhu, Zhifan; Robinson, Walter
1993-01-01
The main objective of the report is to present the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. This report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing critical and manufacturing non-critical. The manufacturing critical buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B 1000. The manufacturing non-critical buildings will be connected by 10BASE-FL to the Business Information System (BIS) in the main computing center. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing critical hub and one of the OIS hubs. The network structure described in this report will be the basis for simulations to be carried out next year. Comdisco's Block Oriented Network Simulator (BONeS) will be used for the network simulation. The main aim of the simulations will be to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
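As a rough illustration of the kind of link-loading evaluation these simulations target, the sketch below estimates utilization of an FDDI backbone and a 10BASE-FL segment from assumed per-station traffic rates. All station counts and traffic figures are hypothetical placeholders, not values from the ASRM design; BONeS itself models far more detail (topology, queues, data structures, protocols).

```python
# Back-of-the-envelope link loading estimate (illustrative only; all
# traffic numbers are assumptions, not figures from the ASRM network design).

FDDI_CAPACITY_BPS = 100e6   # FDDI backbone ring, 100 Mb/s
ETHERNET_FL_BPS = 10e6      # 10BASE-FL links, 10 Mb/s

def utilization(num_stations, avg_bps_per_station, link_capacity_bps):
    """Fraction of link capacity consumed by the aggregate offered traffic."""
    offered = num_stations * avg_bps_per_station
    return offered / link_capacity_bps

# Hypothetical traffic: 200 workstations/workcells on the backbone at an
# average of 50 kb/s each; 30 hosts on a 10BASE-FL segment at 20 kb/s each.
print(f"FDDI backbone load: {utilization(200, 50e3, FDDI_CAPACITY_BPS):.1%}")
print(f"10BASE-FL segment load: {utilization(30, 20e3, ETHERNET_FL_BPS):.1%}")
```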
Data communication network at the ASRM facility
NASA Astrophysics Data System (ADS)
Moorhead, Robert J., II; Smith, Wayne D.; Nirgudkar, Ravi; Zhu, Zhifan; Robinson, Walter
1993-02-01
The main objective of the report is to present the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. This report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing critical and manufacturing non-critical. The manufacturing critical buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B 1000. The manufacturing non-critical buildings will be connected by 10BASE-FL to the Business Information System (BIS) in the main computing center. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing critical hub and one of the OIS hubs. The network structure described in this report will be the basis for simulations to be carried out next year. Comdisco's Block Oriented Network Simulator (BONeS) will be used for the network simulation. The main aim of the simulations will be to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
NASA Astrophysics Data System (ADS)
Ukawa, Akira
1998-05-01
The CP-PACS computer is a massively parallel computer consisting of 2048 processing units and having a peak speed of 614 GFLOPS and 128 GByte of main memory. It was developed over the four years from 1992 to 1996 at the Center for Computational Physics, University of Tsukuba, for large-scale numerical simulations in computational physics, especially those of lattice QCD. The CP-PACS computer has been in full operation for physics computations since October 1996. In this article we describe the chronology of the development, the hardware and software characteristics of the computer, and its performance for lattice QCD simulations.
14 CFR 29.725 - Limit drop test.
Code of Federal Regulations, 2011 CFR
2011-01-01
....), equal to the static reaction on the particular unit with the rotorcraft in the most critical attitude. A rational method may be used in computing a main gear static reaction, taking into consideration the moment arm between the main wheel reaction and the rotorcraft center of gravity. W = W_N for nose gear units...
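As a hedged illustration of the "rational method" the rule allows, the sketch below computes a main-gear static reaction from a moment balance about the nose-gear contact point, using the moment arm between the main-wheel reaction and the rotorcraft center of gravity. The weight and geometry values are invented for the example and are not taken from the regulation.

```python
# Illustrative static-reaction calculation for a main gear unit
# (hypothetical geometry; the regulation only requires a "rational method").

def main_gear_static_reaction(gross_weight, cg_to_nose_gear, wheelbase):
    """Total load carried by the main gear, from a moment balance about
    the nose-gear contact point:  R_main * wheelbase = W * cg_to_nose_gear."""
    return gross_weight * cg_to_nose_gear / wheelbase

W = 8000.0          # rotorcraft weight, lb (assumed)
cg_to_nose = 9.0    # nose-gear contact to center of gravity, ft (assumed)
wheelbase = 11.0    # nose-gear to main-gear contact distance, ft (assumed)

r_main = main_gear_static_reaction(W, cg_to_nose, wheelbase)
print(f"Main gear static reaction: {r_main:.0f} lb "
      f"({r_main / 2:.0f} lb per wheel for a two-wheel main gear)")
```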
14 CFR 29.725 - Limit drop test.
Code of Federal Regulations, 2010 CFR
2010-01-01
....), equal to the static reaction on the particular unit with the rotorcraft in the most critical attitude. A rational method may be used in computing a main gear static reaction, taking into consideration the moment arm between the main wheel reaction and the rotorcraft center of gravity. W = W_N for nose gear units...
NASA Lewis Research Center/university graduate research program on engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1985-01-01
NASA Lewis Research Center established a graduate research program in support of the Engine Structures Research activities. This graduate research program focuses mainly on structural and dynamics analyses, computational mechanics, mechanics of composites and structural optimization. The broad objectives of the program, the specific program, the participating universities and the program status are briefly described.
NASA Lewis Research Center/University Graduate Research Program on Engine Structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1985-01-01
NASA Lewis Research Center established a graduate research program in support of the Engine Structures Research activities. This graduate research program focuses mainly on structural and dynamics analyses, computational mechanics, mechanics of composites and structural optimization. The broad objectives of the program, the specific program, the participating universities and the program status are briefly described.
Composite mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1987-01-01
Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.
Composite mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1989-01-01
Recent research activities and accomplishments at Lewis Research Center on composite mechanics for engine structures are summarized. The activities focused mainly on developing procedures for the computational simulation of composite intrinsic and structural behavior. The computational simulation encompasses all aspects of composite mechanics, advanced three-dimensional finite-element methods, damage tolerance, composite structural and dynamic response, and structural tailoring and optimization.
View northeast of a microchip based computer control system installed ...
View northeast of a microchip based computer control system installed in the early 1980s to replace Lamokin Tower, at center of photograph; panels 1 and 2 at right of photograph are part of the main supervisory board; panel 1 controlled Allen Lane sub-station #7; responsibility for this portion of the system was transferred to the Southeastern Pennsylvania Transportation Authority (SEPTA) in 1985; panel 2 at extreme right controls catenary switches in a coach storage yard adjacent to the station - Thirtieth Street Station, Power Director Center, Thirtieth & Market Streets in Amtrak Railroad Station, Philadelphia, Philadelphia County, PA
ERIC Educational Resources Information Center
Sanders, Mechelle; Fiscella, Kevin; Veazie, Peter; Dolan, James G.; Jerant, Anthony
2016-01-01
The main aim is to examine whether patients' viewing time on information about colorectal cancer (CRC) screening before a primary care physician (PCP) visit is associated with discussion of screening options during the visit. We analyzed data from a multi-center randomized controlled trial of a tailored interactive multimedia computer program…
High-Performance Computing Data Center Waste Heat Reuse
With heat exchangers, heat energy in the energy recovery water (ERW) loop becomes available to heat the facility's process hot water (PHW) loop. Once heated, the PHW loop supplies the active loop in the courtyard of the ESIF's main entrance and a district heating loop if additional heat is needed.
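A minimal sketch of the heat-exchanger energy balance implied here: the heat given up by the ERW loop equals the heat absorbed by the PHW loop, Q = m_dot * cp * dT. The flow rates and temperatures below are invented placeholders, not ESIF operating data.

```python
# Simple heat-exchanger energy balance (all numbers are assumptions,
# not ESIF operating data).

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def heat_duty(mass_flow_kg_s, t_in_c, t_out_c):
    """Heat gained by a water loop, in watts (Q = m * cp * dT)."""
    return mass_flow_kg_s * CP_WATER * (t_out_c - t_in_c)

# ERW loop cools from 40 C to 30 C at 20 kg/s; that heat becomes
# available to warm the PHW loop.
q_recovered = -heat_duty(20.0, 40.0, 30.0)   # positive = heat released
print(f"Heat recovered from ERW loop: {q_recovered / 1e3:.0f} kW")

# PHW loop picks up the same duty; solve for its outlet temperature.
phw_flow, phw_t_in = 15.0, 35.0
phw_t_out = phw_t_in + q_recovered / (phw_flow * CP_WATER)
print(f"PHW loop outlet temperature: {phw_t_out:.1f} C")
```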
ERIC Educational Resources Information Center
Possen, Uri M.; And Others
As an introduction, this paper presents a statement of the objectives of the university computing center (UCC) from the viewpoint of the university, the government, the typical user, and the UCC itself. The operating and financial structure of a UCC are described. Three main types of budgeting schemes are discussed: time allocation, pseudo-dollar,…
NASA Technical Reports Server (NTRS)
Moore, Robert C.
1998-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS, as chartered, is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.
Laser Spot Detection Based on Reaction Diffusion.
Vázquez-Otero, Alejandro; Khikhlukha, Danila; Solano-Altamirano, J M; Dormido, Raquel; Duro, Natividad
2016-03-01
Center-location of a laser spot is a problem of interest when the laser is used for processing and performing measurements. Measurement quality depends on correctly determining the location of the laser spot. Hence, improving and proposing algorithms for the correct location of the spots are fundamental issues in laser-based measurements. In this paper we introduce a Reaction Diffusion (RD) system as the main computational framework for robustly finding laser spot centers. The method presented is compared with a conventional approach for locating laser spots, and the experimental results indicate that RD-based computation generates reliable and precise solutions. These results confirm the flexibility of the new computational paradigm based on RD systems for addressing problems that can be reduced to a set of geometric operations.
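For contrast with the RD-based method, the sketch below shows one conventional way to locate a laser spot center: threshold the image and take the intensity-weighted centroid. This is only an illustrative baseline of the kind of "conventional approach" the abstract mentions, not the specific algorithm compared in the paper.

```python
import numpy as np

def spot_center_centroid(image, threshold_frac=0.5):
    """Estimate a laser spot center as the intensity-weighted centroid of
    pixels above a fraction of the peak intensity (illustrative conventional
    baseline, not the paper's reaction-diffusion method)."""
    img = image.astype(float)
    mask = img >= threshold_frac * img.max()
    weights = np.where(mask, img, 0.0)
    total = weights.sum()
    rows, cols = np.indices(img.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# Synthetic Gaussian spot centered at (row=60, col=40).
rr, cc = np.indices((128, 128))
spot = np.exp(-((rr - 60) ** 2 + (cc - 40) ** 2) / (2 * 5.0 ** 2))
print(spot_center_centroid(spot))   # approximately (60.0, 40.0)
```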
Chmela, Jiří; Greisch, Jean-François; Harding, Michael E; Klopper, Wim; Kappes, Manfred M; Schooss, Detlef
2018-03-08
The gas-phase laser-induced photoluminescence of cationic mononuclear gadolinium and lutetium complexes involving two 9-oxophenalen-1-one ligands is reported. Performing measurements at a temperature of 83 K enables us to resolve vibronic transitions. Via comparison to Franck-Condon computations, the main vibrational contributions to the ligand-centered phosphorescence are determined to involve rocking, wagging, and stretching of the 9-oxophenalen-1-one-lanthanoid coordination in the low-energy range; intraligand bending and stretching in the medium- to high-energy range; rocking of the carbonyl and methine groups; and C-H stretching beyond. Whereas Franck-Condon calculations based on density-functional harmonic frequency computations reproduce the main features of the vibrationally resolved emission spectra, the absolute transition energies as determined by density functional theory are off by several thousand wavenumbers. This discrepancy is found to remain at higher computational levels. The relative energy of the Gd(III) and Lu(III) emission bands is only reproduced at the coupled-cluster singles and doubles level and beyond.
Ergonomic assessment for the task of repairing computers in a manufacturing company: A case study.
Maldonado-Macías, Aidé; Realyvásquez, Arturo; Hernández, Juan Luis; García-Alcaraz, Jorge
2015-01-01
Manufacturing industry workers who repair computers may be exposed to ergonomic risk factors. This project analyzes the tasks involved in the computer repair process to (1) find the risk level for musculoskeletal disorders (MSDs) and (2) propose ergonomic interventions to address any ergonomic issues. Work procedures and main body postures were video recorded and analyzed using task analysis, the Rapid Entire Body Assessment (REBA) postural method, and biomechanical analysis. High risk for MSDs was found on every subtask using REBA. Although biomechanical analysis found an acceptable mass center displacement during tasks, a hazardous level of compression on the lower back was detected while computers were being transported. This assessment found ergonomic risks mainly in the trunk, arm/forearm, and legs; the neck and hand/wrist were also compromised. Opportunities for ergonomic analyses and interventions in the design and execution of computer repair tasks are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, O.B. Jr.; Berry, L.A.; Sheffield, J.
This annual report on fusion energy discusses the progress on work in the following main topics: toroidal confinement experiments; atomic physics and plasma diagnostics development; plasma theory and computing; plasma-materials interactions; plasma technology; superconducting magnet development; fusion engineering design center; materials research and development; and neutron transport. (LSP)
Reduced-Order Modeling for Optimization and Control of Complex Flows
2010-11-30
Statistics Colloquium, Auburn, AL, (January 2009). 16. University of Pittsburgh, Mathematics Colloquium, Pittsburgh, PA, (February 2009). 17. Center for Scientific Computing, Goethe University Frankfurt am Main, Germany, (June 2009). 18. Air Force Institute of Technology, Wright-Patterson
NASA Technical Reports Server (NTRS)
Moore, Robert C.
1998-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. Research is carried out by a staff of full-time scientists, augmented by visitors, students, postdoctoral candidates, and visiting university faculty. The primary mission of RIACS, as chartered, is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: Automated Reasoning, Human-Centered Computing, and High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.
NOAA/West coast and Alaska Tsunami warning center Atlantic Ocean response criteria
Whitmore, P.; Refidaff, C.; Caropolo, M.; Huerfano-Moreno, V.; Knight, W.; Sammler, W.; Sandrik, A.
2009-01-01
West Coast/Alaska Tsunami Warning Center (WCATWC) response criteria for earthquakes occurring in the Atlantic and Caribbean basins are presented. Initial warning center decisions are based on an earthquake's location, magnitude, depth, distance from coastal locations, and precomputed threat estimates based on tsunami models computed from similar events. The new criteria will help limit the geographical extent of warnings and advisories to threatened regions, and complement the new operational tsunami product suite. Criteria are set for tsunamis generated by earthquakes, which are by far the main cause of tsunami generation (either directly through sea floor displacement or indirectly by triggering of sub-sea landslides). The new criteria require development of a threat database which sets warning or advisory zones based on location, magnitude, and pre-computed tsunami models. The models determine coastal tsunami amplitudes based on likely tsunami source parameters for a given event. Based on the computed amplitude, warning and advisory zones are pre-set.
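A minimal sketch of how precomputed threat criteria like these can be applied operationally: look up a warning or advisory decision from a small table keyed on source region and magnitude. The region names, magnitude thresholds, and zone extents below are invented placeholders, not the actual WCATWC criteria.

```python
# Toy threat-criteria lookup (region names, magnitude thresholds, and zone
# extents are hypothetical, not actual WCATWC response criteria).

THREAT_DB = {
    # region: [(min_magnitude, product, coastal_zone_km), ...] highest first
    "caribbean": [(7.9, "warning", 1000), (7.0, "advisory", 300)],
    "atlantic":  [(8.1, "warning", 1000), (7.3, "advisory", 300)],
}

def response(region, magnitude, depth_km, max_tsunamigenic_depth_km=100.0):
    """Return (product, zone_km) for an earthquake, or None if no bulletin
    beyond an information statement is called for."""
    if depth_km > max_tsunamigenic_depth_km:
        return None                     # too deep to displace the sea floor
    for min_mag, product, zone_km in THREAT_DB.get(region, []):
        if magnitude >= min_mag:
            return product, zone_km
    return None

print(response("caribbean", magnitude=8.0, depth_km=20))   # ('warning', 1000)
print(response("atlantic", magnitude=7.0, depth_km=15))    # None
```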
Data communication network at the ASRM facility
NASA Astrophysics Data System (ADS)
Moorhead, Robert J., II; Smith, Wayne D.
1993-08-01
This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility currently being built at Yellow Creek near Iuka, Mississippi. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
Data communication network at the ASRM facility
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne D.
1993-01-01
This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility currently being built at Yellow Creek near Iuka, Mississippi. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
2013-06-01
A table lists the names and main functions of ATCIS components in the NetSPIN: the Terminal functions as the terminal that generates traffic; the MFE (Multi-Function accessing Equipment) transforms SST messages into TCP/IP packets; other entries cover PPP functions, the operation battalion, the DMT computer shelter, the DLP, the operation center MFE and DMT terminal, the command post of a corps, and brigade communication operation.
Center for Modeling of Turbulence and Transition: Research Briefs, 1995
NASA Technical Reports Server (NTRS)
1995-01-01
This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from July 1993 to July 1995. It also constitutes a progress report to the Institute of Computational Mechanics in Propulsion located at the Ohio Aerospace Institute and the Lewis Research Center. CMOTT has been in existence for about four years. In the first three years, its main activities were to develop and validate turbulence and combustion models for propulsion systems, in an effort to remove the deficiencies of existing models. Three workshops on computational turbulence modeling were held at LeRC (1991, 1993, 1994). At present, CMOTT is integrating the CMOTT developed/improved models into CFD tools which can be used by the propulsion systems community. This activity has resulted in an increased collaboration with the Lewis CFD researchers.
Center for modeling of turbulence and transition: Research briefs, 1995
NASA Astrophysics Data System (ADS)
1995-10-01
This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from July 1993 to July 1995. It also constitutes a progress report to the Institute of Computational Mechanics in Propulsion located at the Ohio Aerospace Institute and the Lewis Research Center. CMOTT has been in existence for about four years. In the first three years, its main activities were to develop and validate turbulence and combustion models for propulsion systems, in an effort to remove the deficiencies of existing models. Three workshops on computational turbulence modeling were held at LeRC (1991, 1993, 1994). At present, CMOTT is integrating the CMOTT developed/improved models into CFD tools which can be used by the propulsion systems community. This activity has resulted in an increased collaboration with the Lewis CFD researchers.
Contact centers, pervasive computing and telemedicine: a quality health care triangle.
Maglaveras, Nicos
2004-01-01
The Citizen Health System (CHS) is a European Commission (CEC) funded project in the field of IST for Health. Its main goal is to develop a generic contact center which in its pilot stage can be used in the monitoring, treatment and management of chronically ill patients at home in Greece, Spain, and Germany. Such contact centers, using any type of communication technology, and providing timely and preventive prompting to the patients are envisaged in the future to evolve into well-being contact centers providing services to all citizens. In this paper, we present the structure of such a generic contact center and present its major achievements, and their impact to the quality of health delivery.
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
NASA Astrophysics Data System (ADS)
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have great vertical or horizontal angles, which decreases the accuracy and robustness of an SLNs recognition system significantly. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using the M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate the input SLN into horizontal alignment. The proposed method is tested on 200 tilted SLN images and is shown to be effective, with a tilt correction rate of 80.5%.
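The three steps map naturally onto standard OpenCV building blocks. The sketch below is a loose approximation under that assumption: MSER regions give candidate character center-points, cv2.fitLine with the L1-L2 (DIST_L12) robust distance estimates the tilt angle, and an affine rotation corrects the image. It is not the authors' implementation; candidate filtering, thresholds, and sign conventions are deliberately simplified.

```python
import cv2
import numpy as np

def correct_horizontal_tilt(image_bgr):
    """Rough sketch of the three-step tilt correction: MSER center-points,
    robust line fit (L1-L2 distance), then affine rotation."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Step 1: candidate character regions and their center-points.
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    centers = np.array([region.mean(axis=0) for region in regions],
                       dtype=np.float32)            # one (x, y) per region

    # Step 2: robust straight-line fit through the center-points.
    vx, vy, _, _ = cv2.fitLine(centers, cv2.DIST_L12, 0, 0.01, 0.01).ravel()
    tilt_deg = np.degrees(np.arctan2(vy, vx))

    # Step 3: rotate about the image center to remove the tilt
    # (angle sign may need flipping depending on coordinate convention).
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, 1.0)
    return cv2.warpAffine(image_bgr, rot, (w, h))
```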
2007-05-24
KENNEDY SPACE CENTER, FLA. -- In the Space Shuttle Main Engine Shop, workers get ready to install an engine controller in one of the three main engines (behind them) of the orbiter Discovery. The controller is an electronics package mounted on each space shuttle main engine. It contains two digital computers and the associated electronics to control all main engine components and operations. The controller is attached to the main combustion chamber by shock-mounted fittings. Discovery is the designated orbiter for mission STS-120 to the International Space Station. It will carry a payload that includes the Node 2 module, named Harmony. Launch is targeted for no earlier than Oct. 20. Photo credit: NASA/Cory Huston
2007-05-24
KENNEDY SPACE CENTER, FLA. -- In the Space Shuttle Main Engine Shop, workers are installing an engine controller in one of the three main engines of the orbiter Discovery. The controller is an electronics package mounted on each space shuttle main engine. It contains two digital computers and the associated electronics to control all main engine components and operations. The controller is attached to the main combustion chamber by shock-mounted fittings. Discovery is the designated orbiter for mission STS-120 to the International Space Station. It will carry a payload that includes the Node 2 module, named Harmony. Launch is targeted for no earlier than Oct. 20. Photo credit: NASA/Cory Huston
2007-05-24
KENNEDY SPACE CENTER, FLA. -- In the Space Shuttle Main Engine Shop, workers check the installation of an engine controller in one of the three main engines of the orbiter Discovery. The controller is an electronics package mounted on each space shuttle main engine. It contains two digital computers and the associated electronics to control all main engine components and operations. The controller is attached to the main combustion chamber by shock-mounted fittings. Discovery is the designated orbiter for mission STS-120 to the International Space Station. It will carry a payload that includes the Node 2 module, named Harmony. Launch is targeted for no earlier than Oct. 20. Photo credit: NASA/Cory Huston
2007-05-24
KENNEDY SPACE CENTER, FLA. -- In the Space Shuttle Main Engine Shop, workers are installing an engine controller in one of the three main engines of the orbiter Discovery. The controller is an electronics package mounted on each space shuttle main engine. It contains two digital computers and the associated electronics to control all main engine components and operations. The controller is attached to the main combustion chamber by shock-mounted fittings. Discovery is the designated orbiter for mission STS-120 to the International Space Station. It will carry a payload that includes the Node 2 module, named Harmony. Launch is targeted for no earlier than Oct. 20. Photo credit: NASA/Cory Huston
2007-05-24
KENNEDY SPACE CENTER, FLA. -- In the Space Shuttle Main Engine Shop, workers get ready to install an engine controller in one of the three main engines of the orbiter Discovery. The controller is an electronics package mounted on each space shuttle main engine. It contains two digital computers and the associated electronics to control all main engine components and operations. The controller is attached to the main combustion chamber by shock-mounted fittings. Discovery is the designated orbiter for mission STS-120 to the International Space Station. It will carry a payload that includes the Node 2 module, named Harmony. Launch is targeted for no earlier than Oct. 20. Photo credit: NASA/Cory Huston
Experimenting with the virtual environment Moodle in Physics Education
NASA Astrophysics Data System (ADS)
Martins, Maria Ines; Dickman, Adriana
2008-03-01
The master's program in Physics Education of the Catholic University in the state of Minas Gerais, Brazil, includes the discipline ``Digital technologies in Physics education.'' The main goal of this discipline is to discuss the role of Information and Communication Technology (ICT) in the process of learning-teaching science. We introduce our students to several virtual platforms, both free and commercial, discussing their functionality and features. We encourage our students to get in touch with computer tools and resources by planning their own computer-based course using the Moodle platform. We discuss different patterns of virtual environment courses, whose designs are centered mainly on the students, on the teacher, or even on the system. Students are free to choose anything from a single topic to a year-long course to work with, since their interests vary from learning something more about a specific subject to a complete e-learning course covering the entire school year. (The courses are available online at sitesinf01.pucmg.br/moodle. Participation only requires filling out an application form.) After three editions of this discipline, we have several courses available. We realize that students tend to focus on traditional methods, always preserving their role as knowledge-givers. In conclusion, we can say that, in spite of exhaustive discussion about the autonomy involved with ICT abilities, most of the students used the new virtual medium to organize traditional teacher-centered courses.
Improving Family Forest Knowledge Transfer through Social Network Analysis
ERIC Educational Resources Information Center
Gorczyca, Erika L.; Lyons, Patrick W.; Leahy, Jessica E.; Johnson, Teresa R.; Straub, Crista L.
2012-01-01
To better engage Maine's family forest landowners our study used social network analysis: a computational social science method for identifying stakeholders, evaluating models of engagement, and targeting areas for enhanced partnerships. Interviews with researchers associated with a research center were conducted to identify how social network…
Time-resolved EPR spectroscopy in a Unix environment.
Lacoff, N M; Franke, J E; Warden, J T
1990-02-01
A computer-aided time-resolved electron paramagnetic resonance (EPR) spectrometer implemented under version 2.9 BSD Unix was developed by interfacing a Varian E-9 EPR spectrometer and a Biomation 805 waveform recorder to a PDP-11/23A minicomputer having MINC A/D and D/A capabilities. Special problems with real-time data acquisition in a multiuser, multitasking Unix environment, addressing of computer main memory for the control of hardware devices, and limitation of computer main memory were resolved, and their solutions are presented. The time-resolved EPR system and the data acquisition and analysis programs, written entirely in C, are described. Furthermore, the benefits of utilizing the Unix operating system and the C language are discussed, and system performance is illustrated with time-resolved EPR spectra of the reaction center cation in photosystem 1 of green plant photosynthesis.
GIS-based channel flow and sediment transport simulation using CCHE1D coupled with AnnAGNPS
USDA-ARS?s Scientific Manuscript database
CCHE1D (Center for Computational Hydroscience and Engineering 1-Dimensional model) simulates unsteady free-surface flows with nonequilibrium, nonuniform sediment transport in dendritic channel networks. Since the early 1990s, the model and its software packages have been developed and continuously maintained...
Studies of Asteroids and Comets
NASA Technical Reports Server (NTRS)
Bowell, Edward L. G.
1998-01-01
Research under this grant was carried out between 1989 and 1998. It comprised observational, theoretical, and computational research, mainly on asteroids. Two principal areas of research, centering on astrometry and photometry, were interrelated in their aim to study the overall structure of the asteroid belt and the orbital and physical properties of individual asteroids.
Development and Evaluation of an Interactive Internet-Based Pharmacokinetic Teaching Module.
ERIC Educational Resources Information Center
Hedaya, Mohsen A.
1998-01-01
Describes an Internet-based, interactive, learner-centered, asynchronous instructional module for pharmacokinetics that requires minimal computer knowledge to operate. Main components are concept presentation, a simulation exercise, and self-assessment questions. The module has been found effective in teaching the steady state concept at the…
Research Reports: 1988 NASA/ASEE Summer Faculty Fellowship Program
NASA Technical Reports Server (NTRS)
Freeman, L. Michael (Editor); Chappell, Charles R. (Editor); Cothran, Ernestine K. (Editor); Karr, Gerald R. (Editor)
1988-01-01
The basic objectives are to further the professional knowledge of qualified engineering and science faculty members; to stimulate an exchange of ideas between participants and NASA; to enrich and refresh the research and teaching activities of the participants' institutions; and to contribute to the research objectives of the NASA centers. Topics addressed include: cryogenics; thunderstorm simulation; computer techniques; computer assisted instruction; system analysis; weather forecasting; rocket engine design; crystal growth; control systems design; turbine pumps for the Space Shuttle Main Engine; electron mobility; heat transfer predictions; rotor dynamics; mathematical models; computational fluid dynamics; and structural analysis.
Computational Toxicology as Implemented by the US EPA ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T
1979-03-01
[Fragment of a report table of contents and variable dictionary: routines LSPFIT, SLICE, CRD, OUTPUT, SHOCK, ATMOS, and PNLC, followed by Program Usage and Logic; variable entries include F and FAC (intermediate variables), FC (center frequency, SLICE), FIRSTU (flight velocity Ua; MAIN, SLICE), K (index, also wave number; MAIN, SLICE, PNLC), KN (surrounding boundary index, MAIN), and KNCAS (case counter, MAIN).]
1974-09-01
[Fragment: the introduction of modifications involving flashcards and audio has also been unsuccessful, and it is felt that further progress will require a ... course: Books I and II. San Diego: Navy Personnel Research and Development Center, September 1973. Main, R. E. The effectiveness of flashcards ...]
Funder Report on Decision Support Systems Project Dissemination Activities, Fiscal Year 1985.
ERIC Educational Resources Information Center
Tetlow, William L.
Dissemination activities for the Decision Support Systems (DSS) for fiscal year (FY) 1985 are reported by the National Center for Higher Education Management Systems (NCHEMS). The main means for disseminating results of the DSS research and development project has been through computer-generated video presentations at meetings of higher education…
Facilities at Indian Institute of Astrophysics and New Initiatives
NASA Astrophysics Data System (ADS)
Bhatt, Bhuwan Chandra
2018-04-01
The Indian Institute of Astrophysics is a premier national institute of India for the study of and research into topics pertaining to astronomy, astrophysics and related subjects. The Institute's main campus in Bangalore city in southern India houses the main administrative setup, the library and computer center, the photonics lab and a state-of-the-art mechanical workshop. IIA has a network of laboratories and observatories located in various places in India, including Kodaikanal (Tamilnadu), Kavalur (Tamilnadu), Gauribidanur (Karnataka), Leh & Hanle (Jammu & Kashmir) and Hosakote (Karnataka).
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Computational Toxicology at the US EPA | Science Inventory ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available t
Extending the farm on external sites: the INFN Tier-1 experience
NASA Astrophysics Data System (ADS)
Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.
2017-10-01
The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the following years, mainly driven by the experiments at the LHC (especially starting with run 3 from 2021) but also by other upcoming experiments such as CTA [1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS data center [2] for the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network [3] with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.
NASA Astrophysics Data System (ADS)
Kerr, Rebecca
The purpose of this descriptive quantitative and basic qualitative study was to examine fifth and eighth grade science teachers' responses, perceptions of the role of technology in the classroom, and how they felt that computer applications, tools, and the Internet influence student understanding. The purposeful sample included survey and interview responses from fifth grade and eighth grade general and physical science teachers. Even though they may not be generalizable to other teachers or classrooms due to a low response rate, findings from this study indicated teachers with fewer years of teaching science had a higher level of computer use but less computer access, especially for students, in the classroom. Furthermore, teachers' choice of professional development moderated the relationship between the level of school performance and teachers' knowledge/skills, with the most positive relationship being with workshops that occurred outside of the school. Eighteen interviews revealed that teachers perceived the role of technology in classroom instruction mainly as teacher-centered and supplemental, rather than student-centered activities.
ERIC Educational Resources Information Center
Macias, J. A.
2012-01-01
Project-based learning is one of the main successful student-centered pedagogies broadly used in computing science courses. However, this approach can be insufficient when dealing with practical subjects that implicitly require many deliverables and a great deal of feedback and organizational resources. In this paper, a worked e-portfolio is…
Test and control computer user's guide for a digital beam former test system
NASA Technical Reports Server (NTRS)
Alexovich, Robert E.; Mallasch, Paul G.
1992-01-01
A Digital Beam Former Test System was developed to determine the effects of noise, interferers and distortions, and digital implementations of beam forming as applied to the Tracking and Data Relay Satellite 2 (TDRS 2) architectures. The investigation of digital beam forming with application to TDRS 2 architectures, as described in TDRS 2 advanced concept design studies, was conducted by the NASA/Lewis Research Center for NASA/Goddard Space Flight Center. A Test and Control Computer (TCC) was used as the main controlling element of the digital Beam Former Test System. The Test and Control Computer User's Guide for a Digital Beam Former Test System provides an organized description of the Digital Beam Former Test System commands. It is written for users who wish to conduct tests of the Digital Beam Forming Test processor using the TCC. The document describes the function, use, and syntax of the TCC commands available to the user while summarizing and demonstrating the use of the commands within DOS batch files.
Computer networking at SLR stations
NASA Technical Reports Server (NTRS)
Novotny, Antonin
1993-01-01
There are several existing communication methods to deliver data from the satellite laser ranging (SLR) station to the SLR data center and back: telephone modem, telex, and computer networks. The SLR scientific community has been exploiting mainly INTERNET, BITNET/EARN, and SPAN. A total of 56 countries are connected to INTERNET and the number of nodes is growing exponentially. The computer networks mentioned above and others are connected through E-mail protocol. The scientific progress of SLR requires an increase in communication speed and in the amount of transmitted data. The TOPEX/POSEIDON test campaign required delivery of Quick Look data (1.7 kB/pass) from a SLR site to the SLR data center within 8 hours and full rate data (up to 500 kB/pass) within 24 hours. We developed networking for the remote SLR station in Helwan, Egypt. The reliable scheme for data delivery consists of: compression of the MERIT2 format (up to 89 percent) and encoding to ASCII files at the station; e-mail sending from the SLR station; and e-mail receiving, decoding, and decompression at the center. We propose to use the ZIP method for compression/decompression and the UUCODE method for ASCII encoding/decoding. This method will be useful for stations connected via telephone modems or commercial networks. Electronic delivery could solve the problem of the SLR data center receiving the full rate (FR) data too late.
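A minimal sketch of the proposed compress-then-encode pipeline, assuming a MERIT2 pass file on disk: ZIP compression followed by uuencoding so the result can travel through e-mail as plain ASCII. The file names are placeholders; the actual station software and mail transport are not shown.

```python
import binascii
import zipfile

def zip_and_uuencode(src_path, zip_path, uu_path):
    """Compress a data file with ZIP, then uuencode the archive so it can
    be sent through plain-ASCII e-mail (sketch of the proposed scheme)."""
    # Step 1: ZIP compression of the pass file (e.g. MERIT2 format data).
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_path)

    # Step 2: uuencode the archive, 45 raw bytes per output line.
    with open(zip_path, "rb") as f_in, open(uu_path, "w") as f_out:
        f_out.write(f"begin 644 {zip_path}\n")
        while chunk := f_in.read(45):
            f_out.write(binascii.b2a_uu(chunk).decode("ascii"))
        f_out.write("`\nend\n")

# zip_and_uuencode("pass.merit2", "pass.zip", "pass.uu")  # hypothetical names
```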
Computer networking at SLR stations
NASA Astrophysics Data System (ADS)
Novotny, Antonin
1993-06-01
There are several existing communication methods to deliver data from the satellite laser ranging (SLR) station to the SLR data center and back: telephone modem, telex, and computer networks. The SLR scientific community has been exploiting mainly INTERNET, BITNET/EARN, and SPAN. A total of 56 countries are connected to INTERNET and the number of nodes is growing exponentially. The computer networks mentioned above and others are connected through E-mail protocol. The scientific progress of SLR requires an increase in communication speed and in the amount of transmitted data. The TOPEX/POSEIDON test campaign required delivery of Quick Look data (1.7 kB/pass) from a SLR site to the SLR data center within 8 hours and full rate data (up to 500 kB/pass) within 24 hours. We developed networking for the remote SLR station in Helwan, Egypt. The reliable scheme for data delivery consists of: compression of the MERIT2 format (up to 89 percent) and encoding to ASCII files at the station; e-mail sending from the SLR station; and e-mail receiving, decoding, and decompression at the center. We propose to use the ZIP method for compression/decompression and the UUCODE method for ASCII encoding/decoding. This method will be useful for stations connected via telephone modems or commercial networks. Electronic delivery could solve the problem of the SLR data center receiving the full rate (FR) data too late.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, Richard
1998-06-01
In June, the Center for Research on Parallel Computation (CRPC), an NSF-funded Science and Technology Center, hosted the 4th Annual Conference for African-American Researchers in the Mathematical Sciences (CAARMS4) at Rice University. The main goal of this conference was to highlight current work by African-American researchers and graduate students in mathematics. This conference strengthened the mathematical sciences by encouraging the increased participation of African-American and underrepresented groups in the field, facilitating working relationships between them and helping to cultivate their careers. In addition to the talks there was a graduate student poster session and tutorials on topics in mathematics and computer science. These talks, presentations, and discussions brought a broader perspective to the critical issues involving minority participation in mathematics.
ERIC Educational Resources Information Center
Houston, Linda; Johnson, Candice
After much trial and error, the Agricultural Technical Institute of the Ohio State University (ATI/OSO) discovered that training of writing lab tutors can best be done through collaboration of the Writing Lab Coordinator with the "Development of Tutor Effectiveness" course offered at the institute. The ATI/OSO main computer lab and…
Computer-Based Instruction within Transportation Mobility Training
1990-09-01
[Fragment of front matter and text: appendix list (APT Lesson Plan, APT Workbook, Experiment Pretest and Posttest, Test Questions). The following investigative questions are set forth to determine if CBI is an effective alternative to classroom training in the area of ... Submotorpool covered too limited an area, focusing mainly on the dispatching and driving of vehicles. The Transportation Resources Control Center/Transportation ...]
Milestone report TCTP application to the SSME hydrogen system analysis
NASA Technical Reports Server (NTRS)
Richards, J. S.
1975-01-01
The Transient Cryogen Transfer Computer Program (TCTP) developed and verified for LOX systems by analyses of Skylab S-1B stage loading data from John F. Kennedy Space Center launches was extended to include hydrogen as the working fluid. The feasibility of incorporating TCTP into the space shuttle main engine dynamic model was studied. The program applications are documented.
Comparative case study of two biomedical research collaboratories.
Schleyer, Titus K L; Teasley, Stephanie D; Bhatnagar, Rishi
2005-10-25
Working together efficiently and effectively presents a significant challenge in large-scale, complex, interdisciplinary research projects. Collaboratories are a nascent method to help meet this challenge. However, formal collaboratories in biomedical research centers are the exception rather than the rule. The main purpose of this paper is to compare and describe two collaboratories that used off-the-shelf tools and relatively modest resources to support the scientific activity of two biomedical research centers. The two centers were the Great Lakes Regional Center for AIDS Research (HIV/AIDS Center) and the New York University Oral Cancer Research for Adolescent and Adult Health Promotion Center (Oral Cancer Center). In each collaboratory, we used semistructured interviews, surveys, and contextual inquiry to assess user needs and define the technology requirements. We evaluated and selected commercial software applications by comparing their feature sets with requirements and then pilot-testing the applications. Local and remote support staff cooperated in the implementation and end user training for the collaborative tools. Collaboratory staff evaluated each implementation by analyzing utilization data, administering user surveys, and functioning as participant observers. The HIV/AIDS Center primarily required real-time interaction for developing projects and attracting new participants to the center; the Oral Cancer Center, on the other hand, mainly needed tools to support distributed and asynchronous work in small research groups. The HIV/AIDS Center's collaboratory included a center-wide website that also served as the launch point for collaboratory applications, such as NetMeeting, Timbuktu Conference, PlaceWare Auditorium, and iVisit. The collaboratory of the Oral Cancer Center used Groove and Genesys Web conferencing. The HIV/AIDS Center was successful in attracting new scientists to HIV/AIDS research, and members used the collaboratory for developing and implementing new research studies. The Oral Cancer Center successfully supported highly distributed and asynchronous research, and the collaboratory facilitated real-time interaction for analyzing data and preparing publications. The two collaboratory implementations demonstrated the feasibility of supporting biomedical research centers using off-the-shelf commercial tools, but they also identified several barriers to successful collaboration. These barriers included computing platform incompatibilities, network infrastructure complexity, variable availability of local versus remote IT support, low computer and collaborative software literacy, and insufficient maturity of available collaborative software. Factors enabling collaboratory use included collaboration incentives through funding mechanism, a collaborative versus competitive relationship of researchers, leadership by example, and tools well matched to tasks and technical progress. Integrating electronic collaborative tools into routine scientific practice can be successful but requires further research on the technical, social, and behavioral factors influencing the adoption and use of collaboratories.
Optical information processing at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Reid, Max B.; Bualat, Maria G.; Cho, Young C.; Downie, John D.; Gary, Charles K.; Ma, Paul W.; Ozcan, Meric; Pryor, Anna H.; Spirkovska, Lilly
1993-01-01
The combination of analog optical processors with digital electronic systems offers the potential of tera-OPS computational performance, while often requiring less power and weight relative to all-digital systems. NASA is working to develop and demonstrate optical processing techniques for on-board, real time science and mission applications. Current research areas and applications under investigation include optical matrix processing for space structure vibration control and the analysis of Space Shuttle Main Engine plume spectra, optical correlation-based autonomous vision for robotic vehicles, analog computation for robotic path planning, free-space optical interconnections for information transfer within digital electronic computers, and multiplexed arrays of fiber optic interferometric sensors for acoustic and vibration measurements.
Sa, Eduardo Costa; Ferreira Junior, Mario; Rocha, Lys Esther
2012-01-01
The aims of this study were to investigate work conditions, estimate the prevalence, and describe risk factors associated with Computer Vision Syndrome among call center operators at two centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis using work observation, interviews, and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as occurring always, often, or sometimes. Multiple logistic regression models were built with the stepwise forward likelihood method, retaining variables significant at the 5% level (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call centers operated 24 hours a day; the operators worked 36 hours per week with daily break time of 21 to 35 minutes. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%), and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7), and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in programs for preventing visual syndrome among call center operators.
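The risk-factor analysis above rests on fitting a logistic regression and reporting exponentiated coefficients as odds ratios with 95% confidence intervals. The sketch below illustrates that calculation on synthetic data using statsmodels; the variable names, effect sizes, and data are illustrative stand-ins, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 476

# Synthetic predictors standing in for the study's exposure variables.
female = rng.integers(0, 2, n)
high_demand = rng.integers(0, 2, n)

# Synthetic outcome: presence of Computer Vision Syndrome (illustrative effect sizes).
logit = -0.5 + 0.9 * female + 0.2 * high_demand
cvs = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, high_demand]))
fit = sm.Logit(cvs, X).fit(disp=False)

# Odds ratios and 95% confidence intervals come from exponentiating the coefficients.
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
for name, oratio, (lo, hi) in zip(["intercept", "female", "high_demand"], odds_ratios, ci):
    print(f"{name}: OR={oratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```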
Yu, Shu; Yang, Kuei-Feng
2006-08-01
Public health nurses (PHNs) often cannot receive in-service education due to limitations of time and space. Learning through the Internet has been widely used in many professional and clinical nursing fields, and the learner's attitude is the most important indicator of whether learning will succeed. The purpose of this study was to investigate PHNs' attitudes toward web-based learning and their determinants. The study used a cross-sectional design covering 369 health centers in Taiwan. The population comprised 2398 PHNs, from which a random sample was drawn; 329 PHNs completed the questionnaire, a response rate of 84.0%. Data were collected by mailed questionnaire. Most PHNs revealed a positive attitude toward web-based learning (mean +/- SD = 55.02 +/- 6.39). PHNs who worked at village health centers serving populations of less than 10,000, who had access to computer facilities and on-line hardware in their health centers, and who had better computer competence revealed more positive attitudes (p < 0.01). Web-based learning is an important new form of in-service education; however, the factors behind its success or hindrance require further investigation. Individual computer competence is the main target for improvement, and educators should also consider how to establish a user-friendly learning environment on the Internet.
Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv
2013-12-01
Health Information Technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are being increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at the ACM SIGCHI 2013 Conference on Human Factors in Computing Systems (also known as CHI), held April 27 to May 2, 2013, at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and the HCI communities closer together. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Computational fluid dynamics at NASA Ames and the numerical aerodynamic simulation program
NASA Technical Reports Server (NTRS)
Peterson, V. L.
1985-01-01
Computers are playing an increasingly important role in the field of aerodynamics, such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. The four main areas of computational aerodynamics research at NASA Ames Research Center which are directed toward extending the state of the art are identified and discussed. Example results obtained from approximate forms of the governing equations are presented and discussed, both in the context of levels of computer power required and the degree to which they either further the frontiers of research or apply to programs of practical importance. Finally, the Numerical Aerodynamic Simulation Program--with its 1988 target of achieving a sustained computational rate of 1 billion floating-point operations per second--is discussed in terms of its goals, status, and its projected effect on the future of computational aerodynamics.
The Center for Multiscale Plasma Dynamics, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombosi, Tamas I.
The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large scale flows and magnetic fields tightly coupled to the small scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution adaptive grids and the development of solution adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.
High-Performance Computing Data Center | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing renewable energy and energy efficiency research.
Marshall Space Flight Center CFD overview
NASA Technical Reports Server (NTRS)
Schutzenhofer, Luke A.
1989-01-01
Computational Fluid Dynamics (CFD) activities at Marshall Space Flight Center (MSFC) have been focused on hardware specific and research applications with strong emphasis upon benchmark validation. The purpose here is to provide insight into the MSFC CFD related goals, objectives, current hardware related CFD activities, propulsion CFD research efforts and validation program, future near-term CFD hardware related programs, and CFD expectations. The current hardware programs where CFD has been successfully applied are the Space Shuttle Main Engines (SSME), Alternate Turbopump Development (ATD), and Aeroassist Flight Experiment (AFE). For the future near-term CFD hardware related activities, plans are being developed that address the implementation of CFD into the early design stages of the Space Transportation Main Engine (STME), Space Transportation Booster Engine (STBE), and the Environmental Control and Life Support System (ECLSS) for the Space Station. Finally, CFD expectations in the design environment will be delineated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ongari, Daniele; Boyd, Peter G.; Barthel, Senja
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lastly, we show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.
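The geometric and probe-center pore volumes mentioned here are commonly estimated by Monte Carlo sampling of the unit cell: a random point contributes to the void volume if it lies sufficiently far from every framework atom. The sketch below is a minimal, non-periodic illustration of that idea; the toy coordinates, radii, and cell size are placeholders, not data from the paper's 5000-framework database.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy framework: atom centers (Angstrom) and van der Waals radii in a cubic cell.
cell_edge = 10.0                      # cubic cell edge length
atoms = rng.uniform(0.0, cell_edge, size=(20, 3))
atom_radius = 1.5                     # single illustrative vdW radius for all atoms
probe_radius = 1.86                   # roughly a nitrogen-sized probe

def pore_volume_fraction(n_samples, extra_radius):
    """Fraction of the cell where a random point clears every atom by atom_radius + extra_radius."""
    points = rng.uniform(0.0, cell_edge, size=(n_samples, 3))
    # Distance from every sample point to every atom center (no periodic images: a simplification).
    d = np.linalg.norm(points[:, None, :] - atoms[None, :, :], axis=-1)
    free = np.all(d > atom_radius + extra_radius, axis=1)
    return free.mean()

cell_volume = cell_edge ** 3
geometric = pore_volume_fraction(200_000, extra_radius=0.0) * cell_volume
probe_center = pore_volume_fraction(200_000, extra_radius=probe_radius) * cell_volume
print(f"geometric pore volume    ~ {geometric:.1f} A^3")
print(f"probe-center pore volume ~ {probe_center:.1f} A^3")
```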
Berkeley Lab - Materials Sciences Division
Research centers and facilities listed for the division include the Center for Computational Study of Excited-State Phenomena in Energy Materials, the Center for X-ray Optics, MSD Facilities, Ion and Materials Physics, and the Scattering and Instrumentation Science Centers.
NASA Astrophysics Data System (ADS)
McConnaughey, P. K.; Schutzenhofer, L. A.
1992-07-01
This paper presents an overview of the NASA/Marshall Space Flight Center (MSFC) Computational Fluid Dynamics (CFD) Consortium for Applications in Propulsion Technology (CAPT). The objectives of this consortium are discussed, as is the approach of managing resources and technology to achieve these objectives. Significant results by the three CFD CAPT teams (Turbine, Pump, and Combustion) are briefly highlighted with respect to the advancement of CFD applications, the development and evaluation of advanced hardware concepts, and the integration of these results and CFD as a design tool to support Space Transportation Main Engine and National Launch System development.
Patel, Samir N.; Klufas, Michael A.; Ryan, Michael C.; Jonas, Karyn E.; Ostmo, Susan; Martinez-Castellanos, Maria Ana; Berrocal, Audina M.; Chiang, Michael F.; Chan, R.V. Paul
2016-01-01
Purpose: To examine the utility of fluorescein angiography (FA) in identification of the macular center and the diagnosis of zone in patients with retinopathy of prematurity (ROP). Design: Validity and reliability analysis of diagnostic tools. Methods: 32 sets (16 color fundus photographs; 16 color fundus photographs paired with the corresponding FA) of wide-angle retinal images obtained from 16 eyes of eight infants with ROP were compiled on a secure web site. 9 ROP experts (3 pediatric ophthalmologists; 6 vitreoretinal surgeons) participated in the study. For each image set, experts identified the macular center and provided a diagnosis of zone. Main Outcome Measures: (1) Sensitivity and specificity of zone diagnosis; (2) "computer-facilitated diagnosis of zone," based on precise measurement of the macular center, optic disc center, and peripheral ROP. Results: Computer-facilitated diagnosis of zone agreed with the expert's diagnosis of zone in 28/45 (62%) cases using color fundus photographs and in 31/45 (69%) cases using FA. Mean (95% CI) sensitivity for detection of zone I by experts, as compared to a consensus reference standard diagnosis, when interpreting the color fundus images alone versus interpreting the color fundus photographs and FA was 47% (35.3%–59.3%) and 61.1% (48.9%–72.4%), respectively (t(9) ≥ 2.063, p = 0.073). Conclusions: There is a marginally significant difference in zone diagnosis when using color fundus photographs compared to using color fundus photographs and the corresponding fluorescein angiograms. There is inconsistency between traditional zone diagnosis (based on ophthalmoscopic exam and image review) compared to a computer-facilitated diagnosis of zone. PMID:25637180
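The "computer-facilitated diagnosis of zone" described above reduces to a geometric test once the optic disc center, macular center, and the most posterior extent of ROP are located in the image. A minimal sketch of that test is below, assuming the standard ICROP convention that zone I is a circle centered on the optic disc with radius twice the disc-to-macula distance; the coordinates are hypothetical pixel positions, not measurements from this study.

```python
import math

def facilitated_zone(disc_xy, macula_xy, lesion_xy):
    """Classify ROP zone from image coordinates (assumes the ICROP zone I definition)."""
    disc_to_macula = math.dist(disc_xy, macula_xy)
    disc_to_lesion = math.dist(disc_xy, lesion_xy)
    if disc_to_lesion <= 2.0 * disc_to_macula:
        return "zone I"
    # Distinguishing zone II from zone III needs the nasal ora serrata distance,
    # which is not captured by these three points alone.
    return "zone II or III (further landmarks required)"

# Hypothetical pixel coordinates from a wide-angle fundus image.
print(facilitated_zone(disc_xy=(620, 480), macula_xy=(760, 500), lesion_xy=(840, 300)))
```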
High-Performance Computing Data Center Warm-Water Liquid Cooling | Computational Science | NREL
NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled. Warm-water liquid cooling technologies offer a more energy-efficient solution that also allows for effective reuse of the data center's waste heat.
Results and current status of the NPARC alliance validation effort
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Jones, Ralph R.
1996-01-01
The NPARC Alliance is a partnership between the NASA Lewis Research Center (LeRC) and the USAF Arnold Engineering Development Center (AEDC) dedicated to the establishment of a national CFD capability, centered on the NPARC Navier-Stokes computer program. The three main tasks of the Alliance are user support, code development, and validation. The present paper is a status report on the validation effort. It describes the validation approach being taken by the Alliance. Representative results are presented for laminar and turbulent flat plate boundary layers, a supersonic axisymmetric jet, and a glancing shock/turbulent boundary layer interaction. Cases scheduled to be run in the future are also listed. The archive of validation cases is described, including information on how to access it via the Internet.
Comparative Case Study of Two Biomedical Research Collaboratories
Teasley, Stephanie D; Bhatnagar, Rishi
2005-01-01
Background Working together efficiently and effectively presents a significant challenge in large-scale, complex, interdisciplinary research projects. Collaboratories are a nascent method to help meet this challenge. However, formal collaboratories in biomedical research centers are the exception rather than the rule. Objective The main purpose of this paper is to compare and describe two collaboratories that used off-the-shelf tools and relatively modest resources to support the scientific activity of two biomedical research centers. The two centers were the Great Lakes Regional Center for AIDS Research (HIV/AIDS Center) and the New York University Oral Cancer Research for Adolescent and Adult Health Promotion Center (Oral Cancer Center). Methods In each collaboratory, we used semistructured interviews, surveys, and contextual inquiry to assess user needs and define the technology requirements. We evaluated and selected commercial software applications by comparing their feature sets with requirements and then pilot-testing the applications. Local and remote support staff cooperated in the implementation and end user training for the collaborative tools. Collaboratory staff evaluated each implementation by analyzing utilization data, administering user surveys, and functioning as participant observers. Results The HIV/AIDS Center primarily required real-time interaction for developing projects and attracting new participants to the center; the Oral Cancer Center, on the other hand, mainly needed tools to support distributed and asynchronous work in small research groups. The HIV/AIDS Center’s collaboratory included a center-wide website that also served as the launch point for collaboratory applications, such as NetMeeting, Timbuktu Conference, PlaceWare Auditorium, and iVisit. The collaboratory of the Oral Cancer Center used Groove and Genesys Web conferencing. The HIV/AIDS Center was successful in attracting new scientists to HIV/AIDS research, and members used the collaboratory for developing and implementing new research studies. The Oral Cancer Center successfully supported highly distributed and asynchronous research, and the collaboratory facilitated real-time interaction for analyzing data and preparing publications. Conclusions The two collaboratory implementations demonstrated the feasibility of supporting biomedical research centers using off-the-shelf commercial tools, but they also identified several barriers to successful collaboration. These barriers included computing platform incompatibilities, network infrastructure complexity, variable availability of local versus remote IT support, low computer and collaborative software literacy, and insufficient maturity of available collaborative software. Factors enabling collaboratory use included collaboration incentives through funding mechanism, a collaborative versus competitive relationship of researchers, leadership by example, and tools well matched to tasks and technical progress. Integrating electronic collaborative tools into routine scientific practice can be successful but requires further research on the technical, social, and behavioral factors influencing the adoption and use of collaboratories. PMID:16403717
Multidetector Computer Tomography: Evaluation of Blunt Chest Trauma in Adults
Matos, António P.; Mascarenhas, Vasco; Herédia, Vasco
2014-01-01
Imaging plays an essential part of chest trauma care. By definition, the employed imaging technique in the emergency setting should reach the correct diagnosis as fast as possible. In severe chest blunt trauma, multidetector computer tomography (MDCT) has become part of the initial workup, mainly due to its high sensitivity and diagnostic accuracy of the technique for the detection and characterization of thoracic injuries and also due to its wide availability in tertiary care centers. The aim of this paper is to review and illustrate a spectrum of characteristic MDCT findings of blunt traumatic injuries of the chest including the lungs, mediastinum, pleural space, and chest wall. PMID:25295188
Multidetector computer tomography: evaluation of blunt chest trauma in adults.
Palas, João; Matos, António P; Mascarenhas, Vasco; Herédia, Vasco; Ramalho, Miguel
2014-01-01
Imaging plays an essential part of chest trauma care. By definition, the employed imaging technique in the emergency setting should reach the correct diagnosis as fast as possible. In severe chest blunt trauma, multidetector computer tomography (MDCT) has become part of the initial workup, mainly due to its high sensitivity and diagnostic accuracy of the technique for the detection and characterization of thoracic injuries and also due to its wide availability in tertiary care centers. The aim of this paper is to review and illustrate a spectrum of characteristic MDCT findings of blunt traumatic injuries of the chest including the lungs, mediastinum, pleural space, and chest wall.
Accurate Characterization of the Pore Volume in Microporous Crystalline Materials
2017-01-01
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e, geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. We show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms. PMID:28636815
Accurate Characterization of the Pore Volume in Microporous Crystalline Materials
Ongari, Daniele; Boyd, Peter G.; Barthel, Senja; ...
2017-06-21
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e., geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lastly, we show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.
Laboratory Computing Resource Center
AGENDA: A task organizer and scheduler
NASA Technical Reports Server (NTRS)
Fratter, Isabelle
1993-01-01
AGENDA will be the main tool used in running the SPOT 4 Earth Observation Satellite's Operational Control Center. It will reduce the operator's work load and make the task easier. AGENDA sets up the work plan for a day of operations, automatically puts the day's tasks into sequence and monitors their progress in real time. Monitoring is centralized, and the tasks are run on different computers in the Center. Once informed of any problems, the operator can intervene at any time while an activity is taking place. To carry out the various functions, the operator has an advanced, efficient, ergonomic graphic interface based on X11 and OSF/MOTIF. Since AGENDA is the heart of the Center, it has to satisfy several constraints that have been taken into account during the various development phases. AGENDA is currently in its final development stages.
1997-01-01
This is a view of the Russian Mir Space Station photographed by a crewmember of the fifth Shuttle/Mir docking mission, STS-81. The image shows: upper center - Progress supply vehicle, Kvant-1 module, and Core module; center left - Priroda module; center right - Spektr module; bottom left - Kvant-2 module; bottom center - Soyuz; and bottom right - Kristall module and Docking module. The Progress was an unmanned, automated version of the Soyuz crew transfer vehicle, designed to resupply the Mir. The Kvant-1 provided research in the physics of galaxies, quasars, and neutron stars, by measuring electromagnetic spectra and x-ray emissions. The Core module served as the heart of the space station and contained the primary living and working areas, life support, and power, as well as the main computer, communications, and control equipment. Priroda's main purpose was Earth remote sensing. The Spektr module provided Earth observation. It also supported research into biotechnology, life sciences, materials science, and space technologies. American astronauts used the Spektr as their living quarters. Kvant-2 was a scientific and airlock module, providing biological research, Earth observations, and EVA (extravehicular activity) capability. The Soyuz typically ferried three crewmembers to and from the Mir. A main purpose of the Kristall module was to develop biological and materials production technologies in the space environment. The Docking module made it possible for the Space Shuttle to dock easily with the Mir. The journey of the 15-year-old Russian Mir Space Station ended March 23, 2001, as the Mir re-entered the Earth's atmosphere and fell into the south Pacific Ocean.
The Development of University Computing in Sweden 1965-1985
NASA Astrophysics Data System (ADS)
Dahlstrand, Ingemar
In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
NASA Technical Reports Server (NTRS)
Savelyev, V. V.
1943-01-01
For computing the critical flutter velocity of a wing, the required data include the position of the line of centers of gravity of the wing sections along the span and the mass moments and radii of inertia of any section of the wing about the axis passing through the center of gravity of the section. A sufficiently detailed computation of these magnitudes, even if the weights of all the wing elements are known, requires a great expenditure of time: a fast, competent worker would need from 70 to 100 hours for these computations for one wing alone, and hundreds of hours if all the weights were included. With the aid of the formulas derived in the present paper, the same work can be performed with a degree of accuracy sufficient for practical purposes in one to two hours, the only required data being the geometric dimensions of the outer wing (tapered part), the position of its longerons, the total weight of the outer wing, and the approximate weight of the longerons. The material presented in this paper is applicable mainly to wings of longeron construction of the CAHI type, and investigations are therefore being conducted by CAHI to derive formulas for determining these data for wings of other types.
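For a single wing section, the quantities named above follow from a straightforward lumped-mass calculation once the chordwise positions and weights of the section's elements are known. The sketch below shows that direct, element-by-element computation, which is the laborious procedure the report's approximate formulas are meant to replace; the element masses and positions are illustrative numbers only.

```python
import numpy as np

# Illustrative chordwise positions (m, measured from the leading edge) and masses (kg)
# of the structural elements making up one wing section.
x = np.array([0.15, 0.45, 0.80, 1.10, 1.60])   # spar caps, skin panels, ribs, ...
m = np.array([4.0, 9.5, 6.0, 3.0, 1.5])

total_mass = m.sum()
x_cg = (m * x).sum() / total_mass                 # chordwise center of gravity of the section
moment_of_inertia = (m * (x - x_cg) ** 2).sum()   # mass moment of inertia about the CG axis
radius_of_gyration = np.sqrt(moment_of_inertia / total_mass)

print(f"section CG position:    {x_cg:.3f} m")
print(f"moment of inertia (CG): {moment_of_inertia:.3f} kg*m^2")
print(f"radius of gyration:     {radius_of_gyration:.3f} m")
```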
Programming Tools: Status, Evaluation, and Comparison
NASA Technical Reports Server (NTRS)
Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)
1994-01-01
In this tutorial I will first describe the characteristics of scientific applications and their developers, and describe the computing environment in a typical high-performance computing center. I will define the user requirements for tools that support application portability and present the difficulties in satisfying them. These form the basis of the evaluation and comparison of the tools. I will then describe the tools available in the market and the tools available in the public domain. Specifically, I will describe the tools for converting sequential programs, tools for developing portable new programs, tools for debugging and performance tuning, tools for partitioning and mapping, and tools for managing networks of resources. I will introduce the main goals and approaches of the tools, and show the main features of a few tools in each category. I will also compare tool usability for real-world application development and compare their different technological approaches. Finally, I will indicate the future directions of the tools in each category.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers unnecessarily under light loads, so they are switched off. On the other hand, some servers should be switched on under heavy load to prevent very long delays. Thus, waiting times and system operating cost can be kept at an acceptable level by dynamically adding or removing servers. A further fact that should be taken into account is the significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis to model cloud computing systems. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
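The hysteresis idea is that servers are powered on only after the queue has exceeded an upper threshold and powered off only after it has fallen below a lower one, so the system ignores short load spikes. The sketch below is a minimal time-stepped simulation of that control policy, not the paper's analytical steady-state method; all thresholds, rates, and the setup delay are illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

ARRIVAL_RATE = 4.0      # mean job arrivals per unit time (illustrative)
SERVICE_RATE = 1.0      # mean jobs completed per unit time per busy server
SETUP_STEPS = 50        # time steps before a newly activated server starts serving
UP_THRESHOLD = 12       # power a server on when the queue exceeds this
DOWN_THRESHOLD = 4      # power a server off when the queue drops below this
MAX_SERVERS = 10
DT = 0.01               # length of one time step

queue = 0
active = 1
pending = []            # countdowns for servers still in setup

for _ in range(200_000):
    queue += rng.poisson(ARRIVAL_RATE * DT)                # new arrivals this step
    busy = min(active, queue)
    queue -= rng.binomial(busy, SERVICE_RATE * DT)         # service completions this step

    pending = [t - 1 for t in pending]                     # advance setups in progress
    active += sum(1 for t in pending if t <= 0)
    pending = [t for t in pending if t > 0]

    # Hysteresis: only threshold crossings change the number of servers.
    if queue > UP_THRESHOLD and active + len(pending) < MAX_SERVERS:
        pending.append(SETUP_STEPS)
    elif queue < DOWN_THRESHOLD and active > 1:
        active -= 1

print(f"queue length at end = {queue}, active servers = {active}")
```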
Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.
Williams, Daniel R; Tang, Yinshan
2013-05-07
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
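The paper's comparison rests on a simple additive model: total energy per activity is the sum of data center, network, and end-user-device energy, and emissions follow from a grid emission factor. The sketch below shows that bookkeeping with entirely invented per-activity figures, just to make the structure of the comparison concrete; none of the numbers come from the study's confidential measurements.

```python
# Illustrative energy per activity in watt-hours; these values are invented.
STAGES = ("data_center", "network", "device")

cloud_word = {"data_center": 0.8, "network": 0.6, "device": 2.1}
standalone_word = {"data_center": 0.0, "network": 0.0, "device": 3.0}

EMISSION_FACTOR = 0.45  # kg CO2e per kWh (illustrative grid average)

def total_energy_wh(profile):
    """Sum energy across the three transmission stages."""
    return sum(profile[stage] for stage in STAGES)

def ghg_g(profile):
    """Convert per-activity energy to greenhouse gas emissions in grams CO2e."""
    return total_energy_wh(profile) / 1000.0 * EMISSION_FACTOR * 1000.0

for name, profile in [("cloud Word", cloud_word), ("standalone Word", standalone_word)]:
    print(f"{name}: {total_energy_wh(profile):.2f} Wh, {ghg_g(profile):.2f} g CO2e")
```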
Computational Fluid Dynamics Program at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1989-01-01
The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked on in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.
Dan Goldin Presentation: Pathway to the Future
NASA Technical Reports Server (NTRS)
1999-01-01
In the "Path to the Future" presentation held at NASA's Langley Center on March 31, 1999, NASA's Administrator Daniel S. Goldin outlined the future direction and strategies of NASA in relation to the general space exploration enterprise. NASA's Vision, Future System Characteristics, Evolutions of Engineering, and Revolutionary Changes are the four main topics of the presentation. In part one, the Administrator talks in detail about NASA's vision in relation to the NASA Strategic Activities that are Space Science, Earth Science, Human Exploration, and Aeronautics & Space Transportation. Topics discussed in this section include: space science for the 21st century, flying in mars atmosphere (mars plane), exploring new worlds, interplanetary internets, earth observation and measurements, distributed information-system-in-the-sky, science enabling understanding and application, space station, microgravity, science and exploration strategies, human mars mission, advance space transportation program, general aviation revitalization, and reusable launch vehicles. In part two, he briefly talks about the future system characteristics. He discusses major system characteristics like resiliencey, self-sufficiency, high distribution, ultra-efficiency, and autonomy and the necessity to overcome any distance, time, and extreme environment barriers. Part three of Mr. Goldin's talk deals with engineering evolution, mainly evolution in the Computer Aided Design (CAD)/Computer Aided Engineering (CAE) systems. These systems include computer aided drafting, computerized solid models, virtual product development (VPD) systems, networked VPD systems, and knowledge enriched networked VPD systems. In part four, the last part, the Administrator talks about the need for revolutionary changes in communication and networking areas of a system. According to the administrator, the four major areas that need cultural changes in the creativity process are human-centered computing, an infrastructure for distributed collaboration, rapid synthesis and simulation tools, and life-cycle integration and validation. Mr. Goldin concludes his presentation with the following maxim "Collaborate, Integrate, Innovate or Stagnate and Evaporate." He also answers some questions after the presentation.
Edge analyzing properties of center/surround response functions in cybernetic vision
NASA Technical Reports Server (NTRS)
Jobson, D. J.
1984-01-01
The ability of center/surround response functions to make explicit high-resolution spatial information in optical images was investigated by performing convolutions of two-dimensional response functions and image intensity functions (mainly edges). The center/surround function was found to have the unique property of separating edge contrast from shape variations and of providing a direct basis for determining contrast and subsequently shape of edges in images. Computationally simple measures of contrast and shape were constructed for potential use in cybernetic vision systems. For one class of response functions, these measures were found to be reasonably resilient over a range of scan directions and displacements of the response functions relative to shaped edges. A pathological range of scan directions was also defined, and methods for detecting and handling these cases were developed. The relationship of these results to biological vision is discussed speculatively.
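A center/surround response of this kind is often approximated as a difference of Gaussians: a narrow excitatory center minus a broader inhibitory surround. The sketch below convolves such a kernel with a one-dimensional intensity edge and reads off a simple contrast measure from the response extrema; the kernel widths and edge intensities are arbitrary choices made for illustration, not the response functions studied in the report.

```python
import numpy as np

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# Difference-of-Gaussians kernel: narrow center minus broad surround.
x = np.arange(-30, 31)
dog = gaussian(x, sigma=2.0) - gaussian(x, sigma=6.0)

# One-dimensional step edge with illustrative intensities on either side.
signal = np.concatenate([np.full(100, 20.0), np.full(100, 80.0)])

response = np.convolve(signal, dog, mode="same")

# The response swings positive on one side of the edge and negative on the other;
# the swing amplitude serves as a simple, local measure of edge contrast.
contrast = response.max() - response.min()
print(f"edge response amplitude (contrast measure): {contrast:.1f}")
```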
Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis
NASA Astrophysics Data System (ADS)
Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.
2015-01-01
Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
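The pixel-by-pixel noise map described here is typically built by adding the independent noise terms in quadrature in electron units: Poisson noise from the stellar signal, Poisson noise from the sky, and the dark/read-noise floor of the detector. A minimal sketch of that combination is below; the gain, dark current, and read-noise values are placeholder detector numbers, not those of the instrument used for these observations.

```python
import numpy as np

rng = np.random.default_rng(3)

GAIN = 4.0          # electrons per ADU (placeholder)
DARK_CURRENT = 0.1  # electrons/s/pixel (placeholder)
READ_NOISE = 10.0   # electrons RMS (placeholder)
EXPTIME = 30.0      # seconds

# Reduced image in ADU: smooth sky background plus one bright "star".
image_adu = rng.normal(200.0, 2.0, size=(64, 64))
image_adu[30:34, 30:34] += 5000.0

sky_adu = np.median(image_adu)                        # crude sky estimate
source_e = np.clip(image_adu - sky_adu, 0, None) * GAIN
sky_e = sky_adu * GAIN

# Per-pixel 1-sigma noise in electrons: independent terms added in quadrature.
noise_e = np.sqrt(source_e + sky_e + DARK_CURRENT * EXPTIME + READ_NOISE**2)
noise_map_adu = noise_e / GAIN                        # back to ADU for PSF-fitting tools

print(f"median per-pixel noise:      {np.median(noise_map_adu):.2f} ADU")
print(f"noise at the bright source:  {noise_map_adu[31, 31]:.2f} ADU")
```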
WIS Implementation Study Report. Volume 1. Main Report.
1983-10-01
Contributor roster (recovered from a scanned index): Luenberger, Prof. David G. (Stanford University); Ries, Dr. Daniel R. (Computer Corporation of America); Schill, John (Naval Ocean Systems Center); Shrier, Dr. Stefan; Kaczmarek, Dr. Thomas S.; Klein, Dr. Stanely A.; Kramer, Dr. John F.; Larsen, Dr. Robert E.; Riddle, Dr. William E.; Sapp, Mr. John W.; Shelley, Mr. Stephen H.; Slusarczuk, Dr. Marko M.G.
2007-09-01
It is also relatively easy to change the wind tunnel model to allow detailed parametric effects to be investigated. The main disadvantage of wind tunnel... as Magnus force and moment coefficients are difficult to obtain in a wind tunnel and require a complex physical wind tunnel model. Over the past... (7) The terms containing C_Y_pa constitute the Magnus air load acting at the Magnus center of pressure, while the terms containing C_X0, C_X2, and C_Na...
Studies of asteroids, comets, and Jupiter's outer satellites
NASA Technical Reports Server (NTRS)
Bowell, Edward
1991-01-01
Observational, theoretical, and computational research was performed, mainly on asteroids. Two principal areas of research, centering on astrometry and photometry, are interrelated in their aim to study the overall structure of the asteroid belt and the physical and orbital properties of individual asteroids. Two highlights are: detection of CN emission from Chiron; and realization that 1990 MB is the first known Trojan type asteroid of a planet other than Jupiter. A new method of asteroid orbital error analysis, based on Bayesian theory, was developed.
2003-04-15
KENNEDY SPACE CENTER, FLA. -- In the Payload Hazardous Servicing Facility, the lander petals of the Mars Exploration Rover 2 (MER-2) have been reopened to allow technicians access to one of the spacecraft's circuit boards. A concern arose during prelaunch testing regarding how the spacecraft interprets signals sent from its main computer to peripherals in the cruise stage, lander and small deep space transponder. The MER Mission consists of two identical rovers set to launch in June 2003. The problem will be fixed on both rovers.
2003-04-15
KENNEDY SPACE CENTER, FLA. -- In the Payload Hazardous Servicing Facility, technicians reopen the lander petals of the Mars Exploration Rover 2 (MER-2) to allow access to one of the spacecraft's circuit boards. A concern arose during prelaunch testing regarding how the spacecraft interprets signals sent from its main computer to peripherals in the cruise stage, lander and small deep space transponder. The MER Mission consists of two identical rovers set to launch in June 2003. The problem will be fixed on both rovers.
Computers in aeronautics and space research at the Lewis Research Center
NASA Technical Reports Server (NTRS)
1991-01-01
This brochure presents a general discussion of the role of computers in aerospace research at NASA's Lewis Research Center (LeRC). Four particular areas of computer applications are addressed: computer modeling and simulation, computer assisted engineering, data acquisition and analysis, and computer controlled testing.
NASA Astrophysics Data System (ADS)
Guo, Minghuan; Sun, Feihu; Wang, Zhifeng
2017-06-01
The solar tower concentrator is mainly composed of the central receiver on the tower top and the heliostat field around the tower. The optical efficiencies of a solar tower concentrator are important to the whole thermal performance of the solar tower collector, and the aperture plane of a cavity receiver or the (inner or external) absorbing surface of any central receiver is a key interface of energy flux. So it is necessary to simulate and analyze the concentrated time-changing solar flux density distributions on the flat or curved receiving surface of the collector, with the main optical errors considered. The transient concentrated solar flux on the receiving surface is the superimposition of the flux density distributions of all the normally working heliostats in the field. In this paper, we mainly introduce a new backward ray tracing (BRT) method combined with the lumped effective solar cone, to simulate the flux density map on the receiving surface. For BRT, bundles of rays are launched at the receiving-surface points of interest, strike directly on the valid cell centers among the uniformly sampled mirror cell centers in the mirror surface of the heliostats, and are then directed into the effective solar cone around the incident sun beam direction after reflection. All the optical errors are convoluted into the effective solar cone. The brightness distribution of the effective solar cone is assumed here to be of circular Gaussian type. The mirror curvature can be adequately formulated by a certain number of local normal vectors at the mirror cell centers of a heliostat. The mirror region of a heliostat shaded and blocked by neighboring heliostats, and the shading of the heliostat mirror by the solar tower, are all computed on a flat-ground-plane platform, i.e., by projecting the mirror contours and the envelope cylinder of the tower onto the horizontal ground plane along the sun-beam incident direction or along the reflection directions. If the shading projection of a sampled mirror point of the current heliostat is inside the shade cast of a neighbor heliostat or in the shade cast of the tower, this mirror point is shaded from the incident sun beam. A code based on this new ray tracing method for the 1MW Badaling solar tower power plant in Beijing has been developed using MATLAB. There are 100 azimuth-elevation tracking heliostats in the solar field, and the tower is 118 meters high. The mirror surface of the heliostats is 10m wide and 10m long; it is composed of 8 rows × 8 columns of square mirror facets, and each mirror facet has the size of 1.25m×1.25m. The code was also verified by two sets of sun-beam concentrating experiments with the heliostat field on June 14, 2015. One set of optical experiments was conducted between some typical heliostats to verify the code's shading & blocking computation, since this is the most complicated, time-consuming, and important part of the optical computation. The other set of solar concentrating tests was carried out with the field-center heliostat (No. 78) to verify the simulated solar flux images on the white target region of the northern wall of the tower. The target center is 74.5 m above the ground plane.
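At its core, backward ray tracing evaluates the flux at a receiver point by summing, over the sampled mirror cells that see the point, the radiance of the effective solar cone in the direction linking cell and point, weighted by the solid angle each cell subtends. The sketch below is a heavily simplified illustration of that sum for a single flat heliostat, with no shading, blocking, or tower geometry, and with invented cell layout, reflectance, and cone width; it is not the Badaling plant code.

```python
import numpy as np

DNI = 900.0          # direct normal irradiance, W/m^2 (illustrative)
REFLECTANCE = 0.9
SIGMA = 0.004        # effective solar cone width, radians (all optical errors lumped here)

def unit(v):
    return v / np.linalg.norm(v)

def cone_brightness(theta):
    """Circular Gaussian brightness of the effective solar cone, per steradian."""
    return np.exp(-theta**2 / (2.0 * SIGMA**2)) / (2.0 * np.pi * SIGMA**2)

def flux_at_point(receiver_pt, receiver_normal, cells, normals, cell_area, sun_dir):
    """Backward ray trace: receiver point -> mirror cells -> effective solar cone."""
    flux = 0.0
    for c, n in zip(cells, normals):
        to_cell = c - receiver_pt
        dist = np.linalg.norm(to_cell)
        to_cell = to_cell / dist
        reflected = unit(sun_dir - 2.0 * np.dot(sun_dir, n) * n)   # ideal specular reflection
        # Deviation between the reflected beam and the direction cell -> receiver point.
        theta = np.arccos(np.clip(np.dot(reflected, -to_cell), -1.0, 1.0))
        cos_recv = max(0.0, np.dot(receiver_normal, to_cell))
        cos_cell = max(0.0, np.dot(n, -to_cell))
        flux += DNI * REFLECTANCE * cone_brightness(theta) * cos_recv * cell_area * cos_cell / dist**2
    return flux

# Toy geometry: receiver point at the origin facing +x, a 1.8 m x 1.8 m flat heliostat 100 m away.
receiver_pt = np.zeros(3)
receiver_normal = np.array([1.0, 0.0, 0.0])
sun_dir = unit(np.array([0.3, 0.1, -1.0]))                      # propagation direction of sunlight

mirror_center = np.array([100.0, 0.0, 0.0])
aim_normal = unit(unit(receiver_pt - mirror_center) - sun_dir)  # aims the mirror center at the receiver

coords = (np.arange(8) + 0.5) * (1.8 / 8) - 0.9                 # cell-center offsets across the mirror
ys, zs = np.meshgrid(coords, coords)
cells = mirror_center + np.stack([np.zeros(ys.size), ys.ravel(), zs.ravel()], axis=1)
normals = np.tile(aim_normal, (cells.shape[0], 1))
cell_area = (1.8 / 8) ** 2

flux = flux_at_point(receiver_pt, receiver_normal, cells, normals, cell_area, sun_dir)
print(f"simulated flux at the receiver point: {flux:.0f} W/m^2")
```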
Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.
2006-01-01
Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644
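A two-point LOD score of the kind these systems compute in bulk compares the likelihood of the observed meioses at a candidate recombination fraction with the likelihood under free recombination (theta = 0.5). For the simple fully informative, phase-known case the sketch below shows the calculation directly; real pedigrees with missing data require the full multipoint likelihood machinery parallelized by SUPERLINK-ONLINE, and the counts used here are invented.

```python
import numpy as np

def lod_score(recombinants, meioses, theta):
    """Two-point LOD for phase-known, fully informative meioses (illustrative case)."""
    non_recombinants = meioses - recombinants
    log_l_theta = recombinants * np.log10(theta) + non_recombinants * np.log10(1.0 - theta)
    log_l_null = meioses * np.log10(0.5)           # likelihood under free recombination
    return log_l_theta - log_l_null

# Invented data: 2 recombinants observed in 20 informative meioses.
thetas = np.linspace(0.01, 0.5, 50)
scores = lod_score(2, 20, thetas)
best = thetas[np.argmax(scores)]
print(f"max LOD = {scores.max():.2f} at theta = {best:.2f}")
```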
User-Centered Computer Aided Language Learning
ERIC Educational Resources Information Center
Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.
2006-01-01
In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…
CFD Modeling Activities at the NASA Stennis Space Center
NASA Technical Reports Server (NTRS)
Allgood, Daniel
2007-01-01
A viewgraph presentation on NASA Stennis Space Center's Computational Fluid Dynamics (CFD) Modeling activities is shown. The topics include: 1) Overview of NASA Stennis Space Center; 2) Role of Computational Modeling at NASA-SSC; 3) Computational Modeling Tools and Resources; and 4) CFD Modeling Applications.
Mathematics and Computer Science | Argonne National Laboratory
Associated centers and programs include: Genomics and Systems Biology; LCRC (Laboratory Computing Resource Center); MCSG (Midwest Center for Structural Genomics); NAISE (Northwestern-Argonne Institute of Science & Engineering); and SBC (Structural Biology Center).
Computer Center Harris 1600 Operator’s Guide.
1982-06-01
Computer Center Harris 1600 Operator's Guide, by David V. Sommer and Sharon E. Good, David W. Taylor Naval Ship Research and Development Center, report CMLD-82-15, June 1982. Approved for public release; distribution unlimited.
The Russian effort in establishing large atomic and molecular databases
NASA Astrophysics Data System (ADS)
Presnyakov, Leonid P.
1998-07-01
The database activities in Russia have been developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for the general atom or ion, and ii) development of numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from the threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for the calculation of level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at the centers above are reviewed. Plans for future developments jointly with international collaborations are discussed.
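For a transparent plasma, line intensities follow from the level populations, which in the simplest coronal-type picture balance collisional excitation by electrons against spontaneous radiative decay. The sketch below solves that two-level statistical-equilibrium balance; the atomic constants are placeholder values, not data from the programs described above.

```python
import numpy as np

# Placeholder atomic data for a single transition (not from a real database).
A_UL = 1.0e8        # spontaneous decay rate, s^-1
E_UL_EV = 10.0      # transition energy, eV
G_L, G_U = 1, 3     # statistical weights
Q_UL = 1.0e-8       # collisional de-excitation rate coefficient, cm^3 s^-1 (assumed constant)

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K

def upper_to_lower_ratio(n_e, t_e):
    """Two-level statistical equilibrium: collisional excitation vs. decay plus de-excitation."""
    q_lu = Q_UL * (G_U / G_L) * np.exp(-E_UL_EV / (K_B_EV * t_e))   # detailed balance
    return n_e * q_lu / (A_UL + n_e * Q_UL)

# Relative line emissivity per ion is proportional to n_u * A_ul * E_ul.
for n_e in (1e8, 1e10, 1e12):                      # electron densities, cm^-3
    ratio = upper_to_lower_ratio(n_e, t_e=2.0e5)   # electron temperature, K
    print(f"n_e = {n_e:.0e} cm^-3 -> n_u/n_l = {ratio:.3e}")
```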
González-Navarrete, Patricio; Schlangen, Maria; Wu, Xiao-Nan; Schwarz, Helmut
2016-02-24
The ion/molecule reactions of molybdenum and tungsten dioxide cations with ethanol have been studied by Fourier transform ion-cyclotron resonance mass spectrometry (FT-ICR MS) and density functional theory (DFT) calculations. Dehydration of ethanol has been found to be the dominant reaction channel, while generation of the ethyl cation corresponds to a minor product. Clearly, the reactions are mainly governed by the Lewis acidity of the metal center. Computational results, together with isotopic labeling experiments, show that the dehydration of ethanol can proceed either through a conventional concerted [1,2]-elimination mechanism or a step-wise process; the latter occurs via a hydroxyethoxy intermediate. Formation of C2H5(+) takes place by transfer of OH(-) from ethanol to the metal center of MO2(+). The molybdenum and tungsten dioxide cations exhibit comparable reactivities toward ethanol, and this is reflected in similar reaction rate constants and branching ratios. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Changing the batch system in a Tier 1 computing center: why and how
NASA Astrophysics Data System (ADS)
Chierici, Andrea; Dal Pra, Stefano
2014-06-01
At the Italian Tier-1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model and by the desire to avoid vendor lock-in. We performed a technology tracking exercise and, among many possible solutions, chose to evaluate Grid Engine as an alternative, because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand whether it can meet the requirements of a Tier-1 center compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: many production software components (above all, accounting and monitoring) rely on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to sustain. We will provide guidelines for understanding how difficult this operation can be and how long the change may take.
77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-12
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service TM . ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...
RB-ARD: A proof of concept rule-based abort
NASA Technical Reports Server (NTRS)
Smith, Richard; Marinuzzi, John
1987-01-01
The Abort Region Determinator (ARD) is a console program in the space shuttle mission control center. During shuttle ascent, the Flight Dynamics Officer (FDO) uses the ARD to determine the possible abort modes and make abort calls for the crew. The goal of the Rule-Based Abort Region Determinator (RB-ARD) project was to test the concept of providing an onboard ARD for the shuttle or an automated ARD for the mission control center (MCC). A proof of concept rule-based system was developed on an LMI Lambda computer using PICON, a knowledge-based system shell. Knowledge derived from documented flight rules and ARD operation procedures was coded in PICON rules. These rules, in conjunction with modules of conventional code, enable the RB-ARD to carry out key parts of the ARD task. Current capabilities of the RB-ARD include: continuous updating of the available abort modes, recognition of a limited number of main engine faults, and recommendation of safing actions. Safing actions recommended by the RB-ARD concern the Space Shuttle Main Engine (SSME) limit shutdown system and powerdown of the SSME AC buses.
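The rule-based approach encodes each flight-rule-like condition as an independent if-then rule evaluated against current telemetry, so the set of available abort modes is re-derived continuously rather than computed procedurally. The sketch below mimics that style of evaluation in plain Python; the mode names echo shuttle-era abort modes, but every threshold and condition here is invented for illustration and bears no relation to actual flight rules or the PICON rule base.

```python
# Each rule maps telemetry to a conclusion; a tiny forward evaluation gathers all rules that fire.
RULES = [
    # (name, condition over telemetry, conclusion added when the condition holds)
    ("rtls_window",  lambda t: t["met_s"] < 240,                     "RTLS available"),
    ("tal_window",   lambda t: 120 <= t["met_s"] <= 480,             "TAL available"),
    ("ato_window",   lambda t: t["velocity_mps"] > 5500,             "ATO available"),
    ("engine_fault", lambda t: t["ssme_chamber_pressure_pct"] < 90,  "suspect SSME fault"),
    ("safing",       lambda t: t["ssme_chamber_pressure_pct"] < 90,  "review limit-shutdown and bus powerdown"),
]

def evaluate(telemetry):
    """Return every conclusion whose rule condition is satisfied by the telemetry snapshot."""
    return [conclusion for _, condition, conclusion in RULES if condition(telemetry)]

# Invented telemetry snapshot (illustrative values only).
snapshot = {"met_s": 200, "velocity_mps": 3100, "ssme_chamber_pressure_pct": 87}
for conclusion in evaluate(snapshot):
    print(conclusion)
```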
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
Computer Maintenance Operations Center (CMOC), additional computer support equipment ...
Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P.; Martin, D.; Drugan, C.
2010-11-23
This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at this leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems, even more expeditiously, through breakthrough science in the years to come.
Kavlock, Robert; Dix, David
2010-02-01
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models. The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity (DSSTox) Database Network, and other U.S. EPA websites. While initially focused on improving the hazard identification process, the CTRP is placing increasing emphasis on using high-throughput bioactivity profiling data in systems modeling to support quantitative risk assessments, and on developing complementary higher-throughput exposure models. This integrated approach will enable analysis of life-stage susceptibility, and understanding of the exposures, pathways, and key events by which chemicals exert their toxicity in developing systems (e.g., endocrine-related pathways). The CTRP will be a critical component in next-generation risk assessments utilizing quantitative high-throughput data and providing a much higher capacity for assessing chemical toxicity than is currently available.
Ren, Hongjiang; Huang, Xinwei; Li, Shuna
2017-01-01
The V-centered bicapped hexagonal antiprism structure (A), as the most stable geometry of the cationic V15+ cluster, is determined by using infrared multiple photon dissociation (IR-MPD) in combination with density functional theory computations. It is found that the A structure can be stabilized by 18 delocalized 3c-2e σ-bonds on the outer V3 triangles of the bicapped hexagonal antiprism surface and 12 delocalized 4c-2e σ-bonds on the inner trigonal pyramidal V4 moiety; these features are related to the strong p-d hybridization of the cluster. The total magnetic moment of the cluster is predicted to be 2.0 µB, arising mainly from the central vanadium atom. PMID:28665337
30 CFR 75.825 - Power centers.
Code of Federal Regulations, 2014 CFR
2014-07-01
....825 Power centers. (a) Main disconnecting switch. The power center supplying high voltage power to the continuous mining machine must be equipped with a main disconnecting switch that, when in the open position... the main disconnecting switch required in paragraph (a) of this section, the power center must be...
30 CFR 75.825 - Power centers.
Code of Federal Regulations, 2012 CFR
2012-07-01
....825 Power centers. (a) Main disconnecting switch. The power center supplying high voltage power to the continuous mining machine must be equipped with a main disconnecting switch that, when in the open position... the main disconnecting switch required in paragraph (a) of this section, the power center must be...
30 CFR 75.825 - Power centers.
Code of Federal Regulations, 2013 CFR
2013-07-01
....825 Power centers. (a) Main disconnecting switch. The power center supplying high voltage power to the continuous mining machine must be equipped with a main disconnecting switch that, when in the open position... the main disconnecting switch required in paragraph (a) of this section, the power center must be...
ERIC Educational Resources Information Center
Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung
2013-01-01
Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…
Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers ...
Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
Center for Computing Research Summer Research Proceedings 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Andrew Michael; Parks, Michael L.
2015-12-18
The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each year, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).
Center for modeling of turbulence and transition: Research briefs, 1993
NASA Technical Reports Server (NTRS)
Liou, William W. (Editor)
1994-01-01
This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from June 1992 to July 1993. It is also an annual report to the Institute for Computational Mechanics in Propulsion located at Ohio Aerospace Institute and NASA Lewis Research Center. The main objectives of the research activities at CMOTT are to develop, validate, and implement turbulence and transition models for flows of interest in propulsion systems. Currently, our research covers eddy viscosity one- and two-equation models, Reynolds-stress algebraic equation models, Reynolds-stress transport equation models, nonequilibrium multiple-scale models, bypass transition models, joint scalar probability density function models, and Renormalization Group Theory and Direct Interaction Approximation methods. Some numerical simulations (LES and DNS) have also been carried out to support the development of turbulence modeling. Last year was CMOTT's third year in operation. During this period, in addition to the above mentioned research, CMOTT has also hosted the following programs: an eighteen-hour short course on 'Turbulence--Fundamentals and Computational Modeling (Part I)' given by CMOTT at the NASA Lewis Research Center; a productive summer visitor research program that has generated many encouraging results; collaborative programs with industry customers to help improve their turbulent flow calculations for propulsion system designs; a biweekly CMOTT seminar series with speakers from within and without the NASA Lewis Research Center including foreign speakers. In addition, CMOTT members have been actively involved in the national and international turbulence research activities. The current CMOTT roster and organization are listed in Appendix A. Listed in Appendix B are the abstracts of the biweekly CMOTT seminar. Appendix C lists the papers contributed by CMOTT members.
Sloshing in the Liquid Hydrogen and Liquid Oxygen Propellant Tanks After Main Engine Cut Off
NASA Technical Reports Server (NTRS)
Kim, Sura; West, Jeff
2011-01-01
NASA Marshall Space Flight Center is designing and developing the Main Propulsion System (MPS) for Ares launch vehicles. Propellant sloshing in the liquid hydrogen (LH2) and liquid oxygen (LO2) propellant tanks after Main Engine Cut Off (MECO) was modeled using the Volume of Fluid (VOF) module of the computational fluid dynamics code, CFD-ACE+. The present simulation shows that there are substantial sloshing side forces acting on the LH2 tank during the deceleration of the vehicle after MECO. The LH2 tank features a side wall drain pipe. The side loads result from the residual propellant mass motion in the LH2 tank, which is initiated when flow into the drain pipe stops at MECO. The simulations show that the radial force on the LH2 tank wall is less than 50 lbf, and the radial moment calculated about the center of gravity of the vehicle is predicted to be as high as 300 lbf-ft. The LO2 tank features a bottom dome drain system and is equipped with sloshing baffles. The remaining LO2 in the tank slowly forms a liquid column along the centerline of the tank under the zero-gravity environment. The radial force on the LO2 tank wall is predicted to be less than 100 lbf. The radial moment calculated about the center of gravity of the vehicle is predicted to be as high as 4500 lbf-ft just before MECO, dropping to near zero after propellant draining stops completely.
Preparation of Morpheus Vehicle for Vacuum Environment Testing
NASA Technical Reports Server (NTRS)
Sandoval, Armando
2016-01-01
The main objective for this summer 2016 tour was to prepare the Morpheus vehicle for its upcoming test inside Plum Brook's vacuum chamber at NASA John H. Glenn Research Center. My contributions towards this project were mostly analytical in nature, providing numerical models to validate test data, generating computer-aided analyses for the structural support of the vehicle's engine, and designing a vacuum can to protect the high-speed camera used during testing. Furthermore, I was also tasked with designing a tank toroidal spray bar system.
2012-02-17
Industrial Area Construction: Located 5 miles south of Launch Complex 39, construction of the main buildings -- Operations and Checkout Building, Headquarters Building, and Central Instrumentation Facility – began in 1963. In 1992, the Space Station Processing Facility was designed and constructed for the pre-launch processing of International Space Station hardware that was flown on the space shuttle. Along with other facilities, the industrial area provides spacecraft assembly and checkout, crew training, computer and instrumentation equipment, hardware preflight testing and preparations, as well as administrative offices. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
Definition of ground test for verification of large space structure control
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.
1984-01-01
Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is aimed at achieving, at Marshall Space Flight Center, a successful ground test experiment of a large space structure. A simplified planar model for ground test verification was developed. The elimination of the uncontrollable rigid body modes from that model was also examined. Also studied were the hardware/software aspects of computation speed.
Spacelab data analysis and interactive control study
NASA Technical Reports Server (NTRS)
Tarbell, T. D.; Drake, J. F.
1980-01-01
The study consisted of two main tasks, a series of interviews of Spacelab users and a survey of data processing and display equipment. Findings from the user interviews on questions of interactive control, downlink data formats, and Spacelab computer software development are presented. Equipment for quick look processing and display of scientific data in the Spacelab Payload Operations Control Center (POCC) was surveyed. Results of this survey effort are discussed in detail, along with recommendations for NASA development of several specific display systems which meet common requirements of many Spacelab experiments.
Flowfield visualization for SSME hot gas manifold
NASA Technical Reports Server (NTRS)
Roger, Robert P.
1988-01-01
The objective of this research, as defined by NASA-Marshall Space Flight Center, was two-fold: (1) to numerically simulate viscous subsonic flow in a proposed elliptical two-duct version of the fuel-side Hot Gas Manifold (HGM) for the Space Shuttle Main Engine (SSME), and (2) to provide analytical support for SSME-related numerical computational experiments being performed by the Computational Fluid Dynamics staff in the Aerophysics Division of the Structures and Dynamics Laboratory at NASA-MSFC. Numerical HGM calculations were performed to complement water flow and air flow visualization experiments in two-duct geometries conducted at NASA-MSFC and Rocketdyne. In addition, code modification and improvement efforts were to strengthen the CFD capabilities of NASA-MSFC for producing reliable predictions of flow environments within the SSME.
A test matrix sequencer for research test facility automation
NASA Technical Reports Server (NTRS)
Mccartney, Timothy P.; Emery, Edward F.
1990-01-01
The hardware and software configuration of a Test Matrix Sequencer, a general-purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu-driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
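As a minimal sketch of the sequencing idea (not the NASA Lewis implementation; the I/O calls are hypothetical placeholders for whatever hardware interface a facility actually uses), stepping through a test matrix of set points and contact closures might look like:

```python
# Toy test-matrix sequencer: write an analog set point and toggle a digital
# contact closure for each step, holding each condition for its dwell time.
import time

test_matrix = [
    # (controller set point, dwell time in seconds)
    (10.0, 5.0),
    (25.0, 5.0),
    (40.0, 10.0),
]

def write_analog_setpoint(channel: int, value: float) -> None:
    print(f"analog ch{channel} <- {value}")   # placeholder for a real DAC output call

def set_contact_closure(channel: int, closed: bool) -> None:
    print(f"digital ch{channel} <- {'closed' if closed else 'open'}")  # placeholder

def run_sequence(matrix, analog_ch=0, digital_ch=0):
    for setpoint, dwell in matrix:
        write_analog_setpoint(analog_ch, setpoint)   # command the controller
        set_contact_closure(digital_ch, True)        # tell the data system to record
        time.sleep(dwell)                            # hold the test condition
        set_contact_closure(digital_ch, False)

run_sequence(test_matrix)
```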
Identification and modeling of the electrohydraulic systems of the main gun of a main battle tank
NASA Astrophysics Data System (ADS)
Campos, Luiz C. A.; Menegaldo, Luciano L.
2012-11-01
The black-box mathematical models of the electrohydraulic systems responsible for driving the two degrees of freedom (elevation and azimuth) of the main gun of a main battle tank (MBT) were identified. Such systems respond to the gunner's inputs while acquiring and tracking targets. Identification experiments were designed to collect simultaneous data from two inertial measurement units (IMU) installed at the gunner's handle (input) and at the center of rotation of the turret (output), for the identification of the azimuth system. For the elevation system, IMUs were installed at the gunner's handle (input) and at the breech of the gun (output). Linear accelerations and angular rates were collected for both input and output. Several black-box model architectures were investigated. As a result, a second-order nonlinear autoregressive model with exogenous variables (NARX) and a fourth-order nonlinear finite impulse response (NFIR) model were shown to best fit the experimental data, with low computational cost. The derived models are being employed in broader research aiming to reproduce such systems in a laboratory virtual main gun simulator.
Niv, Yaron; Itskoviz, David; Cohen, Michal; Hendel, Hagit; Bar-Giora, Yonit; Berkov, Evgeny; Weisbord, Irit; Leviron, Yifat; Isasschar, Assaf; Ganor, Arian
Failure modes and effects analysis (FMEA) is a tool used to identify potential risks in health care processes. We used the FMEA tool for improving the process of consultation in an academic medical center. A team of 10 staff members (5 physicians, 2 quality experts, 2 organizational consultants, and 1 nurse) was established. The steps of the consultation process, from ordering to delivery, were mapped out. Failure modes were assessed for likelihood of occurrence, detection, and severity. A risk priority number (RPN) was calculated. An interventional plan was designed according to the highest RPNs. Thereafter, we compared the percentage of completed computer-based documented consultations before and after the intervention. The team identified 3 main categories of failure modes that reached the highest RPNs: initiation of consultation by a junior staff physician without senior approval, failure to document the consultation in the computerized patient registry, and asking for consultation on the telephone. An interventional plan was designed, including meetings to update knowledge of the consultation request process, stressing the importance of approval by a senior physician, training sessions for closing requests in the patient file, and reporting of telephone requests. The number of electronically documented consultation results and recommendations increased significantly (75%) after the intervention. FMEA is an important and efficient tool for improving the consultation process in an academic medical center.
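The RPN arithmetic behind this ranking is a simple product of the three scores. The sketch below illustrates it with invented failure modes and scores, assuming the conventional 1-10 scales:

```python
# Minimal RPN calculation: RPN = severity x occurrence x detection.
# The failure modes and scores below are invented examples, not the study's data.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # impact if the failure occurs (1-10)
    occurrence: int   # likelihood of the failure (1-10)
    detection: int    # 10 = very hard to detect before harm occurs

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("consultation requested by telephone only", 6, 7, 8),
    FailureMode("result not documented in patient registry", 8, 6, 7),
    FailureMode("junior physician orders without senior approval", 5, 5, 6),
]

# Rank failure modes so the interventional plan targets the highest RPNs first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")
```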
78 FR 45513 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... individual's privacy, and would result in additional delay in determining eligibility and, if applicable, the... Defense. NOTICE OF A COMPUTER MATCHING PROGRAM AMONG THE DEFENSE MANPOWER DATA CENTER, THE DEPARTMENT OF...
20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...
20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
A convergent model for distributed processing of Big Sensor Data in urban engineering networks
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.
2017-01-01
The development and study of a convergent model combining grid, cloud, fog, and mobile computing for analytical Big Sensor Data processing are reviewed. The model is intended for building monitoring systems for spatially distributed objects of urban engineering networks and processes. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for processing and aggregating sensor data at network nodes and/or industrial controllers; program agents are deployed to these nodes to perform the primary processing and data aggregation tasks. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture: a main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU cards supporting the Compute Unified Device Architecture at the third level. The mobile computing model is applied to visualize the results of the analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional store for data mining and knowledge extraction.
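As a toy illustration of the fog-computing tier described above (the field names and the forwarding call are assumptions for illustration), a node-level agent might aggregate a window of raw sensor samples into an integral indicator before passing it up to the grid/cloud tier:

```python
# Fog-node sketch: compress a window of raw sensor samples into an aggregate
# and forward only the aggregate toward the data center.
from statistics import mean
from typing import Iterable

def aggregate_readings(readings: Iterable[float]) -> dict:
    """Primary processing at the fog node: summarize a window of raw samples."""
    values = list(readings)
    return {
        "count": len(values),
        "mean": mean(values),
        "min": min(values),
        "max": max(values),
    }

def forward_to_datacenter(indicator: dict) -> None:
    # Placeholder: a real deployment would publish this to the grid/cloud tier.
    print("sending integral indicator:", indicator)

window = [21.4, 21.7, 22.1, 35.0, 21.9]   # e.g. pipeline pressure samples
forward_to_datacenter(aggregate_readings(window))
```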
Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A
2016-01-01
The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images, and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
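To give a flavor of the batch-distribution step (a hedged sketch, not code from the DAX package; the paths, resource request, and processing command are placeholders), a middleware layer might render one PBS job per imaging session like this:

```python
# Render a portable batch scripting (PBS) job for one imaging session.
# Everything below the PBS directives is a placeholder processing command.
PBS_TEMPLATE = """#!/bin/bash
#PBS -N {job_name}
#PBS -l nodes=1:ppn=1,walltime={walltime}
#PBS -o {log_dir}/{job_name}.out
#PBS -e {log_dir}/{job_name}.err

process_scan --input {scan_path} --output {out_dir}
"""

def render_job(session_id: str, scan_path: str) -> str:
    """Fill the PBS template for a single session pulled from the image database."""
    return PBS_TEMPLATE.format(
        job_name=f"proc_{session_id}",
        walltime="02:00:00",
        log_dir="/scratch/logs",
        scan_path=scan_path,
        out_dir=f"/scratch/results/{session_id}",
    )

print(render_job("SESS0001", "/archive/SESS0001/scan1.nii.gz"))
```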
MERCATOR: Methods and Realization for Control of the Attitude and the Orbit of spacecraft
NASA Technical Reports Server (NTRS)
Tavernier, Gilles; Campan, Genevieve
1993-01-01
Since 1974, CNES has been involved in geostationary positioning. Among the different entities participating in operations and their preparation, the Flight Dynamics Center (FDC) is in charge of performing the following tasks: orbit determination; attitude determination; computation, monitoring, and calibration of orbit maneuvers; computation, monitoring, and calibration of attitude maneuvers; and operational predictions. In order to fulfill this mission, the FDC receives telemetry from the satellite and localization measurements from ground stations (e.g., CNES, NASA, INTELSAT). These data are processed by space dynamics programs integrated in the MERCATOR system, which runs on SUN workstations (UNIX O.S.). The main features of MERCATOR are redundancy, modularity, and flexibility: an efficient, flexible, and user-friendly man-machine interface, and four identical SUN workstations redundantly linked on an Ethernet network. Each workstation can perform all the tasks from data acquisition to dissemination of computation results through a video network. A team of four engineers can handle the space mechanics aspects of a complete geostationary positioning from the injection into a transfer orbit to the final maneuvers in the station-keeping window. MERCATOR has been or is to be used for operations related to more than ten geostationary positionings. Initially developed for geostationary satellites, MERCATOR's methodology was also used for satellite control centers and can be applied to a wide range of satellites and to future manned missions.
Theoretical Comparison Between Candidates for Dark Matter
NASA Astrophysics Data System (ADS)
McKeough, James; Hira, Ajit; Valdez, Alexandra
2017-01-01
Since the generally-accepted view among astrophysicists is that the matter component of the universe is mostly dark matter, the search for dark matter particles continues unabated. The Large Underground Xenon (LUX) improvements, aided by advanced computer simulations at the U.S. Department of Energy's Lawrence Berkeley National Laboratory's (Berkeley Lab) National Energy Research Scientific Computing Center (NERSC) and Brown University's Center for Computation and Visualization (CCV), can potentially eliminate some particle models of dark matter. Generally, the proposed candidates can be put in three categories: baryonic dark matter, hot dark matter, and cold dark matter. The Lightest Supersymmetric Particle (LSP) of supersymmetric models is a dark matter candidate, and is classified as a Weakly Interacting Massive Particle (WIMP). Similar to the cosmic microwave background radiation left over from the Big Bang, there is a background of low-energy neutrinos in our Universe. According to some researchers, these may be the explanation for the dark matter. One advantage of the neutrino model is that neutrinos are known to exist. Dark matter made from neutrinos is termed "hot dark matter". We formulate a novel empirical function for the average density profile of cosmic voids, identified via the watershed technique in ΛCDM N-body simulations. This function adequately treats both void size and redshift, and describes the scale radius and the central density of voids. We started with a five-parameter model. Our research is mainly on LSP and Neutrino models.
Energy 101: Energy Efficient Data Centers
None
2018-04-16
Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components; up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.
11. SAC command center, main operations area, underground structure, building ...
11. SAC command center, main operations area, underground structure, building 501, undated - Offutt Air Force Base, Strategic Air Command Headquarters & Command Center, Command Center, 901 SAC Boulevard, Bellevue, Sarpy County, NE
9. SAC command center, main operations area, underground structure, building ...
9. SAC command center, main operations area, underground structure, building 501, undated - Offutt Air Force Base, Strategic Air Command Headquarters & Command Center, Command Center, 901 SAC Boulevard, Bellevue, Sarpy County, NE
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
SSRL Emergency Response Shore Tool
NASA Technical Reports Server (NTRS)
Mah, Robert W.; Papasin, Richard; McIntosh, Dawn M.; Denham, Douglas; Jorgensen, Charles; Betts, Bradley J.; Del Mundo, Rommel
2006-01-01
The SSRL Emergency Response Shore Tool (wherein SSRL signifies Smart Systems Research Laboratory ) is a computer program within a system of communication and mobile-computing software and hardware being developed to increase the situational awareness of first responders at building collapses. This program is intended for use mainly in planning and constructing shores to stabilize partially collapsed structures. The program consists of client and server components, runs in the Windows operating system on commercial off-the-shelf portable computers, and can utilize such additional hardware as digital cameras and Global Positioning System devices. A first responder can enter directly, into a portable computer running this program, the dimensions of a required shore. The shore dimensions, plus an optional digital photograph of the shore site, can then be uploaded via a wireless network to a server. Once on the server, the shore report is time-stamped and made available on similarly equipped portable computers carried by other first responders, including shore wood cutters and an incident commander. The staff in a command center can use the shore reports and photographs to monitor progress and to consult with structural engineers to assess whether a building is in imminent danger of further collapse.
Atmospheric and oceanic excitation of decadal-scale Earth orientation variations
NASA Astrophysics Data System (ADS)
Gross, Richard S.; Fukumori, Ichiro; Menemenlis, Dimitris
2005-09-01
The contribution of atmospheric wind and surface pressure and oceanic current and bottom pressure variations during 1949-2002 to exciting changes in the Earth's orientation on decadal timescales is investigated using an atmospheric angular momentum series computed from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis project and an oceanic angular momentum series computed from a near-global ocean model that was forced by surface fluxes from the NCEP/NCAR reanalysis project. Not surprisingly, since decadal-scale variations in the length of day are caused mainly by interactions between the mantle and core, the effect of the atmosphere and oceans is found to be only about 14% of that observed. More surprisingly, it is found that the effect of atmospheric and oceanic processes on decadal-scale changes in polar motion is also only about 20% (x component) and 38% (y component) of that observed. Therefore redistribution of mass within the atmosphere and oceans does not appear to be the main cause of the Markowitz wobble. It is also found that on timescales between 10 days and 4 years the atmospheric and oceanic angular momentum series used here have very little skill in explaining Earth orientation variations before the mid to late 1970s. This is attributed to errors in both the Earth orientation observations prior to 1976 when measurements from the accurate space-geodetic techniques became available and to errors in the modeled atmospheric fields prior to 1979 when the satellite era of global weather observing systems began.
An investigation of the effects of touchpad location within a notebook computer.
Kelaher, D; Nay, T; Lawrence, B; Lamar, S; Sommerich, C M
2001-02-01
This study evaluated effects of the location of a notebook computer's integrated touchpad, complementing previous work in the area of desktop mouse location effects. Most often, integrated touchpads are located in the computer's wrist rest and centered on the keyboard. This study characterized effects of this bottom center location and four alternatives (top center, top right, right side, and bottom right) upon upper extremity posture, discomfort, preference, and performance. Touchpad location was found to significantly impact each of those measures. The top center location was particularly poor, in that it elicited more ulnar deviation, more shoulder flexion, more discomfort, and perceptions of performance impedance. In general, the bottom center, bottom right, and right side locations fared better, though subjects' wrists were more extended in the bottom locations. Suggestions for notebook computer design are provided.
FY 72 Computer Utilization at the Transportation Systems Center
DOT National Transportation Integrated Search
1972-08-01
The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...
Research in Hypersonic Airbreathing Propulsion at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Kumar, Ajay; Drummond, J. Philip; McClinton, Charles R.; Hunt, James L.
2001-01-01
The NASA Langley Research Center has been conducting research for over four decades to develop technology for an airbreathing-propelled vehicle. Several other organizations within the United States have also been involved in this endeavor. Even though significant progress has been made over this period, a hypersonic airbreathing vehicle has not yet been realized due to low technology maturity. One of the major reasons for the slow progress in technology development has been the low level and cyclic nature of funding. The paper provides a brief historical overview of research in hypersonic airbreathing technology and then discusses current efforts at NASA Langley to develop various analytical, computational, and experimental design tools and their application in the development of future hypersonic airbreathing vehicles. The main focus of this paper is on the hypersonic airbreathing propulsion technology.
Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, engineering, and operations. Kevin is interested in data center design and computing as well as data center integration and optimization. Professional Experience: HPC oversight: program manager, project manager, center
10. SAC command center, main operations area, underground structure, building ...
10. SAC command center, main operations area, underground structure, building 501, circa 1980 - Offutt Air Force Base, Strategic Air Command Headquarters & Command Center, Command Center, 901 SAC Boulevard, Bellevue, Sarpy County, NE
12. SAC command center, main operations area, underground structure, building ...
12. SAC command center, main operations area, underground structure, building 501, circa 1960 - Offutt Air Force Base, Strategic Air Command Headquarters & Command Center, Command Center, 901 SAC Boulevard, Bellevue, Sarpy County, NE
Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.
Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice
2017-08-01
Accurate detection of diabetic retinopathy (DR) depends mainly on identification of retinal landmarks such as the optic disc and fovea. Existing methods suffer from limited accuracy and high computational complexity. To address this issue, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively uses both time- and frequency-domain information for OD localization. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. Then, using the detected OD location, the fovea center is located by signal valley analysis. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 of 1200 images (99.75%) and the fovea in 1196 of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was also carried out on nine publicly available databases. The proposed method localizes the OD and fovea quickly and accurately compared with other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
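A toy version of the time-domain part of this idea, using generic peak detection rather than the paper's actual algorithm and with placeholder thresholds, could look like:

```python
# Scan one image row as a 1D intensity profile and flag a candidate optic disc
# column at the most prominent bright peak. Window size and prominence are
# arbitrary placeholders, not the paper's parameters.
import numpy as np
from scipy.signal import find_peaks

def candidate_od_column(row_intensity):
    """Return the column index of the strongest bright peak in one scan line, or None."""
    smoothed = np.convolve(row_intensity, np.ones(15) / 15, mode="same")  # suppress noise
    peaks, props = find_peaks(smoothed, prominence=20)  # the bright OD region stands out
    if peaks.size == 0:
        return None
    return int(peaks[np.argmax(props["prominences"])])

# Example on a synthetic profile: dark background with a bright bump near column 300.
profile = np.full(600, 60.0)
profile[280:330] += 90.0
print(candidate_od_column(profile))
```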
Python in the NERSC Exascale Science Applications Program for Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack
We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of "Python purity" from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower-level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
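As a generic example of the kind of optimization such an effort targets (my illustration, not one of the NESAP codes), moving an interpreted Python loop into vectorized NumPy keeps the arithmetic in compiled code, which is what matters on a many-core processor like KNL:

```python
# Same arithmetic two ways: an interpreted element-by-element loop versus a
# vectorized NumPy expression that runs in compiled, SIMD-friendly code.
import numpy as np

def centroid_loop(x, w):
    """Weighted centroid computed element by element in pure Python (slow)."""
    num = 0.0
    den = 0.0
    for xi, wi in zip(x, w):
        num += xi * wi
        den += wi
    return num / den

def centroid_vectorized(x, w):
    """Weighted centroid expressed as NumPy array operations (fast)."""
    x = np.asarray(x)
    w = np.asarray(w)
    return float(np.dot(x, w) / w.sum())

x = np.random.rand(1_000_000)
w = np.random.rand(1_000_000)
assert abs(centroid_loop(x, w) - centroid_vectorized(x, w)) < 1e-6
```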
Computational Plume Modeling of Conceptual ARES Vehicle Stage Tests
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Ahuja, Vineet
2007-01-01
The plume-induced environment of a conceptual ARES V vehicle stage test at the NASA Stennis Space Center (NASA-SSC) was modeled using computational fluid dynamics (CFD). A full-scale multi-element grid was generated for the NASA-SSC B-2 test stand with the ARES V stage being located in a proposed off-center forward position. The plume produced by the ARES V main power plant (cluster of five RS-68 LOX/LH2 engines) was simulated using a multi-element flow solver - CRUNCH. The primary objective of this work was to obtain a fundamental understanding of the ARES V plume and its impingement characteristics on the B-2 flame-deflector. The location, size and shape of the impingement region were quantified along with the un-cooled deflector wall pressures, temperatures and incident heating rates. Issues with the proposed tests were identified and several of these addressed using the CFD methodology. The final results of this modeling effort will provide useful data and boundary conditions in upcoming engineering studies that are directed towards determining the required facility modifications for ensuring safe and reliable stage testing in support of the Constellation Program.
The role of dedicated data computing centers in the age of cloud computing
NASA Astrophysics Data System (ADS)
Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr
2017-10-01
Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.
Skylab earth resources experiment package /EREP/ - Sea surface topography experiment
NASA Technical Reports Server (NTRS)
Vonbun, F. O.; Marsh, J. G.; Mcgoogan, J. T.; Leitao, C. D.; Vincent, S.; Wells, W. T.
1976-01-01
The S-193 Skylab radar altimeter was operated in a round-the-world pass on Jan. 31, 1974. The main purpose of this experiment was to test and 'measure' the variation of the sea surface topography using the Goddard Space Flight Center (GSFC) geoid model as a reference. This model is based upon 430,000 satellite and 25,000 ground gravity observations. Variations of the sea surface on the order of -40 to +60 m were observed along this pass. The 'computed' and 'measured' sea surfaces have an rms agreement on the order of 7 m. This is quite satisfactory, considering that this was the first time the sea surface has been observed directly over a distance of nearly 35,000 km and compared to a computed model. The Skylab orbit for this global pass was computed using the Goddard Earth Model (GEM 6) and S-band radar tracking data, resulting in an orbital height uncertainty of better than 5 m over one orbital period.
CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY
The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...
The Science of Computing: Virtual Memory
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1986-01-01
In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories, yet they have never kept up with demand. Only the shortsighted believe they can.
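Virtual memory answers that question by keeping only recently used pages resident in main memory. The sketch below (my illustration, not taken from the column) simulates demand paging with least-recently-used replacement to show how a small main memory can serve a larger program:

```python
# Small demand-paging simulation with least-recently-used (LRU) replacement.
from collections import OrderedDict

def simulate_lru(reference_string, frames):
    """Return the number of page faults for a page reference string and frame count."""
    resident = OrderedDict()   # page -> None, ordered by recency of use
    faults = 0
    for page in reference_string:
        if page in resident:
            resident.move_to_end(page)         # hit: mark as most recently used
            continue
        faults += 1                            # miss: page must be brought in
        if len(resident) >= frames:
            resident.popitem(last=False)       # evict the least recently used page
        resident[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(simulate_lru(refs, frames=3))  # classic textbook example: 10 faults with 3 frames
```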
Real science at the petascale.
Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V
2009-06-28
We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
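As a hypothetical example of the kind of computation such a builder lets a non-programmer express (the parameter names, conversion, and limit below are invented for illustration, not GCB syntax), consider deriving a display value from two raw telemetry angles:

```python
# Toy "computation" combining raw telemetry, a unit-conversion macro, and a limit check.
import math

DEG_PER_RAD = 180.0 / math.pi   # unit conversion "macro"

def gimbal_deflection_deg(pitch_rad: float, yaw_rad: float) -> float:
    """Derived parameter: total gimbal deflection in degrees from two raw angles."""
    return math.hypot(pitch_rad, yaw_rad) * DEG_PER_RAD

def check_limit(value: float, limit: float = 10.5) -> str:
    return "OK" if value <= limit else "LIMIT EXCEEDED"

angle = gimbal_deflection_deg(pitch_rad=0.05, yaw_rad=0.12)
print(f"gimbal deflection = {angle:.2f} deg  [{check_limit(angle)}]")
```

A builder like the one described would generate or validate this sort of logic for the controller, reporting errors in domain terms rather than compiler messages.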
Cloudbursting - Solving the 3-body problem
NASA Astrophysics Data System (ADS)
Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.
2014-12-01
Many science projects in the future will be accomplished through collaboration among 2 or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively, with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in assessing the feasibility and identifying the obstacles, both technical and managerial, of performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.
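A highly simplified sketch of the bursting decision (the center names, capacities, and submit call are placeholders, not the prototype's actual API) might route work as follows:

```python
# Submit work locally while capacity is free, otherwise "burst" it to a partner
# center's private cloud.
from dataclasses import dataclass

@dataclass
class Center:
    name: str
    capacity: int      # schedulable worker slots
    running: int = 0

    def has_room(self) -> bool:
        return self.running < self.capacity

    def submit(self, job: str) -> None:
        self.running += 1
        print(f"{job} -> {self.name}")   # placeholder for a real provisioning call

def burst_submit(job: str, local: Center, partners: list) -> None:
    for center in [local, *partners]:
        if center.has_room():
            center.submit(job)
            return
    print(f"{job} queued: all centers at capacity")

jpl, larc, arc = Center("JPL", 2), Center("LaRC", 1), Center("ARC", 1)
for i in range(5):
    burst_submit(f"tile-{i}", local=jpl, partners=[larc, arc])
```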
NASA Technical Reports Server (NTRS)
Manderscheid, J. M.; Kaufman, A.
1985-01-01
Turbine blades for reusable space propulsion systems are subject to severe thermomechanical loading cycles that result in large inelastic strains and very short lives. These components require the use of anisotropic high-temperature alloys to meet the safety and durability requirements of such systems. To assess the effects on blade life of material anisotropy, cyclic structural analyses are being performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 alloy. The analyses are based on a typical test stand engine cycle. Stress-strain histories at the airfoil critical location are computed using the MARC nonlinear finite-element computer code. The MARC solutions are compared to cyclic response predictions from a simplified structural analysis procedure developed at the NASA Lewis Research Center.
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Sarkar, S.
1993-01-01
The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models could predict correctly the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.
Systematic study of Reynolds stress closure models in the computations of plane channel flows
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Sarkar, S.
1992-01-01
The roles of pressure-strain and turbulent diffusion models in the numerical calculation of turbulent plane channel flows with second-moment closure models are investigated. Three turbulent diffusion and five pressure-strain models are utilized in the computations. The main characteristics of the mean flow and the turbulent fields are compared against experimental data. All the features of the mean flow are correctly predicted by all but one of the Reynolds stress closure models. The Reynolds stress anisotropies in the log layer are predicted to varying degrees of accuracy (good to fair) by the models. None of the models could predict correctly the extent of relaxation towards isotropy in the wake region near the center of the channel. Results from the direct numerical simulation are used to further clarify this behavior of the models.
Computers and Media Centers--A Winning Combination.
ERIC Educational Resources Information Center
Graf, Nancy
1984-01-01
Profile of the computer program offered by the library/media center at Chief Joseph Junior High School in Richland, Washington, highlights program background, operator's licensing procedure, the trainer license, assistance from high school students, need for more computers, handling of software, and helpful hints. (EJS)
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
During the month of June, the Survey Research Center (SRC) at the University of Georgia designed new benefits questionnaires for the Computer Software Management and Information Center (COSMIC). As a test of their utility, these questionnaires are now used in the benefits identification process.
A Numerical Study of Anti-Vortex Film Cooling Designs at High Blowing Ratio
NASA Technical Reports Server (NTRS)
Heidmann, James D.
2008-01-01
A concept for mitigating the adverse effects of jet vorticity and liftoff at high blowing ratios for turbine film cooling flows has been developed and studied at NASA Glenn Research Center. This "anti-vortex" film cooling concept proposes the addition of two branched holes from each primary hole in order to produce a vorticity counter to the detrimental kidney vortices from the main jet. These vortices typically entrain hot freestream gas and are associated with jet separation from the turbine blade surface. The anti-vortex design is unique in that it requires only easily machinable round holes, unlike shaped film cooling holes and other advanced concepts. The anti-vortex film cooling hole concept has been modeled computationally for a single row of 30deg angled holes on a flat surface using the 3D Navier-Stokes solver Glenn-HT. A modification of the anti-vortex concept whereby the branched holes exit adjacent to the main hole has been studied computationally for blowing ratios of 1.0 and 2.0 and at density ratios of 1.0 and 2.0. This modified concept was selected because it has shown the most promise in recent experimental studies. The computational results show that the modified design improves the film cooling effectiveness relative to the round hole baseline and previous anti-vortex cases, in confirmation of the experimental studies.
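For reference, the blowing ratio and density ratio quoted above follow the standard film-cooling definitions (subscript c for the coolant jet, infinity for the freestream):

% Standard film-cooling parameter definitions used in the abstract above.
\[
  M \;=\; \frac{\rho_c \, u_c}{\rho_\infty \, u_\infty}
  \quad\text{(blowing ratio)},
  \qquad
  DR \;=\; \frac{\rho_c}{\rho_\infty}
  \quad\text{(density ratio)}.
\]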
Reinventing patient-centered computing for the twenty-first century.
Goldberg, H S; Morales, A; Gottlieb, L; Meador, L; Safran, C
2001-01-01
Despite evidence over the past decade that patients like and will use patient-centered computing systems in managing their health, patients have remained forgotten stakeholders in advances in clinical computing systems. We present a framework for patient empowerment and the technical realization of that framework in an architecture called CareLink. In an evaluation of the initial deployment of CareLink in the support of neonatal intensive care, we have demonstrated a reduction in the length of stay for very-low birthweight infants, and an improvement in family satisfaction with care delivery. With the ubiquitous adoption of the Internet into the general culture, patient-centered computing provides the opportunity to mend broken health care relationships and reconnect patients to the care delivery process. CareLink itself provides functionality to support both clinical care and research, and provides a living laboratory for the further study of patient-centered computing.
Join the Center for Applied Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd; Bremer, Timo; Van Essen, Brian
The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.
NASA Technical Reports Server (NTRS)
1982-01-01
A gallery of what might be called the "Best of HCMM" imagery is presented. These 100 images, consisting mainly of Day-VIS, Day-IR, and Night-IR scenes plus a few thermal inertia images, were selected from the collection accrued in the Missions Utilization Office (Code 902) at the Goddard Space Flight Center. They were selected because of both their pictorial quality and their information or interest content. Nearly all the images are the computer-processed and contrast-stretched products routinely produced by the image processing facility at GSFC. Several LANDSAT images, special HCMM images made by HCMM investigators, and maps round out the input.
High-Speed Observer: Automated Streak Detection in SSME Plumes
NASA Technical Reports Server (NTRS)
Rieckoff, T. J.; Covan, M.; OFarrell, J. M.
2001-01-01
A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.
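The abstract does not describe the detection algorithm itself; the minimal numpy sketch below only illustrates one simple way anomalous streaks can be flagged in a high-frame-rate image sequence, by frame differencing and thresholding. All names, the threshold, and the synthetic data are illustrative assumptions, not the High-Speed Observer implementation.

```python
import numpy as np

def detect_streaks(frames, threshold=30.0, min_pixels=50):
    """Flag frame indices whose difference from the previous frame contains
    more than min_pixels bright transient pixels (illustrative only)."""
    anomalies = []
    prev = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame.astype(np.float32)
        diff = np.abs(cur - prev)            # change since the last frame
        mask = diff > threshold              # bright transient pixels
        if mask.sum() >= min_pixels:         # enough pixels to call it a streak
            anomalies.append(i)
        prev = cur
    return anomalies

# Example with synthetic 8-bit frames (100 frames of 64x64 pixels)
frames = np.random.randint(0, 20, size=(100, 64, 64), dtype=np.uint8)
frames[57, 10:40, 30:33] = 255               # inject a fake streak in frame 57
print(detect_streaks(frames))                # -> [57, 58]: appearance and disappearance
```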
Planetary geology, stellar evolution and galactic cosmology
NASA Technical Reports Server (NTRS)
1972-01-01
Field studies of selected basalt flows in the Snake River Plain, Idaho, were made for comparative lunar and Mars geological investigations. Studies of basalt lava tubes were also initiated in Washington, Oregon, Hawaii, and northern California. The main effort in the stellar evolution research is toward the development of a computer code to calculate hydrodynamic flow coupled with radiative energy transport. Estimates of the rotation effects on a collapsing cloud indicate that the total angular momentum is the critical parameter. The study of Paschen and Balmer alpha lines of positronium atoms in the center of a galaxy is mentioned.
Cnossen, Maryse C; Huijben, Jilske A; van der Jagt, Mathieu; Volovici, Victor; van Essen, Thomas; Polinder, Suzanne; Nelson, David; Ercole, Ari; Stocchetti, Nino; Citerio, Giuseppe; Peul, Wilco C; Maas, Andrew I R; Menon, David; Steyerberg, Ewout W; Lingsma, Hester F
2017-09-06
No definitive evidence exists on how intracranial hypertension should be treated in patients with traumatic brain injury (TBI). It is therefore likely that centers and practitioners individually balance potential benefits and risks of different intracranial pressure (ICP) management strategies, resulting in practice variation. The aim of this study was to examine variation in monitoring and treatment policies for intracranial hypertension in patients with TBI. A 29-item survey on ICP monitoring and treatment was developed on the basis of literature and expert opinion, and it was pilot-tested in 16 centers. The questionnaire was sent to 68 neurotrauma centers participating in the Collaborative European Neurotrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. The survey was completed by 66 centers (97% response rate). Centers were mainly academic hospitals (n = 60, 91%) and designated level I trauma centers (n = 44, 67%). The Brain Trauma Foundation guidelines were used in 49 (74%) centers. Approximately 90% of the participants (n = 58) indicated placing an ICP monitor in patients with severe TBI and computed tomographic abnormalities. There was no consensus on other indications or on peri-insertion precautions. We found wide variation in the use of first- and second-tier treatments for elevated ICP. Approximately half of the centers were classified as using a relatively aggressive approach to ICP monitoring and treatment (n = 32, 48%), whereas the others were considered more conservative (n = 34, 52%). Substantial variation was found regarding monitoring and treatment policies in patients with TBI and intracranial hypertension. The results of this survey indicate a lack of consensus between European neurotrauma centers and provide an opportunity and necessity for comparative effectiveness research.
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
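The paper develops a full 3D geometric theory; as a much simpler illustration of the underlying idea (that the pupil-center-to-corneal-reflection vector encodes gaze), a common interpolation-based alternative maps that vector to screen coordinates through a polynomial fitted during calibration. The sketch below shows that simplified approach only; the feature set, calibration data, and function names are assumptions and not the paper's model.

```python
import numpy as np

def fit_gaze_map(pupil_glint_vectors, screen_points):
    """Least-squares fit of a 2nd-order polynomial mapping from
    pupil-minus-glint image vectors (x, y) to screen coordinates."""
    x, y = pupil_glint_vectors[:, 0], pupil_glint_vectors[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def estimate_pog(coeffs, v):
    x, y = v
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs          # estimated point-of-gaze on the screen

# Hypothetical 9-point calibration data (units arbitrary)
rng = np.random.default_rng(0)
vectors = rng.uniform(-1, 1, size=(9, 2))
screen = 500 * vectors + 400 + rng.normal(0, 1, size=(9, 2))
coeffs = fit_gaze_map(vectors, screen)
print(estimate_pog(coeffs, vectors[0]))
```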
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Prosser, William H.
2011-01-01
The Director of the NASA Engineering and Safety Center (NESC) requested an independent assessment of the anomalous gaseous hydrogen (GH2) flow incident on the Space Shuttle Program (SSP) Orbiter Vehicle (OV)-105 during the Space Transportation System (STS)-126 mission. The main propulsion system (MPS) engine #2 GH2 flow control valve (FCV) LV-57 transitioned from the low towards the high flow position without being commanded. Post-flight examination revealed that the FCV LV-57 poppet had experienced a fatigue failure that liberated a section of the poppet flange. The NESC assessment provided a peer review of the computational fluid dynamics (CFD), stress analysis, and impact testing. A probability of detection (POD) study was requested by the SSP Orbiter Project for the eddy current (EC) nondestructive evaluation (NDE) techniques that were developed to inspect the flight FCV poppets. This report contains the Appendices to the main report.
Advanced Biomedical Computing Center (ABCC) | DSITP
The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.
Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...
Community Information Centers and the Computer.
ERIC Educational Resources Information Center
Carroll, John M.; Tague, Jean M.
Two computer data bases have been developed by the Computer Science Department at the University of Western Ontario for "Information London," the local community information center. One system, called LONDON, permits Boolean searches of a file of 5,000 records describing human service agencies in the London area. The second system,…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0059] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0022] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...
Neuronavigation. Principles. Surgical technique.
Ivanov, Marcel; Vlad Ciurea, Alexandru
2009-01-01
Neuronavigation and stereotaxy are techniques designed to help neurosurgeons precisely localize different intracerebral pathological processes by using a set of preoperative images (CT, MRI, fMRI, PET, SPECT, etc.). The development of computer-assisted surgery was possible only after significant technological progress, especially in the areas of informatics and imaging. The main indications of neuronavigation are the targeting of small and deep intracerebral lesions and choosing the best way to treat them, in order to preserve neurological function. Stereotaxy also allows lesioning or stimulation of the basal ganglia for the treatment of movement disorders. These techniques can bring an important amount of comfort both to the patient and to the neurosurgeon. Neuronavigation was introduced in Romania around 2003, in four neurosurgical centers. We present our five-year experience in neuronavigation and describe the main principles and surgical techniques. PMID:20108488
Cornell University Center for Advanced Computing
View From Camera Not Used During Curiosity's First Six Months on Mars
2017-12-08
This view of Curiosity's left-front and left-center wheels and of marks made by wheels on the ground in the "Yellowknife Bay" area comes from one of six cameras used on Mars for the first time more than six months after the rover landed. The left Navigation Camera (Navcam) linked to Curiosity's B-side computer took this image during the 223rd Martian day, or sol, of Curiosity's work on Mars (March 22, 2013). The wheels are 20 inches (50 centimeters) in diameter. Curiosity carries a pair of main computers, redundant to each other, in order to have a backup available if one fails. Each of the computers, A-side and B-side, also has other redundant subsystems linked to just that computer. Curiosity operated on its A-side from before the August 2012 landing until Feb. 28, when engineers commanded a switch to the B-side in response to a memory glitch on the A-side. One set of activities after switching to the B-side computer has been to check the six engineering cameras that are hard-linked to that computer. The rover's science instruments, including five science cameras, can each be operated by either the A-side or B-side computer, whichever is active. However, each of Curiosity's 12 engineering cameras is linked to just one of the computers. The engineering cameras are the Navigation Camera (Navcam), the Front Hazard-Avoidance Camera (Front Hazcam) and Rear Hazard-Avoidance Camera (Rear Hazcam). Each of those three named cameras has four cameras as part of it: two stereo pairs of cameras, with one pair linked to each computer. Only the pairs linked to the active computer can be used, and the A-side computer was active from before landing, in August, until Feb. 28. All six of the B-side engineering cameras have been used during March 2013 and checked out OK. Image Credit: NASA/JPL-Caltech
Privacy preserving interactive record linkage (PPIRL).
Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley
2014-01-01
Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high quality record linkage requires a human-machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic Big Data. In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party to determine the linkage function manually and link data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption, (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allow frequent human interaction during the linkage process, is an effective human-machine hybrid system that significantly improves on the linkage center model both in terms of privacy and utility.
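The chaffing step mentioned above (adding fake data to obfuscate real records) can be illustrated with a minimal sketch. The record fields, the way chaff is generated, and the is_chaff flag (which in a real system would be held only by the trusted linkage party, never released) are illustrative assumptions, not the SDLink design.

```python
import random
import string

def add_chaff(records, n_fake, seed=0):
    """Return the real records mixed with n_fake plausible-looking fake
    records, so a third party cannot tell real entries from chaff."""
    rng = random.Random(seed)

    def fake_record():
        name = "".join(rng.choices(string.ascii_uppercase, k=6))
        dob = f"19{rng.randint(40, 99)}-{rng.randint(1, 12):02d}-{rng.randint(1, 28):02d}"
        return {"name": name, "dob": dob, "is_chaff": True}

    mixed = [dict(r, is_chaff=False) for r in records]
    mixed += [fake_record() for _ in range(n_fake)]
    rng.shuffle(mixed)                # hide which entries are real
    return mixed

real = [{"name": "SMITH", "dob": "1975-04-02"}, {"name": "JONES", "dob": "1982-11-19"}]
print(add_chaff(real, n_fake=3))
```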
Digital optical computers at the optoelectronic computing systems center
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
Chemical research projects office: An overview and bibliography, 1975-1980
NASA Technical Reports Server (NTRS)
Kourtides, D. A.; Heimbuch, A. H.; Parker, J. A.
1980-01-01
The activities of the Chemical Research Projects Office at Ames Research Center, Moffett Field, California are reported. The office conducts basic and applied research in the fields of polymer chemistry, computational chemistry, polymer physics, and physical and organic chemistry. It works to identify the chemical research and technology required for solutions to problems of national urgency, synchronous with the aeronautic and space effort. It conducts interdisciplinary research on chemical problems, mainly in areas of macromolecular science and fire research. The office also acts as liaison with the engineering community and assures that relevant technology is made available to other NASA centers, agencies, and industry. Recent accomplishments are listed in this report. Activities of the three research groups, Polymer Research, Aircraft Operating and Safety, and Engineering Testing, are summarized. A complete bibliography which lists all Chemical Research Projects Office publications, contracts, grants, patents, and presentations from 1975 to 1980 is included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kline, Josh; /SLAC
2006-08-28
The testing of the upgrade prototype for the bunch current monitors (BCMs) in the PEP-II storage rings at the Stanford Linear Accelerator Center (SLAC) is the topic of this paper. Bunch current monitors are used to measure the charge in the electron/positron bunches traveling in particle storage rings. The BCMs in the PEP-II storage rings need to be upgraded because components of the current system have failed and are known to be failure prone with age, and several of the integrated chips are no longer produced, making repairs difficult if not impossible. The main upgrade is replacing twelve old (1995) field programmable gate arrays (FPGAs) with a single Virtex II FPGA. The prototype was tested using computer synthesis tools, a commercial signal generator, and a fast pulse generator.
Final Report. Center for Scalable Application Development Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
2014-10-26
The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.
Intention and Usage of Computer Based Information Systems in Primary Health Centers
ERIC Educational Resources Information Center
Hosizah; Kuntoro; Basuki N., Hari
2016-01-01
The computer-based information system (CBIS) is adopted in almost all health care settings, including primary health centers in East Java Province, Indonesia. Some of the software packages available were SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most of the primary health centers did not implement them successfully. This…
NASA Technical Reports Server (NTRS)
McGalliard, James
2008-01-01
A viewgraph describing the use of multiple frameworks by NASA, GSA, and U.S. Government agencies is presented. The contents include: 1) Federal Systems Integration and Management Center (FEDSIM) and NASA Center for Computational Sciences (NCCS) Environment; 2) Ruling Frameworks; 3) Implications; and 4) Reconciling Multiple Frameworks.
Roy Fraley, Professional II-Engineer (Roy.Fraley@nrel.gov | 303-384-6468), is the high-performance computing (HPC) data center engineer with the Computational Science Center's HPC
NASA Technical Reports Server (NTRS)
Kandula, Max; Pearce, Daniel
1989-01-01
A steady incompressible three-dimensional (3-D) viscous flow analysis was conducted for the Space Shuttle Main Propulsion External Tank (ET)/Orbiter (ORB) propellant feed line quick separable 17-inch disconnect flapper valves for liquid oxygen (LO2) and liquid hydrogen (LH2). The main objectives of the analysis were to predict and correlate the hydrodynamic stability of the flappers and pressure drop with available water test data. Computational Fluid Dynamics (CFD) computer codes were procured at no cost from the public domain, and were modified and extended to carry out the disconnect flow analysis. The grid generator codes SVTGD3D and INGRID were obtained. NASA Ames Research Center supplied the flow solution code INS3D, and the color graphics code PLOT3D. A driver routine was developed to automate the grid generation process. Components such as pipes, elbows, and flappers can be generated with simple commands, and flapper angles can be varied easily. The flow solver INS3D code was modified to treat interior flappers, and other interfacing routines were developed, which include a turbulence model, a force/moment routine, a time-step routine, and initial and boundary conditions. In particular, an under-relaxation scheme was implemented to enhance the solution stability. Major physical assumptions and simplifications made in the analysis include the neglect of linkages, slightly reduced flapper diameter, and smooth solid surfaces. A grid size of 54 x 21 x 25 was employed for both the LO2 and LH2 units. Mixing length theory applied to turbulent shear flow in pipes formed the basis for the simple turbulence model. Results of the analysis are presented for LO2 and LH2 disconnects.
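The "simple turbulence model" referred to above is based on mixing length theory for turbulent shear flow in pipes; in its generic form the eddy viscosity is taken as shown below, with kappa the von Kármán constant and y the distance from the wall. The exact mixing-length distribution used in the disconnect analysis is not given in the abstract.

\[
\nu_t = \ell_m^2\,\left|\frac{\partial u}{\partial y}\right|, \qquad \ell_m = \kappa\,y \ \ (\text{near the wall})
\]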
NASA Astrophysics Data System (ADS)
Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.
2004-12-01
The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.
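As a rough illustration of the reciprocity idea described above (synthetic seismograms obtained by combining a source representation with the stored receiver Green tensor rather than re-running a wavefield simulation per earthquake), here is a minimal numpy sketch. The array layout and the point-force source representation are simplifying assumptions for illustration, not the SCEC CME implementation.

```python
import numpy as np

def synthetic_seismogram(rgt_at_source, force, stf, dt):
    """Combine the receiver Green tensor (RGT) extracted at the source grid
    point with a point-force vector and a source time function.

    rgt_at_source : array (3, 3, nt) -- component i at the station due to a
                    unit impulse force in direction j applied at the source
                    location (taken from the stored RGT volume by reciprocity)
    force         : array (3,)       -- point-force vector
    stf           : array (ns,)      -- source time function
    """
    nt = rgt_at_source.shape[-1]
    seis = np.zeros((3, nt + len(stf) - 1))
    for i in range(3):                                   # station component
        impulse_response = np.tensordot(force, rgt_at_source[i], axes=(0, 0))
        seis[i] = np.convolve(impulse_response, stf) * dt
    return seis

# Toy example: random stand-in "RGT" and a triangular source time function
rng = np.random.default_rng(1)
rgt = rng.normal(size=(3, 3, 200))
stf = np.bartlett(21)
u = synthetic_seismogram(rgt, force=np.array([0.0, 0.0, 1.0e15]), stf=stf, dt=0.01)
print(u.shape)   # (3, 220)
```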
Review of optical wireless communications for data centers
NASA Astrophysics Data System (ADS)
Arnon, Shlomi
2017-10-01
A data center (DC) is a facility, either physical or virtual, for running applications and for the searching, storage, management and dissemination of information, known as cloud computing, and it consumes a huge amount of energy. A DC includes thousands of servers, communication and storage equipment, and a support system including an air conditioning system, security, monitoring equipment and electricity regulator units. Data center operators face the challenges of meeting exponentially increasing demands for network bandwidth without unreasonable increases in operation and infrastructure cost. In order to meet these requirements with only a moderate increase in operation and infrastructure cost, a technology revolution is required. One way to overcome the shortcomings of traditional static (wired) data center architectures is use of a hybrid network based on fiber and optical wireless communication (OWC) or free space optics (FSO). The OWC link could be deployed on top of the existing cable/fiber network layer, so that live migration could be done easily and dynamically. In that case the network topology is flexible and adapts quickly to changes in traffic, heat distribution, power consumption and characteristics of the applications. In addition, OWC could provide an easy way to maintain and scale up data centers. As a result the total cost of ownership could be reduced and the return on investment could be increased. In this talk we will review the main OWC technologies applicable for data centers, indicate how energy could be saved using OWC multichannel communication and discuss the issue of OWC pointing accuracy for the data center scenario.
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
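JobCenter's own client API is not shown in the abstract; the sketch below only illustrates the general client-driven pattern described (workers poll a server for work, so they can sit behind firewalls or in the cloud and load-balance themselves). The endpoint names and payloads are hypothetical and are not JobCenter's actual protocol.

```python
import time
import requests  # assumes the 'requests' package is installed

SERVER = "https://jobs.example.org/api"   # hypothetical endpoint

def run_job(job):
    # Placeholder: in practice this would dispatch on job["type"] and may
    # launch a multistep workflow in any language.
    return {"status": "ok"}

def worker_loop(worker_id, job_types, poll_seconds=10):
    """Client-driven worker: repeatedly ask the server for a job this node
    can run, execute it, and report the result. Illustrative pattern only."""
    while True:
        resp = requests.post(f"{SERVER}/next-job",
                             json={"worker": worker_id, "types": job_types})
        job = resp.json()
        if not job:                        # nothing to do; wait and poll again
            time.sleep(poll_seconds)
            continue
        result = run_job(job)
        requests.post(f"{SERVER}/result",
                      json={"worker": worker_id, "job_id": job["id"], "result": result})
```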
Computers in Schools of Southeast Texas in 1997.
ERIC Educational Resources Information Center
Henderson, David L.; Renfrow, Raylene
This study examined computer use in southeast Texas schools in 1997. The study population included 110 school districts in Education Service Center Regions IV and VI. These centers serve 22 counties of southeast Texas in the Houston area. Using questionnaires, researchers collected data on brands of computers presently in use, percent of computer…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...
The National Special Education Alliance: One Year Later.
ERIC Educational Resources Information Center
Green, Peter
1988-01-01
The National Special Education Alliance (a national network of local computer resource centers associated with Apple Computer, Inc.) consists, one year after formation, of 24 non-profit support centers staffed largely by volunteers. The NSEA now reaches more than 1000 disabled computer users each month and more growth in the future is expected.…
Researchers at EPA’s National Center for Computational Toxicology (NCCT) integrate advances in biology, chemistry, exposure and computer science to help prioritize chemicals for further research based on potential human health risks. The goal of this research is to quickly evalua...
Books, Bytes, and Bridges: Libraries and Computer Centers in Academic Institutions.
ERIC Educational Resources Information Center
Hardesty, Larry, Ed.
This book about the relationship between computer centers and libraries at academic institutions contains the following chapters: (1) "A History of the Rhetoric and Reality of Library and Computing Relationships" (Peggy Seiden and Michael D. Kathman); (2) "An Issue in Search of a Metaphor: Readings on the Marriageability of…
Computational structures technology and UVA Center for CST
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1992-01-01
Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.
Computer programs: Operational and mathematical, a compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.
Velocity and pressure fields associated with near-wall turbulence structures
NASA Technical Reports Server (NTRS)
Johansson, Arne V.; Alfredsson, P. Henrik; Kim, John
1990-01-01
Computer generated databases containing velocity and pressure fields in three-dimensional space at a sequence of time-steps were used for the investigation of near-wall turbulence structures, their space-time evolution, and their associated pressure fields. The main body of the results were obtained from simulation data for turbulent channel flow at a Reynolds number of 180 (based on half-channel height and friction velocity) with a grid of 128 x 129 x 128 points. The flow was followed over a total time of 141 viscous time units. Spanwise centering of the detected structures was found to be essential in order to obtain a correct magnitude of the associated Reynolds stress contribution. A positive wall-pressure peak is found immediately beneath the center of the structure. The maximum amplitude of the pressure pattern was, however, found in the buffer region at the center of the shear layer. It was also found that these flow structures often reach a maximum strength in connection with an asymmetric spanwise motion, which motivated the construction of a conditional sampling scheme that preserved this asymmetry.
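The spanwise centering step emphasized above can be illustrated schematically: detected events are re-aligned so that each structure's spanwise center coincides before ensemble averaging; otherwise the conditional average smears out and underestimates the Reynolds-stress contribution. A minimal numpy sketch follows; the detection criterion and field layout are illustrative assumptions, not the scheme used in the study.

```python
import numpy as np

def centered_conditional_average(fields, detections):
    """Ensemble-average 1-D spanwise profiles after shifting each detected
    event so its peak sits at the array center (illustrative only).

    fields     : array (n_events, nz) -- spanwise profiles of, e.g., u'v'
    detections : array (n_events,)    -- detected spanwise index of each event
    """
    n_events, nz = fields.shape
    center = nz // 2
    aligned = np.empty_like(fields)
    for k in range(n_events):
        aligned[k] = np.roll(fields[k], center - detections[k])  # re-center event
    return aligned.mean(axis=0)

# Toy example: Gaussian bumps at random spanwise locations
z = np.arange(128)
locs = np.random.randint(20, 108, size=50)
fields = np.exp(-0.02 * (z[None, :] - locs[:, None])**2)
print(centered_conditional_average(fields, locs).max())  # ~1.0 once events are centered
```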
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Progress report on current status of computer software management and information center (COSMIC) includes the following areas: inventory, evaluation and publication, marketing, customer service, maintenance and support, and budget summary.
Center for Advanced Computational Technology
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2000-01-01
The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.
Planning and management of cloud computing networks
NASA Astrophysics Data System (ADS)
Larumbe, Federico
The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers is considered as the power consumption of a country, it would rank in 5th place in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what are the optimal data center locations. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy efficient data centers, and CO2 emissions can be decreased by choosing data centers provided with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering communication traffic among VMs. Cloud data centers receive new applications over time and these applications need on-demand resource provisioning.
Each application is composed of multiple types of VMs that interact among themselves. A program called scheduler must place each new VM in a server and that impacts the QoS and power consumption. Our method places VMs that communicate among themselves in servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered and the most efficient ones are chosen to place the VMs. The number of VMs of each application can be dynamically changed to match the workload and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems and use state of the art optimization solvers. Then, heuristics are developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very short optimality gaps, between 0% and 1.95%, are obtained, and execution time in the order of minutes for the planning problem and less than a second for real time cases. We consider that this thesis on resource provisioning of cloud computing networks includes important contributions on this research area, and innovative commercial applications based on the proposed methods have promising future.
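A heavily simplified version of the kind of MIP formulation described in this thesis can be written with PuLP. The objective and constraints here (assign each VM to exactly one server, respect server CPU capacity, minimize the power of switched-on servers) are an illustrative reduction, not the thesis model, and all parameter values are made up.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

vms = {"vm1": 2, "vm2": 4, "vm3": 3}             # CPU demand per VM (made-up values)
servers = {"s1": (8, 200.0), "s2": (8, 150.0)}   # (CPU capacity, power if switched on)

prob = LpProblem("vm_placement", LpMinimize)
x = LpVariable.dicts("x", (vms, servers), cat=LpBinary)   # VM v placed on server s
on = LpVariable.dicts("on", servers, cat=LpBinary)        # server s powered on

# Objective: minimize the total power of the servers that are switched on
prob += lpSum(servers[s][1] * on[s] for s in servers)

# Each VM goes to exactly one server
for v in vms:
    prob += lpSum(x[v][s] for s in servers) == 1

# Capacity: demand placed on a server cannot exceed its capacity, and only if it is on
for s in servers:
    prob += lpSum(vms[v] * x[v][s] for v in vms) <= servers[s][0] * on[s]

prob.solve(PULP_CBC_CMD(msg=False))
for v in vms:
    for s in servers:
        if x[v][s].value() == 1:
            print(v, "->", s)
```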
Introduction to Cosmology, Proceedings of the Polish Astronomical Society volume 4
NASA Astrophysics Data System (ADS)
Biernacka, Monika; Bajan, Katarzyna; Stachowski, Grzegorz; Pollo, Agnieszka
2016-07-01
On 11-23 July 2016, Jan Kochanowski University in Kielce was the host of the Second Cosmological School "Introduction to Cosmology". The main purpose of the School was to provide an introduction to a selection of the most interesting topics in modern cosmology, both in theory and observations. The program included a series of mini-workshops on cosmological simulations, Virtual Observatory databases and tools, and Spectral Energy Distribution fitting. The School was intended for undergraduate, MSc and PhD students, as well as young postdoctoral researchers. The School was co-organized by the Polish Astronomical Society, the Jan Kochanowski University in Kielce, the Jagiellonian University in Cracow, the National Centre for Nuclear Research and the N. Copernicus Astronomical Center in Warsaw. The Interdisciplinary Centre for Mathematical and Computational Modeling kindly provided us with the possibility to remotely use their computing facilities.
Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William
2000-01-01
This paper reports the progress being made towards complete turbopump simulation capability for liquid rocket engines. The Space Shuttle Main Engine (SSME) turbopump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/OpenMP versions of the INS3D code. Then, a computational model of a turbopump has been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for the SSME turbopump, which contains 136 zones with 35 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code, will be presented in the final paper.
CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.
ERIC Educational Resources Information Center
Skowronski, Steven D.; Tatum, Kenneth
This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…
Zhang, Changzhe; Bu, Yuxiang
2016-09-14
Diffuse functions have been proved to be especially crucial for the accurate characterization of excess electrons which are usually bound weakly in intermolecular zones far away from the nuclei. To examine the effects of diffuse functions on the nature of the cavity-shaped excess electrons in water cluster surroundings, both the HOMO and LUMO distributions, vertical detachment energies (VDEs) and visible absorption spectra of two selected (H2O)24(-) isomers are investigated in the present work. Two main types of diffuse functions are considered in calculations including the Pople-style atom-centered diffuse functions and the ghost-atom-based floating diffuse functions. It is found that augmentation of atom-centered diffuse functions contributes to a better description of the HOMO (corresponding to the VDE convergence), in agreement with previous studies, but also leads to unreasonable diffuse characters of the LUMO with significant red-shifts in the visible spectra, which is against the conventional point of view that the more the diffuse functions, the better the results. The issue of designing extra floating functions for excess electrons has also been systematically discussed, which indicates that the floating diffuse functions are necessary not only for reducing the computational cost but also for improving both the HOMO and LUMO accuracy. Thus, the basis sets with a combination of partial atom-centered diffuse functions and floating diffuse functions are recommended for a reliable description of the weakly bound electrons. This work presents an efficient way for characterizing the electronic properties of weakly bound electrons accurately by balancing the addition of atom-centered diffuse functions and floating diffuse functions and also by balancing the computational cost and accuracy of the calculated results, and thus is very useful in the relevant calculations of various solvated electron systems and weakly bound anionic systems.
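How atom-centered diffuse functions are switched on in practice can be illustrated with a short PySCF sketch; the single water molecule used here is only a toy stand-in for the (H2O)24- clusters in the paper and does not bind an excess electron realistically, so the numbers themselves are not meaningful. Floating diffuse functions, as recommended above, would additionally be placed on an empty (ghost-atom) center in the cavity region, which is not shown in this minimal sketch.

```python
from pyscf import gto, scf

# Toy geometry: one water molecule (Angstrom)
geom = "O 0.000 0.000 0.117; H 0.000 0.757 -0.471; H 0.000 -0.757 -0.471"

for basis in ("cc-pvdz", "aug-cc-pvdz"):        # without / with atom-centered diffuse functions
    anion = gto.M(atom=geom, basis=basis, charge=-1, spin=1)
    e = scf.UHF(anion).kernel()
    print(f"{basis:12s}  UHF energy of the anion: {e:.6f} Hartree")

# The aug- (diffuse-augmented) basis gives a variationally lower anion energy,
# reflecting the better description of the weakly bound excess electron.
```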
Nicholson, Anita; Tobin, Mary
2006-01-01
This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.
Computational mechanics and physics at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr.
1987-01-01
An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.
Center for computation and visualization of geometric structures. Final report, 1992 - 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
This report describes the overall goals and the accomplishments of the Geometry Center of the University of Minnesota, whose mission is to develop, support, and promote computational tools for visualizing geometric structures, for facilitating communication among mathematical and computer scientists and between these scientists and the public at large, and for stimulating research in geometry.
76 FR 56744 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD... (SSA) and DoD Defense Manpower Data Center (DMDC) that their records are being matched by computer. The... intrusion of the individual's privacy and would result in additional delay in the eventual SSI payment and...
Applied technology center business plan and market survey
NASA Technical Reports Server (NTRS)
Hodgin, Robert F.; Marchesini, Roberto
1990-01-01
The business plan and market survey for the Applied Technology Center (ATC), a non-profit computer technology transfer and development corporation, are presented. The mission of the ATC is to stimulate innovation in state-of-the-art and leading-edge computer-based technology. The ATC encourages the practical utilization of late-breaking computer technologies by firms of all varieties.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
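The NCCS tool itself is locally developed and not described in detail in the abstract; the toy sketch below only shows the general mechanism such a batch-queue simulator relies on (an event heap advancing simulated time as jobs arrive, wait for free CPUs, run, and finish). The FIFO policy and all parameter values are made-up assumptions for illustration.

```python
import heapq

def simulate(jobs, total_cpus):
    """FIFO batch-queue simulation. jobs = list of (arrival, cpus, runtime).
    Returns the average wait time. Illustrative toy model only."""
    jobs = sorted(jobs)            # process in arrival order (strict FIFO, no backfill)
    events = []                    # min-heap of (finish_time, cpus_released)
    free = total_cpus
    now = 0.0
    waits = []
    for arrival, cpus, runtime in jobs:
        now = max(now, arrival)
        while free < cpus:                       # wait for running jobs to finish
            t, released = heapq.heappop(events)
            now = max(now, t)
            free += released
        waits.append(now - arrival)              # time the job sat in the queue
        free -= cpus
        heapq.heappush(events, (now + runtime, cpus))
    return sum(waits) / len(waits)

# 3 jobs on a 128-CPU machine: (arrival hour, CPUs requested, runtime hours)
print(simulate([(0, 64, 10), (1, 96, 5), (2, 32, 2)], total_cpus=128))  # ~5.67 h average wait
```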
Cloud flexibility using DIRAC interware
NASA Astrophysics Data System (ADS)
Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo
2014-06-01
Communities at different locations are running their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to using certain types or versions of an Operating System, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting will inevitably lead to an underuse of resources because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation where Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the Data Center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor the operating system of the host machine, which is transparent to the user.
The Importance of Information Analysis Centers in the Performance of Information Services.
ERIC Educational Resources Information Center
Weisman, Herman M.
It is necessary to distinguish the functions, services and products of various types of information services. For example, document centers, clearinghouses, referral centers, and special libraries deal mainly with information in a broad sense. The main function of information analysis centers, however, is to optimize the ratio of knowledge to…
ERIC Educational Resources Information Center
Fox, Annie
1978-01-01
Relates some experiences at this nonprofit center, which was designed so that interested members of the general public can walk in and learn about computers in a safe, nonintimidating environment. STARWARS HODGE, a game written in PILOT, is also described. (CMV)
75 FR 65639 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-26
...: Computational Biology Special Emphasis Panel A. Date: October 29, 2010. Time: 2 p.m. to 3:30 p.m. Agenda: To.... Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict: Computational...
Building No. 1, left; Building No. 9, Guard House, center; ...
Building No. 1, left; Building No. 9, Guard House, center; Building No. 5, Main Building, right. View from across Main Street - Thomas A. Edison Laboratories, Main Street & Lakeside Avenue, West Orange, Essex County, NJ
ERIC Educational Resources Information Center
Cottrell, William B.; And Others
The Nuclear Safety Information Center (NSIC) is a highly sophisticated scientific information center operated at Oak Ridge National Laboratory (ORNL) for the U.S. Atomic Energy Commission. Its information file, which consists of both data and bibliographic information, is computer stored and numerous programs have been developed to facilitate the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... with the Department of Defense (DoD), Defense Manpower Data Center (DMDC). We have provided background... & Medicaid Services and the Department of Defense, Defense Manpower Data Center for the Determination of...), Centers for Medicare & Medicaid Services (CMS), and Department of Defense (DoD), Defense Manpower Data...
Development of sensor augmented robotic weld systems for aerospace propulsion system fabrication
NASA Technical Reports Server (NTRS)
Jones, C. S.; Gangl, K. J.
1986-01-01
In order to meet stringent performance goals for power and reusability, the Space Shuttle Main Engine was designed with many complex, difficult welded joints that provide maximum strength and minimum weight. To this end, the SSME requires 370 meters of welded joints. Automation of some welds has improved welding productivity significantly over manual welding. Application has previously been limited by accessibility constraints, requirements for complex process control, low production volumes, high part variability, and stringent quality requirements. Development of robots for welding in this application requires that a unique set of constraints be addressed. This paper shows how robotic welding can enhance production of aerospace components by addressing their specific requirements. A development program at the Marshall Space Flight Center combining industrial robots with state-of-the-art sensor systems and computer simulation is providing technology for the automation of welds in Space Shuttle Main Engine production.
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
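A minimal sketch of the client-driven pattern described above is shown below. It is not JobCenter's actual protocol or API; the server endpoints and JSON layout are hypothetical, and the point is only to illustrate why pull-based workers behind firewalls get inherent load balancing.

```python
# Hypothetical sketch of a client-driven worker: the worker initiates every
# request, so it can run behind a firewall or in the cloud, and it only pulls
# work when idle, which load-balances the pool automatically.
import json, subprocess, time, urllib.request

SERVER = "http://jobserver.example.org"   # hypothetical job server

def poll_forever(worker_id, allowed_types):
    while True:
        req = urllib.request.Request(
            f"{SERVER}/next-job",
            data=json.dumps({"worker": worker_id, "types": allowed_types}).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            job = json.load(resp)
        if not job:                        # nothing to do: back off and retry
            time.sleep(30)
            continue
        proc = subprocess.run(job["command"], shell=True,
                              capture_output=True, text=True)
        result = {"job_id": job["id"], "status": proc.returncode,
                  "stdout": proc.stdout[-10000:]}
        urllib.request.urlopen(urllib.request.Request(
            f"{SERVER}/result", data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"}))
```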
Ceramic matrix composite behavior -- Computational simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.
Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at the slice level. The main advantage of this technique is that it provides much greater detail in the composite response than a conventional micromechanics based analysis while still maintaining very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and to illustrate them with selected examples of laminated as well as woven composites.
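As a rough illustration of the slice-substructuring idea (not the NASA-Lewis micromechanics equations themselves), the following toy sketch homogenizes a unit cell slice by slice with simple Voigt/Reuss estimates; the moduli and volume fractions are illustrative assumptions.

```python
import numpy as np

# Toy slice-level homogenization: the unit cell is cut into through-thickness
# slices, each with its own fiber volume fraction, and simple parallel/series
# estimates are formed per slice before averaging back up to the ply level.
def slice_homogenize(E_fiber, E_matrix, vf_slices):
    vf = np.asarray(vf_slices, dtype=float)
    E_parallel = vf * E_fiber + (1.0 - vf) * E_matrix        # Voigt bound per slice
    E_series = 1.0 / (vf / E_fiber + (1.0 - vf) / E_matrix)  # Reuss bound per slice
    return E_parallel.mean(), E_series.mean()                # crude ply-level averages

# Example: a 10-slice cell whose fiber content tapers from 0.55 to 0.35 (illustrative values).
print(slice_homogenize(380e9, 120e9, np.linspace(0.55, 0.35, 10)))
```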
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's modern aircraft designs at transonic speed are a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. The time required for unsteady CFD computations in the time domain considerably slows down the whole design process, because these analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem. Transonic flutter can then be found by classic methods, such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, which was designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.
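Guyan reduction, one of the order-reduction techniques named above, can be sketched in a few lines; this is a generic textbook formulation, not the NASA tool's implementation.

```python
import numpy as np

# Guyan (static) reduction: split DOFs into retained ("master") and omitted
# ("slave") sets, and condense the slave DOFs out by letting them follow the
# static solution of the master motion.
def guyan_reduce(K, M, master):
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.array([i for i in range(n) if i not in set(master.tolist())])
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # Transformation x = T x_m with slave DOFs given by -Kss^{-1} Ksm x_m.
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T   # reduced stiffness and mass matrices
```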
NASA Astrophysics Data System (ADS)
Frolov, Alexei M.
2018-03-01
The universal variational expansion for the non-relativistic three-body systems is explicitly constructed. This universal expansion can be used to perform highly accurate numerical computations of the bound state spectra in various three-body systems, including Coulomb three-body systems with arbitrary particle masses and electric charges. Our main interest is related to the adiabatic three-body systems which contain one bound electron and two heavy nuclei of hydrogen isotopes: the protium p, deuterium d and tritium t. We also consider the analogous (model) hydrogen ion ∞H2+ with the two infinitely heavy nuclei.
Life sciences on-line: A study in hypermedia application
NASA Technical Reports Server (NTRS)
Christman, Linda A.; Hoang, Nam V.; Proctor, David R.
1990-01-01
The main objective was to determine the feasibility of using a computer-based interactive information recall module for the Life Sciences Project Division (LSPD) at NASA, Johnson Space Center. LSPD personnel prepare payload experiments to test and monitor physiological functions in zero gravity. Training refreshers and other types of online help are needed to support personnel in their tasks during mission testing and in flight. Results of a survey of other hypermedia and multimedia developers and lessons learned by the developer of the LSPD prototype module are presented. Related issues and future applications are also discussed and further hypermedia development within the LSPD is recommended.
Memory Network For Distributed Data Processors
NASA Technical Reports Server (NTRS)
Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George
1992-01-01
Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. It makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, and large laboratories, or any facility sharing very large volumes of data. The main hub of the UMN is a reflection center that includes smaller hubs called Shared Memory Interfaces.
Proceedings of RIKEN BNL Research Center Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samios, Nicholas P.
The twelfth evaluation of the RIKEN BNL Research Center (RBRC) took place on November 6-8, 2012 at Brookhaven National Laboratory. The members of the Scientific Review Committee (SRC) present at the meeting were: Prof. Wit Busza, Prof. Miklos Gyulassy, Prof. Kenichi Imai, Prof. Richard Milner (Chair), Prof. Alfred Mueller, Prof. Charles Young Prescott, and Prof. Akira Ukawa. We are pleased that Dr. Hideto En'yo, the Director of the Nishina Institute of RIKEN, Japan, participated in this meeting, both informing the committee of the activities of the RIKEN Nishina Center for Accelerator-Based Science and the role of RBRC, and serving as an observer of this review. In order to illustrate the breadth and scope of the RBRC program, each member of the Center made a presentation on his/her research efforts. This encompassed three major areas of investigation: theoretical, experimental, and computational physics. In addition, the committee met privately with the fellows and postdocs to ascertain their opinions and concerns. Although the main purpose of this review is a report to RIKEN management on the health, scientific value, management, and future prospects of the Center, the RBRC management felt that the compendium of scientific presentations is of sufficient quality and interest to warrant a wider distribution. Therefore we have made this compilation and present it to the community for its information and enlightenment.
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics, 256 KB of local memory per SPE and a new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity, i.e., ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
Dou, Chao
2016-01-01
The storage volume of an internet data center is a classical time series, and predicting it is very valuable for business. However, the storage volume series from a data center is always “dirty,” containing noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before any future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the “dirty” data; then cubic spline interpolation and averaging are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting the future volume value. PMID:28090205
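A minimal sketch of the described pipeline, assuming a random-walk state model and illustrative noise and gating parameters (the paper's exact formulation may differ), is:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Scalar Kalman filter smooths the "dirty" volume samples (rejecting gross
# outliers via an innovation gate), then a cubic spline on a regular grid
# reconstructs the main trend. Noise levels and the gate are illustrative.
def main_trend(times, volumes, q=1e-3, r=1.0, gate=4.0, grid_step=1.0):
    x, p = volumes[0], 1.0            # state estimate and its variance
    filtered_t, filtered_x = [], []
    for t, z in zip(times, volumes):
        p += q                        # predict (random-walk model)
        innov = z - x
        if np.isfinite(z) and abs(innov) <= gate * np.sqrt(p + r):
            k = p / (p + r)           # update only with plausible samples
            x += k * innov
            p *= (1.0 - k)
        filtered_t.append(t)
        filtered_x.append(x)
    spline = CubicSpline(filtered_t, filtered_x)
    grid = np.arange(times[0], times[-1] + grid_step, grid_step)
    return grid, spline(grid)
```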
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classical time series, and predicting it is very valuable for business. However, the storage volume series from a data center is always "dirty," containing noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before any future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and averaging are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting the future volume value.
Use of computers and Internet among people with severe mental illnesses at peer support centers.
Brunette, Mary F; Aschbrenner, Kelly A; Ferron, Joelle C; Ustinich, Lee; Kelly, Michael; Grinley, Thomas
2017-12-01
Peer support centers are an ideal setting where people with severe mental illnesses can access the Internet via computers for online health education, peer support, and behavioral treatments. The purpose of this study was to assess computer use and Internet access in peer support agencies. A peer-assisted survey assessed the frequency with which consumers in all 13 New Hampshire peer support centers (n = 702) used computers to access Internet resources. During the 30-day survey period, 200 of the 702 peer support consumers (28%) responded to the survey. More than 3 quarters (78.5%) of respondents had gone online to seek information in the past year. About half (49%) of respondents were interested in learning about online forums that would provide information and peer support for mental health issues. Peer support centers may be a useful venue for Web-based approaches to education, peer support, and intervention. Future research should assess facilitators and barriers to use of Web-based resources among people with severe mental illness in peer support centers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Automated microwave ablation therapy planning with single and multiple entry points
NASA Astrophysics Data System (ADS)
Liu, Sheena X.; Dalal, Sandeep; Kruecker, Jochen
2012-02-01
Microwave ablation (MWA) has become a recommended treatment modality for interventional cancer treatment. Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs of RFA procedures. Thus a planning system addressing MWA-specific parameters and workflows is highly desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an automated MWA planning system that provides precise probe locations for complete coverage of the tumor and margin. We model the thermal ablation lesion as an ellipsoidal object with three known radii that vary with the duration of the ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative optimization problem. The ablation centers are steered toward the location that minimizes both un-ablated tumor tissue and the collateral damage caused to healthy tissue. We assess the performance of our algorithm using simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the computed ablation center in 3D and the ground truth ablation center is 1.75 mm (standard deviation of the mean (STD): 0.69 mm). The Mean Radial Error (MRE), estimated by comparing the computed ablation radii with the ground truth radii, is 0.64 mm (STD: 0.43 mm). These preliminary results demonstrate the accuracy and robustness of the described planning algorithm.
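The two reported accuracy metrics can be computed as sketched below; this assumes MLE is the mean 3D distance between computed and ground-truth centers and MRE the mean absolute difference of the three ellipsoid radii, which matches the description above but may differ from the paper in detail.

```python
import numpy as np

# Mean Localization Error: mean 3D distance between computed and ground-truth centers.
def mle(centers, centers_truth):
    d = np.linalg.norm(np.asarray(centers) - np.asarray(centers_truth), axis=1)
    return d.mean(), d.std()

# Mean Radial Error: mean absolute difference of the three ellipsoid radii.
def mre(radii, radii_truth):
    e = np.abs(np.asarray(radii) - np.asarray(radii_truth))
    return e.mean(), e.std()
```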
Privacy preserving interactive record linkage (PPIRL)
Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley
2014-01-01
Objective Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high quality record linkage requires a human–machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic Big Data. Methods In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party to determine the linkage function manually and link data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption, (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. Results We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Conclusions Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allow frequent human interaction during the linkage process, is an effective human–machine hybrid system that significantly improves on the linkage center model both in terms of privacy and utility. PMID:24201028
Annual Report of the Metals and Ceramics Information Center, 1 May 1979-30 April 1980.
1980-07-01
Management and Economic Analysis Dept.: Computer and Information Systems, Operations, Battelle Technical Inputs to Planning, Computer Systems, Biomass Resources, Education, Business Planning, Information Systems, Economics, Planning and Policy Analysis, Statistical and Mathematical Modeling... The Metals and Ceramics Information Center (MCIC) is one of several technical information analysis centers (IACs) chartered and sponsored by the
NASA Technical Reports Server (NTRS)
Jones, H. W.
1984-01-01
The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.
DOT National Transportation Integrated Search
2003-10-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...
DOT National Transportation Integrated Search
2006-05-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...
Argonne Research Library | Argonne National Laboratory
NASA Astrophysics Data System (ADS)
Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman
2018-03-01
Fatigue due to long and continuous computer use can lead to decreased performance and work motivation. The specific targets achieved in the first phase of this research were: (1) identifying complaints among workers who use computers, using the Bourdon Wiersma test kit, and (2) drafting suitable relaxation and work-posture solutions to reduce muscle fatigue in computer-based workers. The study follows a research-and-development method, which aims to produce new products or refine existing ones. The final products are a prototype back-holder, a monitor filter, a relaxation exercise routine, and a manual explaining how to perform the exercises at the computer, all intended to lower fatigue levels among computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted and the fatigue level of employees who use computers at Unimed's Administration Center was identified with the Bourdon Wiersma test, with the following results: (1) the average speed score of respondents in BAUK, BAAK, and BAPSI after working was 8.4 (WS 13), in the fairly good category; (2) the average accuracy score was 5.5 (WS 8), in the doubtful category, showing that computer users at the Unimed Administration Center experienced significant tiredness; and (3) the average consistency score was 5.5 (WS 8), also in the doubtful category, meaning that computer users at the Unimed Administration Center suffered extreme fatigue. In phase II, based on the first-phase results, the researchers propose solutions: the back-holder prototype, the monitor filter, and a properly designed relaxation exercise to reduce fatigue. To maximize the benefit of the exercise, a manual will be given to employees who regularly work in front of computers at Unimed's Administration Center.
Computer use in primary care and patient-physician communication.
Sobral, Dilermando; Rosenbaum, Marcy; Figueiredo-Braga, Margarida
2015-07-08
This study evaluated how physicians and patients perceive the impact of computer use on clinical communication, and how a patient-centered orientation can influence this impact. The study followed a descriptive cross-sectional design and included 106 family physicians and 392 patients. An original questionnaire assessed computer use, participants' perspectives on its impact, and patient-centered strategies. Physicians reported spending 42% of consultation time in contact with the computer. Physicians reported a negative impact of the computer on patient-physician communication regarding consultation length, confidentiality, maintaining eye contact, active listening to the patient, and the ability to understand the patient, while patients reported a positive effect for all the items. Physicians considered that the usual computer placement in their consultation room was significantly unfavorable to patient-physician communication. Physicians perceive the impact of computer use on patient-physician communication as negative, while patients have a positive perception. Computer support during consultations can represent a challenge to physicians, who recognize its negative impact on patient-centered orientation. Medical education programs aiming to enhance specific communication skills and to better integrate computer use in primary care settings are needed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Assigning Main Orientation to an EOH Descriptor on Multispectral Images.
Li, Yong; Shi, Xiang; Wei, Lijun; Zou, Junwei; Chen, Fang
2015-07-01
This paper proposes an approach to compute an EOH (edge-oriented histogram) descriptor with a main orientation. EOH has a better matching ability than SIFT (scale-invariant feature transform) on multispectral images, but does not assign a main orientation to keypoints. Instead, it tends to assign the same main orientation, e.g., zero degrees, to every keypoint. This limits EOH to matching keypoints between images with translation misalignment only. Observing this limitation, we propose assigning to keypoints the main orientation that is computed with PIIFD (partial intensity invariant feature descriptor). In the proposed method, SIFT keypoints are detected in images as the extrema of difference of Gaussians, and every keypoint is assigned the main orientation computed with PIIFD. Then, EOH is computed for every keypoint with respect to its main orientation. In addition, an implementation variant is proposed for fast computation of the EOH descriptor. Experimental results show that the proposed approach performs more robustly than the original EOH on image pairs that have a rotation misalignment.
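A rough sketch of the idea, assuming a simple gradient-orientation histogram both for the main orientation and for the descriptor bins (the actual EOH and PIIFD formulations are more involved), is:

```python
import numpy as np

# Estimate a main orientation from the weighted gradient-orientation histogram
# of a patch, then bin edge orientations relative to that angle so the
# descriptor becomes rotation-aware. Simplified stand-in, not the exact EOH/PIIFD math.
def main_orientation(patch, nbins=36):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, edges = np.histogram(ang, bins=nbins, range=(0, 2 * np.pi), weights=mag)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1])        # center of the dominant bin

def eoh_descriptor(patch, nbins=8):
    theta0 = main_orientation(patch)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    rel = (np.arctan2(gy, gx) - theta0) % (2 * np.pi)   # orientation relative to main
    hist, _ = np.histogram(rel, bins=nbins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)
```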
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea, established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC), and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g., the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, a genome sequencing project for bio-medicine, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.
High performance computing for advanced modeling and simulation of materials
NASA Astrophysics Data System (ADS)
Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang
2017-02-01
The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.
ERIC Educational Resources Information Center
Ba, Harouna; Tally, Bill; Tsikalas, Kallen
The EDC (Educational Development Center) Center for Children and Technology (CCT) and Computers for Youth (CFY) completed a 1-year comparative study of children's use of computers in low- and middle-income homes. The study explores the digital divide as a literacy issue, rather than merely a technical one. Digital literacy is defined as a set of…
Impact of stent mis-sizing and mis-positioning on coronary fluid wall shear and intramural stress
Chen, Henry Y.; Koo, Bon-Kwon; Bhatt, Deepak L.
2013-01-01
Stent deployments with geographical miss (GM) are associated with increased risk of target-vessel revascularization and periprocedural myocardial infarction. The aim of the current study was to investigate the underlying biomechanical mechanisms for adverse events with GM. The hypothesis is that stent deployment with GM [longitudinal GM, or LGM (i.e., stent not centered on the lesion); or radial GM, RGM (i.e., stent oversizing)] results in unfavorable fluid wall shear stress (WSS), WSS gradient (WSSG), oscillatory shear index (OSI), and intramural circumferential wall stress (CWS). Three-dimensional computational models of stents and plaque were created using a computer-assisted design package. The models were then solved with validated finite element and computational fluid dynamic packages. The dynamic process of large deformation stent deployment was modeled to expand the stent to the desired vessel size. Stent deployed with GM resulted in a 45% increase in vessel CWS compared with stents that were centered and fully covered the lesion. A 20% oversized stent resulted in 72% higher CWS than a correct sized stent. The linkages between the struts had much higher stress than the main struts (i.e., 180 MPa vs. 80 MPa). Additionally, LGM and RGM reduced endothelial WSS and increased WSSG and OSI. The simulations suggest that both LGM and RGM adversely reduce WSS but increase WSSG, OSI, and CWS. These findings highlight the potential mechanical mechanism of the higher adverse events and underscore the importance of stent positioning and sizing for improved clinical outcome. PMID:23722708
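The oscillatory shear index used above is commonly defined as half of one minus the ratio of the magnitude of the time-averaged WSS vector to the time-averaged WSS magnitude; a minimal sketch under that assumed definition (the paper does not spell out its exact formula) is:

```python
import numpy as np

# OSI = 0.5 * (1 - |<tau>| / <|tau|>), where tau(t) is the wall-shear-stress
# vector at a surface point over one cardiac cycle.
def osi(tau_t):
    """tau_t: array of shape (n_timesteps, 3) holding the WSS vector history."""
    tau_t = np.asarray(tau_t, dtype=float)
    mean_vec_mag = np.linalg.norm(tau_t.mean(axis=0))    # |time-averaged vector|
    mean_mag = np.linalg.norm(tau_t, axis=1).mean()      # time-averaged magnitude
    return 0.5 * (1.0 - mean_vec_mag / mean_mag)         # 0 = unidirectional, 0.5 = fully oscillatory
```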
DOT National Transportation Integrated Search
2006-07-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...
Aviation Mechanic General, Airframe, and Powerplant Knowledge Test Guide
DOT National Transportation Integrated Search
1995-01-01
The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests. Refer to appendix 1 in this guide for a list of computer testing designees. This knowledge test guide was dev...
DOT National Transportation Integrated Search
2004-01-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...
75 FR 70899 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-19
... submit to the Office of Management and Budget (OMB) for clearance the following proposal for collection... Annual Burden Hours: 2,952. Public Computer Center Reports (Quarterly and Annually) Number of Respondents... specific to Infrastructure and Comprehensive Community Infrastructure, Public Computer Center, and...
PCs: Key to the Future. Business Center Provides Sound Skills and Good Attitudes.
ERIC Educational Resources Information Center
Pay, Renee W.
1991-01-01
The Advanced Computing/Management Training Program at Jordan Technical Center (Sandy, Utah) simulates an automated office to teach five sets of skills: computer architecture and operating systems, word processing, data processing, communications skills, and management principles. (SK)
NASA Technical Reports Server (NTRS)
Gillian, Ronnie E.; Lotts, Christine G.
1988-01-01
The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.
Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils
NASA Astrophysics Data System (ADS)
González, A.; Gomez-Iradi, S.; Munduate, X.
2014-06-01
From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is on the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document is mainly devoted to the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data for the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes, and mean angles of attack. The results show good agreement when the parameter values are adjusted with this methodology. DYSTOOL has proven to be a promising tool for 2D airfoil unsteady aerodynamic modeling.
Advanced Health Management System for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Davidson, Matt; Stephens, John
2004-01-01
Boeing-Canoga Park (BCP) and NASA-Marshall Space Flight Center (NASA-MSFC) are developing an Advanced Health Management System (AHMS) for use on the Space Shuttle Main Engine (SSME) that will improve Shuttle safety by reducing the probability of catastrophic engine failures during the powered ascent phase of a Shuttle mission. This is a phased approach that consists of an upgrade to the current Space Shuttle Main Engine Controller (SSMEC) to add turbomachinery synchronous vibration protection and addition of a separate Health Management Computer (HMC) that will utilize advanced algorithms to detect and mitigate predefined engine anomalies. The purpose of the Shuttle AHMS is twofold: one is to increase the probability of successfully placing the Orbiter into the intended orbit, and the other is to increase the probability of being able to safely execute an abort of a Space Transportation System (STS) launch. Both objectives are achieved by increasing the useful work envelope of a Space Shuttle Main Engine after it has developed anomalous performance during launch and the ascent phase of the mission. This increase in work envelope will be the result of two new anomaly mitigation options, in addition to existing engine shutdown, that were previously unavailable. The added anomaly mitigation options include engine throttle-down and performance correction (adjustment of engine oxidizer to fuel ratio), as well as enhanced sensor disqualification capability. The HMC is intended to provide the computing power necessary to diagnose selected anomalous engine behaviors and to make recommendations to the engine controller for anomaly mitigation. Independent auditors have assessed the reduction in Shuttle ascent risk to be on the order of 40% with the combined system and a threefold improvement in mission success.
ERIC Educational Resources Information Center
Osman, Abdulaziz
2016-01-01
The purpose of this research study was to examine the unknown fears of embracing cloud computing which stretches across measurements like fear of change from leaders and the complexity of the technology in 9-1-1 dispatch centers in USA. The problem that was addressed in the study was that many 9-1-1 dispatch centers in USA are still using old…
Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.
2010-01-01
Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198
Peripheral Distribution of Thrombus Does Not Affect Outcomes After Surgical Pulmonary Embolectomy.
Pasrija, Chetan; Shah, Aakash; George, Praveen; Mohammed, Isa; Brigante, Francis A; Ghoreishi, Mehrdad; Jeudy, Jean; Taylor, Bradley S; Gammie, James S; Griffith, Bartley P; Kon, Zachary N
2018-04-04
Thrombus located distal to the main or primary pulmonary arteries has previously been viewed as a relative contraindication to surgical pulmonary embolectomy. We compared outcomes of surgical pulmonary embolectomy for submassive and massive pulmonary embolism (PE) in patients with central versus peripheral thrombus burden. All consecutive patients (2011-2016) undergoing surgical pulmonary embolectomy at a single center were retrospectively reviewed. Based on computed tomographic angiography of each patient, central PE was defined as any thrombus originating within the lateral pericardial borders (main or right/left pulmonary arteries). Peripheral PE was defined as thrombus exclusively beyond the lateral pericardial borders, involving the lobar pulmonary arteries or distal. The primary outcome was in-hospital and 90-day survival. Seventy patients were identified: 52 (74%) with central PE and 18 (26%) with peripheral PE. Preoperative vital signs and right ventricular dysfunction were similar between the two groups. Compared to the central PE cohort, operative time was significantly longer in the peripheral PE group (191 vs. 210 minutes, p < 0.005). Median right ventricular dysfunction decreased from moderate dysfunction preoperatively to no dysfunction at discharge in both groups. Overall 90-day survival was 94%, with 100% survival in patients with submassive PE in both cohorts. This single-center experience demonstrates excellent overall outcomes for surgical pulmonary embolectomy, with resolution of right ventricular dysfunction and comparable morbidity and mortality for central and peripheral PE. In an experienced center and when physiologically warranted, surgical pulmonary embolectomy for a peripheral distribution of thrombus is both technically feasible and effective. Copyright © 2018. Published by Elsevier Inc.
Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand
ERIC Educational Resources Information Center
Jayakar, Krishna; Park, Eun-A
2012-01-01
The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…
Transportation Research and Analysis Computing Center (TRACC) Year 6 Quarter 4 Progress Report
DOT National Transportation Integrated Search
2013-03-01
Argonne National Laboratory initiated a FY2006-FY2009 multi-year program with the US Department of Transportation (USDOT) on October 1, 2006, to establish the Transportation Research and Analysis Computing Center (TRACC). As part of the TRACC project...
Aerodynamic Characterization of a Modern Launch Vehicle
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Holland, Scott D.; Blevins, John A.
2011-01-01
A modern launch vehicle is by necessity an extremely integrated design. The accurate characterization of its aerodynamic characteristics is essential to determine design loads, to design flight control laws, and to establish performance. The NASA Ares Aerodynamics Panel has been responsible for technical planning, execution, and vetting of the aerodynamic characterization of the Ares I vehicle. An aerodynamics team supporting the Panel consists of wind tunnel engineers, computational engineers, database engineers, and other analysts that address topics such as uncertainty quantification. The team resides at three NASA centers: Langley Research Center, Marshall Space Flight Center, and Ames Research Center. The Panel has developed strategies to synergistically combine both the wind tunnel efforts and the computational efforts with the goal of validating the computations. Selected examples highlight key flow physics and, where possible, the fidelity of the comparisons between wind tunnel results and the computations. Lessons learned summarize what has been gleaned during the project and can be useful for other vehicle development projects.
Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements
1975-10-01
Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements. Huntsville Research & Engineering Center, Lockheed Missiles & Space Company, Inc., Huntsville, Alabama. This report was prepared by personnel in the Computational Mechanics Section of the Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center.
NASA Technical Reports Server (NTRS)
Baskharone, Erian A.
1993-01-01
This report describes the computational steps involved in executing a finite-element-based perturbation model for computing the rotor dynamic coefficients of a shrouded pump impeller or a simple seal. These arise from the fluid/rotor interaction in the clearance gap. In addition to the sample cases, the computational procedure also applies to a separate category of problems referred to as the 'seal-like' category. The problem, in this case, concerns a shrouded impeller, with the exception that the secondary, or leakage, passage is totally isolated from the primary-flow passage. The difference between this and the pump problem is that the former is analytically of the simple 'seal-like' configuration, with two (inlet and exit) flow-permeable stations, while the latter constitutes a double-entry / double-discharge flow problem. In all cases, the problem is that of a rotor clearance gap. The problem here is that of a rotor excitation in the form of a cylindrical whirl around the housing centerline for a smooth annular seal. In its centered operation mode, the rotor is assumed to give rise to an axisymmetric flow field in the clearance gap. As a result, problems involving longitudinal or helical grooves, in the rotor or housing surfaces, go beyond the code capabilities. Discarding, for the moment, the pre- and post-processing phases, the bulk of the computational procedure consists of two main steps. The first is aimed at producing the axisymmetric 'zeroth-order' flow solution in the given flow domain. Detailed description of this problem, including the flow-governing equations, turbulence closure, boundary conditions, and the finite-element formulation, was covered by Baskharone and Hensel. The second main step is where the perturbation model is implemented, with the input being the centered-rotor 'zeroth-order' flow solution and a prescribed whirl frequency ratio (whirl frequency divided by the impeller speed). The computational domain, in the latter case, is treated as three dimensional, with the number of computational planes in the circumferential direction being specified a priori. The reader is reminded that the deformations in the finite elements are all infinitesimally small because the rotor eccentricity itself is a virtual displacement. This explains why we have generically termed the perturbation model the 'virtually' deformable finite-element category. The primary outcome of implementing the perturbation model is the tangential and radial components, F(sub theta)(sup *) and F(sub r)(sup *) of the fluid-exerted force on the rotor surface due to the whirling motion. Repetitive execution of the perturbation model subprogram over a sufficient range of whirl frequency ratios, and subsequent interpolation of these fluid forces, using the least-square method, finally enable the user to compute the impeller rotor dynamic coefficients of the fluid/rotor interaction. These are the direct and cross-coupled stiffness, damping, and inertia effects of the fluid/rotor interaction.
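The last step described, extracting rotordynamic coefficients by least-squares interpolation of the whirl forces, can be sketched as a polynomial fit. The sign convention below is the common linearized force model and is stated as an assumption; the report's own convention and nondimensionalization govern in practice.

```python
import numpy as np

# For a circular whirl of radius eps at whirl frequency ratio W, the common
# linearized force model gives (assumed convention, not necessarily the report's):
#   F_r/eps     = -K - c*W + M*W**2     (quadratic in W)
#   F_theta/eps =  k - C*W              (linear in W)
# so a least-squares polynomial fit over the computed whirl ratios recovers the
# direct/cross-coupled stiffness, damping, and inertia coefficients.
def rotordynamic_coefficients(whirl_ratios, Fr_over_eps, Ft_over_eps):
    W = np.asarray(whirl_ratios, dtype=float)
    a2, a1, a0 = np.polyfit(W, Fr_over_eps, 2)     # a2*W^2 + a1*W + a0
    b1, b0 = np.polyfit(W, Ft_over_eps, 1)         # b1*W + b0
    return {"K": -a0, "c": -a1, "M": a2,           # direct stiffness, cross damping, inertia
            "k": b0, "C": -b1}                     # cross stiffness, direct damping
```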
DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Williams, C. H.; Spurlock, O. F.
2014-01-01
From the late 1960's through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960's is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.
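A generic sketch of the numerical pattern described (4th-order Runge-Kutta integration inside a Newton-Raphson iteration that drives the terminal boundary-condition error of the two-point boundary value problem to zero), not the DUKSUP code itself, is:

```python
import numpy as np

# Classic fixed-step 4th-order Runge-Kutta integrator for dy/dt = f(t, y).
def rk4(f, y0, t0, t1, n_steps):
    h = (t1 - t0) / n_steps
    y, t = np.array(y0, dtype=float), t0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Newton-Raphson shooting on a single unknown initial value so that the
# terminal boundary condition residual(final_state) = 0 is satisfied.
def shoot(f, fixed_ic, residual, guess, t0, t1, tol=1e-10, n_steps=400):
    """fixed_ic: list of known initial values; guess: unknown initial value."""
    g = float(guess)
    for _ in range(50):
        r = residual(rk4(f, fixed_ic + [g], t0, t1, n_steps))
        if abs(r) < tol:
            break
        dg = 1e-6 * max(1.0, abs(g))                       # finite-difference Jacobian
        dr = residual(rk4(f, fixed_ic + [g + dg], t0, t1, n_steps)) - r
        g -= r * dg / dr                                   # Newton-Raphson update
    return g
```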
DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Spurlock, O. Frank; Williams, Craig H.
2015-01-01
From the late 1960s through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960s is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.
Multi-blocking strategies for the INS3D incompressible Navier-Stokes code
NASA Technical Reports Server (NTRS)
Gatlin, Boyd
1990-01-01
With the continuing development of bigger and faster supercomputers, computational fluid dynamics (CFD) has become a useful tool for real-world engineering design and analysis. However, the number of grid points necessary to resolve realistic flow fields numerically can easily exceed the memory capacity of available computers. In addition, geometric shapes of flow fields, such as those in the Space Shuttle Main Engine (SSME) power head, may be impossible to fill with continuous grids upon which to obtain numerical solutions to the equations of fluid motion. The solution to this dilemma is simply to decompose the computational domain into subblocks of manageable size. Computer codes that are single-block by construction can be modified to handle multiple blocks, but ad-hoc changes in the FORTRAN have to be made for each geometry treated. For engineering design and analysis, what is needed is generalization so that the blocking arrangement can be specified by the user. INS3D is a computer program for the solution of steady, incompressible flow problems. It is used frequently to solve engineering problems in the CFD Branch at Marshall Space Flight Center. INS3D uses an implicit solution algorithm and the concept of artificial compressibility to provide the necessary coupling between the pressure field and the velocity field. The development of generalized multi-block capability in INS3D is described.
24. VIEW, LOOKING NORTHEAST, SHOWING MAIN TRANSMISSION IN LEFT FOREGROUND, ...
24. VIEW, LOOKING NORTHEAST, SHOWING MAIN TRANSMISSION IN LEFT FOREGROUND, GASOLINE-POWERED WAUKESHA AUXILIARY DRIVE MOTOR AT CENTER, AND ONE OF TWO MAIN ELECTRIC DRIVE MOTORS AT LEFT CENTER - Sacramento River Bridge, Spanning Sacramento River at California State Highway 275, Sacramento, Sacramento County, CA
Defense Enrollment Eligibility Reporting System (DEERS) Program Manual
1982-05-01
[Fragment of the manual's two-column listing of eligible educational institutions by state, garbled in scanning; legible entries include the Episcopal Theological Seminary of the Southwest, Edinboro State College, Edison Community College, Erie Community College, Maine Maritime Academy, Lurleen B. Wallace State Junior College, the University of Maine at Orono and at Presque Isle, Luther College, Triangle Institute of Technology, Erie Institute of Technology, and the Texas Tech University Health Science Center.]
Developing computer training programs for blood bankers.
Eisenbrey, L
1992-01-01
Two surveys were conducted in July 1991 to gather information about computer training currently performed within American Red Cross Blood Services Regions. One survey was completed by computer trainers from software developer-vendors and regional centers. The second survey was directed to the trainees, to determine their perception of the computer training. The surveys identified the major concepts, length of training, evaluations, and methods of instruction used. Strengths and weaknesses of training programs were highlighted by trainee respondents. Using the survey information and other sources, recommendations (including those concerning which computer skills and tasks should be covered) are made that can be used as guidelines for developing comprehensive computer training programs at any blood bank or blood center.
A Review of Hemolysis Prediction Models for Computational Fluid Dynamics.
Yu, Hai; Engel, Sebastian; Janiga, Gábor; Thévenin, Dominique
2017-07-01
Flow-induced hemolysis is a crucial issue for many biomedical applications; in particular, it is an essential issue for the development of blood-transporting devices such as left ventricular assist devices, and other types of blood pumps. In order to estimate red blood cell (RBC) damage in blood flows, many models have been proposed in the past. Most models have been validated by their respective authors. However, the accuracy and the validity range of these models remains unclear. In this work, the most established hemolysis models compatible with computational fluid dynamics of full-scale devices are described and assessed by comparing two selected reference experiments: a simple rheometric flow and a more complex hemodialytic flow through a needle. The quantitative comparisons show very large deviations concerning hemolysis predictions, depending on the model and model parameter. In light of the current results, two simple power-law models deliver the best compromise between computational efficiency and obtained accuracy. Finally, hemolysis has been computed in an axial blood pump. The reconstructed geometry of a HeartMate II shows that hemolysis occurs mainly at the tip and leading edge of the rotor blades, as well as at the leading edge of the diffusor vanes. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
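The simple power-law models referred to above take the form HI = C τ^α t^β. A minimal sketch follows, with commonly quoted Giersiepen-type coefficients used here as an assumption rather than values taken from this review.

```python
# Power-law hemolysis index: HI(%) = C * tau**alpha * t**beta, with shear
# stress tau in Pa and exposure time t in seconds. The default coefficients
# are the commonly cited Giersiepen-type values (an assumption, not the
# review's own numbers); other parameter sets from the literature can be
# substituted directly.
def hemolysis_index(tau_pa, t_s, C=3.62e-5, alpha=2.416, beta=0.785):
    return C * tau_pa ** alpha * t_s ** beta

# Example: 200 Pa of shear for 0.1 s along a streamline through a pump gap.
print(hemolysis_index(200.0, 0.1))
```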
Internode data communications in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-03
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
Internode data communications in a parallel computer
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
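A minimal sketch of the buffering scheme described in the two abstracts above (with invented class and method names, and ordinary Python objects standing in for the hardware messaging unit): the messaging unit pre-allocates one buffer per expected process, parks messages that arrive before their destination process is initialized, and the process drains that buffer into its own main-memory buffer when it starts.

    # Sketch of early-arrival message buffering; all names are illustrative only.
    class MessagingUnit:
        def __init__(self, expected_ranks):
            # One pre-allocated message buffer per process expected on this node.
            self.mu_buffers = {rank: [] for rank in expected_ranks}

        def receive(self, dest_rank, message):
            # Park messages that arrive before the destination process exists.
            self.mu_buffers[dest_rank].append(message)

    class Process:
        def __init__(self, rank, mu):
            self.rank = rank
            self.main_memory_buffer = []     # established at initialization
            # Copy early-arrival messages out of the messaging unit's memory.
            self.main_memory_buffer.extend(mu.mu_buffers.pop(rank, []))

    mu = MessagingUnit(expected_ranks=[0, 1])
    mu.receive(1, b"halo data")              # arrives before rank 1 is initialized
    p1 = Process(1, mu)                      # initialization drains the MU buffer
    print(p1.main_memory_buffer)             # [b'halo data']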
Removing the center from computing: biology's new mode of digital knowledge production.
November, Joseph
2011-06-01
This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge and in the design of computers themselves as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960 the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's nature as a centralized device posed an even greater challenge to potential biologist users than did the computer's need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond: the personal computer.
Autonomic Computing for Spacecraft Ground Systems
NASA Technical Reports Server (NTRS)
Li, Zhenping; Savkli, Cetin; Jones, Lori
2007-01-01
Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.
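The Criteria Action Table concept can be pictured as a rule table evaluated against incoming telemetry. The sketch below is a generic, hypothetical criteria-action loop; the field names, thresholds, and actions are invented, and it does not reproduce the GMSEC or CAT interfaces.

    # Generic criteria/action loop in the spirit of a Criteria Action Table.
    # Telemetry fields, thresholds, and actions are invented for illustration.
    rules = [
        # (criterion over a telemetry sample, action to request)
        (lambda t: t["battery_voltage"] < 24.0, "switch to backup power"),
        (lambda t: t["downlink_errors"] > 10,   "request retransmission"),
    ]

    def evaluate(telemetry):
        """Return the actions whose criteria match this telemetry sample."""
        return [action for criterion, action in rules if criterion(telemetry)]

    sample = {"battery_voltage": 23.2, "downlink_errors": 3}
    print(evaluate(sample))   # ['switch to backup power']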
Computers as learning resources in the health sciences: impact and issues.
Ellis, L B; Hannigan, G G
1986-01-01
Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users, and more and more cloud centers offer infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating cost, the cloud center provides services according to users' requirements by partitioning resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we aim to maximize performance while minimizing energy cost. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population; the solution produced by the second stage is the final result of the proposed approach. The experimental results show that the proposed VM placement strategy can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
Applications of Modeling and Simulation for Flight Hardware Processing at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Marshall, Jennifer L.
2010-01-01
The Boeing Design Visualization Group (DVG) is responsible for the creation of highly-detailed representations of both on-site facilities and flight hardware using computer-aided design (CAD) software, with a focus on the ground support equipment (GSE) used to process and prepare the hardware for space. Throughout my ten weeks at this center, I have had the opportunity to work on several projects: the modification of the Multi-Payload Processing Facility (MPPF) High Bay, weekly mapping of the Space Station Processing Facility (SSPF) floor layout, kinematics applications for the Orion Command Module (CM) hatches, and the design modification of the Ares I Upper Stage hatch for maintenance purposes. The main goal of each of these projects was to generate an authentic simulation or representation using DELMIA V5 software. This allowed for evaluation of facility layouts, support equipment placement, and greater process understanding once it was used to demonstrate future processes to customers and other partners. As such, I have had the opportunity to contribute to a skilled team working on diverse projects with a central goal of providing essential planning resources for future center operations.
NASA Astrophysics Data System (ADS)
Gordov, E.; Shiklomanov, A.; Okladnikov, I.; Prusevich, A.; Titov, A.
2016-11-01
We present an approach and first results of a collaborative project being carried out by a joint team of researchers from the Institute of Monitoring of Climatic and Ecological Systems, Russia, and the Earth Systems Research Center, UNH, USA. Its main objective is the development of a hardware and software platform prototype of a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes in the Northern extratropical areas. The DRC should provide specialists working in climate-related sciences and decision-makers with accurate and detailed climatic characteristics for the selected area, and with reliable and affordable tools for their in-depth statistical analysis and for studies of the effects of climate change. Within the framework of the project, new approaches to cloud processing and analysis of large geospatial datasets (big geospatial data) inherent to climate change studies are developed and deployed on the technical platforms of both institutions. We discuss here the state of the art in this domain, describe the web-based information-computational systems developed by the partners, justify the methods chosen to reach the project goal, and briefly list the results obtained so far.
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users, and more and more cloud centers offer infrastructure as their main mode of operation. To improve the utilization rate of the cloud center and to decrease operating cost, the cloud center provides services according to users' requirements by partitioning resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we aim to maximize performance while minimizing energy cost. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population; the solution produced by the second stage is the final result of the proposed approach. The experimental results show that the proposed VM placement strategy can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
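To make the placement idea concrete, the following is a minimal single-stage genetic-algorithm sketch that assigns virtual machines to hosts while minimizing the number of powered-on hosts (a stand-in for energy cost) and penalizing capacity violations (a stand-in for QoS). It is an illustration under simplified assumptions, not the two-stage DPGA of the paper.

    # Minimal GA sketch: place VMs (CPU demands) onto hosts (CPU capacities),
    # minimizing powered-on hosts; illustrative only, not the paper's DPGA.
    import random

    vm_demand = [2, 3, 1, 4, 2, 2, 3, 1]   # hypothetical CPU units per VM
    host_cap = [8, 8, 8, 8]                # hypothetical capacity per host

    def fitness(assign):
        load = [0] * len(host_cap)
        for vm, h in enumerate(assign):
            load[h] += vm_demand[vm]
        overload = sum(max(0, load[h] - host_cap[h]) for h in range(len(host_cap)))
        hosts_on = sum(1 for l in load if l > 0)
        return hosts_on + 100 * overload   # heavy penalty for capacity (QoS) violations

    def evolve(pop_size=40, generations=200, mutation=0.1):
        pop = [[random.randrange(len(host_cap)) for _ in vm_demand] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(vm_demand))
                child = a[:cut] + b[cut:]
                if random.random() < mutation:
                    child[random.randrange(len(child))] = random.randrange(len(host_cap))
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))   # packs the VMs onto as few hosts as capacity allows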
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2
2011-01-01
area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek-like holodeck, where holographic avatars could
To center or not to center? Investigating inertia with a multilevel autoregressive model.
Hamaker, Ellen L; Grasman, Raoul P P P
2014-01-01
Whether level 1 predictors should be centered per cluster has received considerable attention in the multilevel literature. While most agree that there is no one preferred approach, it has also been argued that cluster mean centering is desirable when the within-cluster slope and the between-cluster slope are expected to deviate, and the main interest is in the within-cluster slope. However, we show in a series of simulations that if one has a multilevel autoregressive model in which the level 1 predictor is the lagged outcome variable (i.e., the outcome variable at the previous occasion), cluster mean centering will in general lead to a downward bias in the parameter estimate of the within-cluster slope (i.e., the autoregressive relationship). This is particularly relevant if the main question is whether there is on average an autoregressive effect. Nonetheless, we show that if the main interest is in estimating the effect of a level 2 predictor on the autoregressive parameter (i.e., a cross-level interaction), cluster mean centering should be preferred over other forms of centering. Hence, researchers should be clear on what is considered the main goal of their study, and base their choice of centering method on this when using a multilevel autoregressive model.
To center or not to center? Investigating inertia with a multilevel autoregressive model
Hamaker, Ellen L.; Grasman, Raoul P. P. P.
2015-01-01
Whether level 1 predictors should be centered per cluster has received considerable attention in the multilevel literature. While most agree that there is no one preferred approach, it has also been argued that cluster mean centering is desirable when the within-cluster slope and the between-cluster slope are expected to deviate, and the main interest is in the within-cluster slope. However, we show in a series of simulations that if one has a multilevel autoregressive model in which the level 1 predictor is the lagged outcome variable (i.e., the outcome variable at the previous occasion), cluster mean centering will in general lead to a downward bias in the parameter estimate of the within-cluster slope (i.e., the autoregressive relationship). This is particularly relevant if the main question is whether there is on average an autoregressive effect. Nonetheless, we show that if the main interest is in estimating the effect of a level 2 predictor on the autoregressive parameter (i.e., a cross-level interaction), cluster mean centering should be preferred over other forms of centering. Hence, researchers should be clear on what is considered the main goal of their study, and base their choice of centering method on this when using a multilevel autoregressive model. PMID:25688215
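The downward bias described in the two records above can be reproduced with a short simulation. The sketch below is a simplified illustration of the phenomenon (person-mean centering of the lagged outcome with a modest number of occasions), not the authors' simulation design.

    # Simulation sketch: centering the lagged outcome at each person's observed
    # mean biases the pooled autoregressive estimate downward (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, n_obs, phi = 200, 10, 0.4

    x_parts, y_parts = [], []
    for _ in range(n_persons):
        mu = rng.normal(0.0, 2.0)                 # person-specific mean
        y = np.empty(n_obs)
        y[0] = mu + rng.normal()
        for t in range(1, n_obs):
            y[t] = mu + phi * (y[t - 1] - mu) + rng.normal()
        x_parts.append(y[:-1] - y[:-1].mean())    # lagged outcome, centered at observed mean
        y_parts.append(y[1:] - y[1:].mean())

    x = np.concatenate(x_parts)
    yv = np.concatenate(y_parts)
    phi_hat = (x @ yv) / (x @ x)                  # pooled within-person OLS slope
    print(f"true phi = {phi}, estimated phi = {phi_hat:.3f}")  # noticeably below 0.4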
NASA Technical Reports Server (NTRS)
Bates, Harry
1990-01-01
A number of optical communication lines are now in use at the Kennedy Space Center (KSC) for the transmission of voice, computer data, and video signals. Presently, all of these channels utilize a single carrier wavelength centered near 1300 nm. The theoretical bandwidth of the fiber far exceeds the utilized capacity, yet practical considerations limit the usable bandwidth. The fibers have the capability of transmitting a multiplicity of signals simultaneously in each of two separate bands (1300 and 1550 nm). Thus, in principle, the number of transmission channels can be increased without installing new cable if some means of wavelength division multiplexing (WDM) can be utilized. The main goal of these experiments was to demonstrate that a factor-of-2 increase in bandwidth utilization can be achieved by letting two wavelengths share the same fiber, in both a unidirectional configuration and a bidirectional mode of operation. Both single-mode and multimode fiber are installed at KSC; the great majority is multimode, so this effort concentrated on multimode systems.
A Comprehensive Opacities/Atomic Database for the Analysis of Astrophysical Spectra and Modeling
NASA Technical Reports Server (NTRS)
Pradhan, Anil K. (Principal Investigator)
1997-01-01
The main goals of this ADP award have been accomplished. The electronic database TOPbase, consisting of the large volume of atomic data from the Opacity Project, has been installed and is operative at a NASA site at the Laboratory for High Energy Astrophysics Science Research Center (HEASRC) at the Goddard Space Flight Center. The database will be continually maintained and updated by the PI and collaborators. TOPbase is publicly accessible at topbase.gsfc.nasa.gov. During the last six months (since the previous progress report), considerable work has been carried out to: (1) add the new data for the low ionization stages of iron (Fe I-V), beginning with Fe II; (2) merge the high-energy photoionization cross sections computed by Dr. Hong Lin Zhang (consultant on the project) with the current Opacity Project data and input them into TOPbase; and (3) lay out plans for a further extension of TOPbase to include TIPbase, the database of collisional data that complements the radiative data in TOPbase.
An ergonomic evaluation of a call center performed by disabled agents.
Chi, Chia-Fen; Lin, Yen-Hui
2008-08-01
Potential ergonomic hazards for 27 disabled call center agents engaged in computer-telephone interactive tasks were evaluated for possible associations between the task behaviors and work-related disorders. Data included task descriptions, 300 samples of performance, a questionnaire on workstation design, body-part discomfort ratings, perceived stress, potential job stressors, and direct measurement of environmental factors. Analysis indicated that agents were frequently exposed to prolonged static sitting and repetitive movements, together with an unsupported back and flexed neck, causing musculoskeletal discomfort. Visual fatigue (85.2% of agents), discomfort of the ears (66.7%), and musculoskeletal discomfort (59.3%) were the most pronounced and prevalent complaints after prolonged working. Seventeen of the 27 agents described job pressure as high or very high; dealing with difficult customers and trying to fulfill customers' needs within the time standard were the main stressors. Further work on surrounding noise, earphone use, possible hearing loss of experienced agents, training programs, and feasible solutions for visual fatigue, musculoskeletal symptoms, and psychosocial stress should be conducted.
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
[Computer-aided prescribing: from utopia to reality].
Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P
2005-05-31
Objective: To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Design: Descriptive, cross-sectional study. Setting: Torreblanca Health Center in the province of Seville, southern Spain; since 29 October 2003 a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. Participants: All patients on the center's list who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Main measurements: Total number of administrative visits per patient for prescription renewal for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004, and the total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. Results: The mean number of administrative visits per month was 160 during February to July 2003, compared with 64 during February to July 2004, a 60% reduction in visits for prescription renewal. Conclusions: Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could help reduce the administrative burden considerably in primary care if the system were used in all centers.
Computer systems and software engineering
NASA Technical Reports Server (NTRS)
Mckay, Charles W.
1988-01-01
The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.
Use of PL/1 in a Bibliographic Information Retrieval System.
ERIC Educational Resources Information Center
Schipma, Peter B.; And Others
The Information Sciences section of ITT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…
1991-05-01
Marine Corps Training Systems (CBESS) memorization training Intelligence Center, Dam Neck Threat memorization training Commander Tactical Wings, Atlantic ... News Shipbuilding Technical training AEGIS Training Center, Dare Artificial Intelligence (AI) Tools Computerized front-end analysis tools NETSCPAC ... Technology Department and provides computational and electronic mail support for research in areas of artificial intelligence, computer-assisted instruction
Postdoctoral Fellow | Center for Cancer Research
The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and
Venus - Computer Simulated Global View Centered at 0 Degrees East Longitude
1996-03-14
This global view of the surface of Venus is centered at 0 degrees east longitude. NASA Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping were mapped onto a computer-simulated globe to create this image. http://photojournal.jpl.nasa.gov/catalog/PIA00257
Computer-Aided Corrosion Program Management
NASA Technical Reports Server (NTRS)
MacDowell, Louis
2010-01-01
This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shankar, Arjun
Computer scientist Arjun Shankar is director of the Compute and Data Environment for Science (CADES), ORNL’s multidisciplinary big data computing center. CADES offers computing, networking and data analytics to facilitate workflows for both ORNL and external research projects.
ISCR Annual Report: Fiscal Year 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, J R
2005-03-03
Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that "high performance computing is the backbone of the nation's science and technology enterprise". LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's "eyes and ears" in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the "feet and hands" that carry those advances into the Laboratory and incorporates them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.
Star polymers as unit cells for coarse-graining cross-linked networks
NASA Astrophysics Data System (ADS)
Molotilin, Taras Y.; Maduar, Salim R.; Vinogradova, Olga I.
2018-03-01
Reducing the complexity of cross-linked polymer networks while preserving their main macroscale properties is key to understanding them, and a crucial issue is relating the individual properties of the polymer constituents to those of the reduced network. Here we study polymer networks in a good solvent by considering star polymers as their unit elements, and first quantify the interaction between their centers of mass. We then reduce the complexity of a network by replacing sets of its bridged star polymers by equivalent effective soft particles with dense cores. Our coarse graining allows us to approximate complex polymer networks by much simpler ones, keeping their relevant mechanical properties, as illustrated in computer experiments.
Launch Vehicle Systems Analysis
NASA Technical Reports Server (NTRS)
Olds, John R.
1999-01-01
This report summarizes the key accomplishments of Georgia Tech's Space Systems Design Laboratory (SSDL) under NASA Grant NAG8-1302 from NASA Marshall Space Flight Center. The report consists of this summary white paper, copies of technical papers written under this grant, and several viewgraph-style presentations. During the course of this grant, four main tasks were completed: (1) the Simulated Combined-Cycle Rocket Engine Analysis Module (SCCREAM), a computer analysis tool for predicting the performance of various RBCC engine configurations; (2) Hyperion, a single-stage-to-orbit vehicle capable of delivering 25,000-pound payloads to the International Space Station orbit; (3) Bantam-X Support, a small payload mission; and (4) International Trajectory Support for interplanetary human Mars missions.
Role of CFD in propulsion design - Government perspective
NASA Technical Reports Server (NTRS)
Schutzenhofer, L. A.; Mcconnaughey, H. V.; Mcconnaughey, P. K.
1990-01-01
Various aspects of computational fluid dynamics (CFD), as it relates to design applications in rocket propulsion activities from the government perspective, are discussed. Specific examples are given that demonstrate the application of CFD to support hardware development activities, such as Space Shuttle Main Engine flight issues, and the associated teaming strategy used for solving such problems. In addition, select examples that delineate the motivation, methods of approach, goals, and key milestones for several space flight programs are cited. An approach is described toward applying CFD in the design environment from the government perspective. A discussion of benchmark validation, advanced technology hardware concepts, accomplishments, needs, future applications, and near-term expectations from the flight-center perspective is presented.
NASA Astrophysics Data System (ADS)
Roberts, John
2005-11-01
The rapid advancements in micro/nano biotechnology demand quantitative tools for characterizing microfluidic flows in lab-on-a-chip applications, validation of computational results for fully 3D flows in complex micro-devices, and efficient observation of cellular dynamics in 3D. We present a novel 3D micron-scale DPTV (defocused particle tracking velocimetry) that is capable of mapping out 3D Lagrangian as well as 3D Eulerian velocity flow fields at sub-micron resolution and with one camera. The main part of the imaging system is an epi-fluorescent microscope (Olympus IX 51), and the seeding particles are fluorescent particles with diameters ranging from 300 nm to 10 µm. A software package has been developed for identifying (x,y,z,t) coordinates of the particles using the defocused images. Using the imaging system, we successfully mapped the pressure-driven flow fields in microfluidic channels. In particular, we measured the Lagrangian flow fields in a microfluidic channel with a herringbone pattern at the bottom; the latter is used to enhance fluid mixing in lateral directions. The 3D particle tracks revealed flow structure that had previously been seen only in numerical computation. This work is supported by the National Science Foundation (CTS - 0514443), the Nanobiotechnology Center at Cornell, and The New York State Center for Life Science Enterprise.
NCC: A Multidisciplinary Design/Analysis Tool for Combustion Systems
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Quealy, Angela
1999-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Lewis Research Center (LeRC), and Pratt & Whitney (P&W). This development team operates under the guidance of the NCC steering committee. The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration.
NASA Accelerates SpaceCube Technology into Orbit
NASA Technical Reports Server (NTRS)
Petrick, David
2010-01-01
On May 11, 2009, STS-125 Space Shuttle Atlantis blasted off from Kennedy Space Center on a historic mission to service the Hubble Space Telescope (HST). In addition to sending up the hardware and tools required to repair the observatory, the servicing team at NASA's Goddard Space Flight Center also sent along a complex experimental payload called Relative Navigation Sensors (RNS). The main objective of the RNS payload was to provide real-time image tracking of HST during rendezvous and docking operations. RNS was a complete success, and was brought to life by four Xilinx FPGAs (Field Programmable Gate Arrays) tightly packed into one integrated computer called SpaceCube. SpaceCube is a compact, reconfigurable, multiprocessor computing platform for space applications demanding extreme processing capabilities based on Xilinx Virtex 4 FX60 FPGAs. In a matter of months, the concept quickly went from the white board to a fully funded flight project. The 4-inch by 4-inch SpaceCube processor card was prototyped by a group of Goddard engineers using internal research funding. Once engineers were able to demonstrate the processing power of SpaceCube to NASA, HST management stood behind the product and invested in a flight qualified version, inserting it into the heart of the RNS system. With the determination of putting Xilinx into space, the team strengthened to a small army and delivered a fully functional, space qualified system to the mission.
NASA Astrophysics Data System (ADS)
Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko
Lowering power consumption has become a worldwide concern. It is also a growing issue in computer systems, as reflected in the expanding use of software-as-a-service and cloud computing, whose market has grown steadily since 2000; at the same time, the number of data centers that host and manage these computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage and is still rising quickly. This research focuses on air conditioning, which accounts for the largest portion of electric power consumption in data centers, and proposes a technique to lower power consumption by using natural cool air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we also examine the extent of the energy reduction that is possible when a data center is located in Hokkaido.
Freisling, Heinz; Moskal, Aurelie; Ferrari, Pietro; Nicolas, Geneviève; Knaze, Viktoria; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Nailler, Laura; Teucher, Birgit; Grote, Verena A; Boeing, Heiner; Clemens, Matthias; Tjønneland, Anne; Olsen, Anja; Overvad, Kim; Quirós, J Ramón; Duell, Eric J; Sánchez, María-José; Amiano, Pilar; Chirlaque, Maria-Dolores; Barricarte, Aurelio; Khaw, Kay-Tee; Wareham, Nicholas J; Crowe, Francesca L; Gallo, Valentina; Oikonomou, Eleni; Naska, Androniki; Trichopoulou, Antonia; Palli, Domenico; Agnoli, Claudia; Tumino, Rosario; Polidoro, Silvia; Mattiello, Amalia; Bueno-de-Mesquita, H Bas; Ocké, Marga C; Peeters, Petra H M; Wirfält, Elisabet; Ericson, Ulrika; Bergdahl, Ingvar A; Johansson, Ingegerd; Hjartåker, Anette; Engeset, Dagrun; Skeie, Guri; Riboli, Elio; Slimani, Nadia
2013-06-01
Methodological differences in assessing dietary acrylamide (AA) often hamper comparisons of intake across populations. Our aim was to describe the mean dietary AA intake in 27 centers of 10 European countries according to selected lifestyle characteristics and its contributing food sources in the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In this cross-sectional analysis, 36 994 men and women, aged 35-74 years, completed a single, standardized 24-hour dietary recall using EPIC-Soft. Food consumption data were matched to a harmonized AA database. Intake was computed by gender and center, and across categories of habitual alcohol consumption, smoking status, physical activity, education, and body mass index (BMI). Adjustment was made for participants' age, height, weight, and energy intake using linear regression models. Adjusted mean AA intake across centers ranged from 13 to 47 μg/day in men and from 12 to 39 μg/day in women; intakes were higher in northern European centers. In most centers, intake in women was significantly higher among alcohol drinkers compared with abstainers. There were no associations between AA intake and physical activity, BMI, or education. At least 50% of AA intake across centers came from two food groups: "bread, crisp bread, rusks" and "coffee." The third main contributing food group was "potatoes". Dietary AA intake differs greatly among European adults residing in different geographical regions. This observed heterogeneity in AA intake deserves consideration in the design and interpretation of population-based studies of dietary AA intake and health outcomes.
Pelle, Gina; Perrucci, Mauro Gianni; Galati, Gaspare; Fattori, Patrizia; Galletti, Claudio; Committeri, Giorgia
2012-01-01
Background Several psychophysical experiments found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed at investigating the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space by taking target eccentricity and performing hand into account. Methodology/Principal Findings We examined several performance measures while subjects reached, in complete darkness, memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently from its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, that is, a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side. Conclusions While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching. PMID:23272180
Closeup View of the Space Shuttle Main Engine (SSME) 2044 ...
Close-up View of the Space Shuttle Main Engine (SSME) 2044 mounted in a SSME Engine Handler in the SSME processing Facility at Kennedy Space Center. This view shows SSME 2044 with its expansion nozzle removed and an Engine Leak-Test Plug is set in the throat of the Main Combustion Chamber in the approximate center of the image, the insulated, High-Pressure Fuel Turbopump sits below that and the Low Pressure Oxidizer Turbopump Discharge Duct sits towards the top of the engine assembly in this view. - Space Transportation System, Space Shuttle Main Engine, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
2006-08-01
preparing a COBRE Molecular Targets Project with a goal to extend the computational work of the Specific Aims of this project to the discovery of novel ... million Center of Biomedical Research Excellence (COBRE) grant from the National Center for Research Resources at the National Institutes of Health ... three-year COBRE-funded project in Molecular Targets. My recruitment to the University of Louisville's Brown Cancer Center and my proposed COBRE
DoDs Efforts to Consolidate Data Centers Need Improvement
2016-03-29
Consolidation Initiative, February 26, 2010. Green IT minimizes negative environmental impact of IT operations by ensuring that computers and computer-related ... objectives for consolidating data centers. DoD’s objectives were to: • reduce cost; • reduce environmental impact; • improve efficiency and service levels ... number of DoD data centers. ... information in DCIM, the DoD CIO did not confirm whether those changes would impact DoD’s
International Reference Ionosphere (IRI): Task Force Activity 2000
NASA Technical Reports Server (NTRS)
Bilitza, D.
2000-01-01
The annual IRI Task Force Activity was held at the Abdus Salam International Center for Theoretical Physics in Trieste, Italy, from July 10 to July 14. The participants included J. Adeniyi (University of Ilorin, Nigeria), D. Bilitza (NSSDC/RITSS, USA), D. Buresova (Institute of Atmospheric Physics, Czech Republic), B. Forte (ICTP, Italy), R. Leitinger (University of Graz, Austria), B. Nava (ICTP, Italy), M. Mosert (University National Tucuman, Argentina), S. Pulinets (IZMIRAN, Russia), S. Radicella (ICTP, Italy), and B. Reinisch (University of Mass. Lowell, USA). The main topic of this Task Force Activity was the modeling of the topside ionosphere and the development of strategies for modeling of ionospheric variability. Each day during the workshop week the team debated a specific modeling problem in the morning during informal presentations and round table discussions of all participants. Ways of resolving the specific modeling problem were devised and tested in the afternoon in front of the computers of the ICTP Aeronomy and Radiopropagation Laboratory, using ICTP's computer networks and internet access.
Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith
2009-01-01
This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.
The Center for Computational Biology: resources, achievements, and challenges
Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2011-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221
The Center for Computational Biology: resources, achievements, and challenges.
Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2012-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.
1981-01-01
A Space Shuttle Main Engine undergoes test-firing at the National Space Technology Laboratories (now the Stennis Space Center) in Mississippi. The Marshall Space Flight Center had management responsibility for Space Shuttle propulsion elements, including the Main Engines.
Decomposition of algebraic sets and applications to weak centers of cubic systems
NASA Astrophysics Data System (ADS)
Chen, Xingwu; Zhang, Weinian
2009-10-01
There are many methods such as Gröbner basis, characteristic set and resultant, in computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulted from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
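As a small illustration of the elimination step on which the decomposition builds, the Sylvester resultant of two polynomials with respect to one variable vanishes exactly when they share a common root in that variable, yielding a condition on the remaining variables. The sketch below uses SymPy on generic polynomials; it is not the period-constant computation of the paper.

    # Eliminating a variable with a Sylvester resultant (SymPy illustration;
    # the polynomials are generic examples, not the paper's period constants).
    import sympy as sp

    x, y = sp.symbols("x y")
    f = x**2 + y**2 - 1        # a circle
    g = x - y                  # a line
    # The resultant with respect to x vanishes exactly where f and g share a root in x.
    res = sp.resultant(f, g, x)
    print(sp.expand(res))      # 2*y**2 - 1, so common points occur at y = ±1/sqrt(2)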
Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Bartels, R. E.
2008-01-01
NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.
Twisting Anderson pseudospins with light: Quench dynamics in THz-pumped BCS superconductors
NASA Astrophysics Data System (ADS)
Chou, Yang-Zhi; Liao, Yunxiang; Foster, Matthew
We study the preparation and the detection of coherent far-from-equilibrium BCS superconductor dynamics in THz pump-probe experiments. In a recent experiment, an intense monocycle THz pulse with center frequency ω = Δ was injected into a superconductor with BCS gap Δ, and the post-pump evolution was detected via the optical conductivity. It was argued that nonlinear coupling of the pump to the Anderson pseudospins of the superconductor induces coherent dynamics of the Higgs mode Δ(t). We validate this picture in a 2D BCS model with a combination of exact numerics and the Lax reduction, and we compute the dynamical phase diagram. The main effect of the pump is to scramble the orientations of Anderson pseudospins along the Fermi surface by twisting them in the xy-plane. We show that more intense pulses can induce a far-from-equilibrium gapless phase (phase I), originally predicted in the context of interaction quenches. We show that the THz pump can reach phase I at much lower energy densities than an interaction quench, and we demonstrate that the Lax reduction provides a quantitative tool for computing coherent BCS dynamics. We also compute the optical conductivity for the states discussed here.
Unsteady Flow Interactions Between the LH2 Feed Line and SSME LPFP Inducer
NASA Technical Reports Server (NTRS)
Dorney, Dan; Griffin, Lisa; Marcu, Bogdan; Williams, Morgan
2006-01-01
An extensive computational effort has been performed in order to investigate the nature of unsteady flow in the fuel line supplying the three Space Shuttle Main Engines during flight. Evidence of high cycle fatigue (HCF) in the flow liner one diameter upstream of the Low Pressure Fuel Pump inducer has been observed in several locations. The analysis presented in this report has the objective of determining the driving mechanisms inducing HCF and the associated fluid flow phenomena. The simulations have been performed using two different computational codes, the NASA MSFC PHANTOM code and the Pratt and Whitney Rocketdyne ENIGMA code. The fuel flow through the flow liner and the pump inducer have been modeled in full three-dimensional geometry, and the results of the computations compared with test data taken during hot fire tests at NASA Stennis Space Center, and cold-flow water flow test data obtained at NASA MSFC. The numerical results indicate that unsteady pressure fluctuations at specific frequencies develop in the duct at the flow-liner location. Detailed frequency analysis of the flow disturbances is presented. The unsteadiness is believed to be an important source for fluctuating pressures generating high cycle fatigue.
Reflection Effects in Multimode Fiber Systems Utilizing Laser Transmitters
NASA Technical Reports Server (NTRS)
Bates, Harry E.
1991-01-01
A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector-fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most of it is multimode, this effort concentrated on multimode systems.
Brain-computer interface devices for patients with paralysis and amputation: a meeting report
NASA Astrophysics Data System (ADS)
Bowsher, K.; Civillico, E. F.; Coburn, J.; Collinger, J.; Contreras-Vidal, J. L.; Denison, T.; Donoghue, J.; French, J.; Getzoff, N.; Hochberg, L. R.; Hoffmann, M.; Judy, J.; Kleitman, N.; Knaack, G.; Krauthamer, V.; Ludwig, K.; Moynahan, M.; Pancrazio, J. J.; Peckham, P. H.; Pena, C.; Pinto, V.; Ryan, T.; Saha, D.; Scharen, H.; Shermer, S.; Skodacek, K.; Takmakov, P.; Tyler, D.; Vasudevan, S.; Wachrathit, K.; Weber, D.; Welle, C. G.; Ye, M.
2016-04-01
Objective. The Food and Drug Administration’s (FDA) Center for Devices and Radiological Health (CDRH) believes it is important to help stakeholders (e.g., manufacturers, health-care professionals, patients, patient advocates, academia, and other government agencies) navigate the regulatory landscape for medical devices. For innovative devices involving brain-computer interfaces, this is particularly important. Approach. Towards this goal, on 21 November, 2014, CDRH held an open public workshop on its White Oak, MD campus with the aim of fostering an open discussion on the scientific and clinical considerations associated with the development of brain-computer interface (BCI) devices, defined for the purposes of this workshop as neuroprostheses that interface with the central or peripheral nervous system to restore lost motor or sensory capabilities. Main results. This paper summarizes the presentations and discussions from that workshop. Significance. CDRH plans to use this information to develop regulatory considerations that will promote innovation while maintaining appropriate patient protections. FDA plans to build on advances in regulatory science and input provided in this workshop to develop guidance that provides recommendations for premarket submissions for BCI devices. These proceedings will be a resource for the BCI community during the development of medical devices for consumers.
Radar Model of Asteroid 216 Kleopatra
NASA Technical Reports Server (NTRS)
2000-01-01
These images show several views from a radar-based computer model of asteroid 216 Kleopatra. The object, located in the main asteroid belt between Mars and Jupiter, is about 217 kilometers (135 miles) long and about 94 kilometers (58 miles) wide, or about the size of New Jersey.
This dog-bone-shaped asteroid is an apparent leftover from an ancient, violent cosmic collision. Kleopatra is one of several dozen asteroids whose coloring suggests they contain metal. A team of astronomers observing Kleopatra used the 305-meter (1,000-foot) telescope of the Arecibo Observatory in Puerto Rico to bounce encoded radio signals off Kleopatra. Using sophisticated computer analysis techniques, they decoded the echoes, transformed them into images, and assembled a computer model of the asteroid's shape. The images were obtained when Kleopatra was about 171 million kilometers (106 million miles) from Earth. This model is accurate to within about 15 kilometers (9 miles). The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, operated by Cornell University, Ithaca, N.Y., for the National Science Foundation. The Kleopatra radar observations were supported by NASA's Office of Space Science, Washington, DC. JPL is managed for NASA by the California Institute of Technology in Pasadena.
Reflection effects in multimode fiber systems utilizing laser transmitters
NASA Astrophysics Data System (ADS)
Bates, Harry E.
1991-11-01
A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. At present, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most of the installed fiber is multimode, this effort concentrated on multimode systems.
Managing internode data communications for an uninitialized process in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R
2014-05-20
A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
Managing internode data communications for an uninitialized process in a parallel computer
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E
2014-05-20
A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
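As a rough illustration of the buffer-management flow described in these two patent abstracts, the following Python sketch models an MU message buffer that fills up before its target process has initialized, and an application agent that drains it into a temporary buffer in main memory. All class and variable names (MUMessageBuffer, ApplicationAgent, the capacity of 2, the rank label) are hypothetical and chosen only for illustration; this is not the patented implementation.

```python
from collections import deque

class MUMessageBuffer:
    """Fixed-capacity buffer held in messaging-unit (MU) memory (hypothetical model)."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.messages = deque()

    def is_full(self):
        return len(self.messages) >= self.capacity

    def receive(self, message):
        if self.is_full():
            raise OverflowError("MU buffer full")
        self.messages.append(message)

class ApplicationAgent:
    """Moves messages for still-uninitialized processes into main computer memory."""
    def __init__(self):
        self.temporary_buffers = {}  # process id -> list kept in main memory

    def drain_if_full(self, process_id, mu_buffer):
        if mu_buffer.is_full():
            temp = self.temporary_buffers.setdefault(process_id, [])
            while mu_buffer.messages:
                temp.append(mu_buffer.messages.popleft())

# Usage sketch: messages arrive for a process that has not started yet.
agent = ApplicationAgent()
buf = MUMessageBuffer(capacity=2)
for msg in ("m0", "m1"):
    buf.receive(msg)
agent.drain_if_full("rank-7", buf)        # MU buffer is full -> moved to main memory
buf.receive("m2")                         # MU buffer has room again
print(agent.temporary_buffers["rank-7"])  # ['m0', 'm1']
```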
Expanding HPC and Research Computing--The Sustainable Way
ERIC Educational Resources Information Center
Grush, Mary
2009-01-01
Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…
School Data Processing Services in Texas. A Cooperative Approach. [Revised].
ERIC Educational Resources Information Center
Texas Education Agency, Austin. Management Information Center.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
School Data Processing Services in Texas: A Cooperative Approach.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…
Remote Science Operation Center research
NASA Technical Reports Server (NTRS)
Banks, P. M.
1986-01-01
Progress in the following areas is discussed: the design, planning and operation of a remote science payload operations control center; design and planning of a data link via satellite; and the design and prototyping of an advanced workstation environment for multi-media (3-D computer aided design/computer aided engineering, voice, video, text) communications and operations.
SAM: The "Search and Match" Computer Program of the Escherichia coli Genetic Stock Center
ERIC Educational Resources Information Center
Bachmann, B. J.; And Others
1973-01-01
Describes a computer program used at a genetic stock center to locate particular strains of bacteria. The program can match up to 30 strain descriptions requested by a researcher with the records on file. Uses of this particular program can be made in many fields. (PS)
Hibbing Community College's Community Computer Center.
ERIC Educational Resources Information Center
Regional Technology Strategies, Inc., Carrboro, NC.
This paper reports on the development of the Community Computer Center (CCC) at Hibbing Community College (HCC) in Minnesota. HCC is located in the largest iron mining area in the United States. Closures of steel-producing plants are affecting the Hibbing area. Outmigration, particularly of younger workers and their families, has been…
48 CFR 9905.506-60 - Illustrations.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... (48 CFR 9905.506-60, Illustrations, 2013 edition).
The Mathematics and Computer Science Learning Center (MLC).
ERIC Educational Resources Information Center
Abraham, Solomon T.
The Mathematics and Computer Science Learning Center (MLC) was established in the Department of Mathematics at North Carolina Central University during the fall semester of the 1982-83 academic year. The initial operations of the MLC were supported by grants to the University from the Burroughs-Wellcome Company and the Kenan Charitable Trust Fund.…
Film Library Information Management System.
ERIC Educational Resources Information Center
Minnella, C. Vincent; And Others
The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…
1999-05-26
Looking for a faster computer? How about an optical computer that processes data streams simultaneously and works with the speed of light? In space, NASA researchers have formed optical thin-films. By turning these thin-films into very fast optical computer components, scientists could improve computer tasks, such as pattern recognition. Dr. Hossin Abdeldayem, physicist at NASA/Marshall Space Flight Center (MSFC) in Huntsville, AL, is working with lasers as part of an optical system for pattern recognition. These systems can be used for automated fingerprinting, photographic scanning and the development of sophisticated artificial intelligence systems that can learn and evolve. Photo credit: NASA/Marshall Space Flight Center (MSFC)
Knowledge management: Role of the Radiation Safety Information Computational Center (RSICC)
NASA Astrophysics Data System (ADS)
Valentine, Timothy
2017-09-01
The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.
Detail of main entry at center of southeast elevation; camera ...
Detail of main entry at center of southeast elevation; camera facing west. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Computer Learning for Young Children.
ERIC Educational Resources Information Center
Choy, Anita Y.
1995-01-01
Computer activities that combine education and entertainment make learning easy and fun for preschoolers. Computers encourage social skills, language and literacy skills, cognitive development, problem solving, and eye-hand coordination. The paper describes one teacher's experiences setting up a computer center and using computers with…
NASA Astrophysics Data System (ADS)
Petrushin, Alexey; Ferrara, Lorenzo; Blau, Axel
2016-12-01
Objective. In light of recent progress in mapping neural function to behavior, we briefly and selectively review past and present endeavors to reveal and reconstruct nervous system function in Caenorhabditis elegans through simulation. Approach. Rather than presenting an all-encompassing review on the mathematical modeling of C. elegans, this contribution collects snapshots of pathfinding key works and emerging technologies that recent single- and multi-center simulation initiatives are building on. We thereby point out a few general limitations and problems that these undertakings are faced with and discuss how these may be addressed and overcome. Main results. Lessons learned from past and current computational approaches to deciphering and reconstructing information flow in the C. elegans nervous system corroborate the need of refining neural response models and linking them to intra- and extra-environmental interactions to better reflect and understand the actual biological, biochemical and biophysical events that lead to behavior. Together with single-center research efforts, the Si elegans and OpenWorm projects aim at providing the required, in some cases complementary tools for different hardware architectures to support advancement into this direction. Significance. Despite its seeming simplicity, the nervous system of the hermaphroditic nematode C. elegans with just 302 neurons gives rise to a rich behavioral repertoire. Besides controlling vital functions (feeding, defecation, reproduction), it encodes different stimuli-induced as well as autonomous locomotion modalities (crawling, swimming and jumping). For this dichotomy between system simplicity and behavioral complexity, C. elegans has challenged neurobiologists and computational scientists alike. Understanding the underlying mechanisms that lead to a context-modulated functionality of individual neurons would not only advance our knowledge on nervous system function and its failure in pathological states, but have directly exploitable benefits for robotics and the engineering of brain-mimetic computational architectures that are orthogonal to current von-Neumann-type machines.
Oh, Pok-Ja; Kim, Il-Ok; Shin, Sung-Rae; Jung, Hoe-Kyung
2004-10-01
This study was conducted to develop Web-based multimedia content for Physical Examination and Health Assessment. The multimedia content was developed based on Jung's teaching and learning structure plan model, using the following 5 processes: 1) Analysis Stage, 2) Planning Stage, 3) Storyboard Framing and Production Stage, 4) Program Operation Stage, and 5) Final Evaluation Stage. The web based multimedia content consisted of an intro movie, main page and sub pages. On the main page, there were 6 menu bars consisting of Announcement center, Information of professors, Lecture guide, Cyber lecture, Q&A, and Data centers, and a site map which introduced the 15 weekly lectures. In the operation of the web based multimedia content, HTML, JavaScript, Flash, and multimedia technology (Audio and Video) were utilized, and the content consisted of text content, interactive content, animation, and audio & video. Consultation with experts in content, computer engineering, and educational technology was utilized in the development of these processes. Web-based multimedia content is expected to offer individualized and tailored learning opportunities to maximize and facilitate the effectiveness of the teaching and learning process. Therefore, multimedia content should be utilized concurrently with the lecture in the Physical Examination and Health Assessment classes as a vital teaching aid to make up for the weakness of the face-to-face teaching-learning method.
Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.
ERIC Educational Resources Information Center
Murray, David R.
This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…
Cloud services on an astronomy data center
NASA Astrophysics Data System (ADS)
Solar, Mauricio; Araya, Mauricio; Farias, Humberto; Mardones, Diego; Wang, Zhong
2016-08-01
The research on computational methods for astronomy performed by the first phase of the Chilean Virtual Observatory (ChiVO) led to the development of functional prototypes, implementing state-of-the-art computational methods and proposing new algorithms and techniques. The ChiVO software architecture is based on the use of the IVOA protocols and standards. These protocols and standards are grouped in layers, with emphasis on the application and data layers, because their basic standards define the minimum operation that a VO should conduct. As a preliminary verification, the current implementation works with a 1 TB data set, which comes from the reduction of cycle 0 of ALMA. This research was mainly focused on spectroscopic data cubes coming from the cycle 0 ALMA public data. As the dataset size increases, with the cycle 1 ALMA public data also growing every month, data processing is becoming a major bottleneck for scientific research in astronomy. When designing the ChiVO, we focused on improving both computation and I/O costs, and this led us to configure a data center with 424 high speed cores of 2.6 GHz, 1 PB of storage (distributed across hard disk drives (HDD) and solid state drives (SSD)) and high speed Infiniband communication. We are developing a cloud based e-infrastructure for ChiVO services, in order to have a coherent framework for developing novel web services for on-line data processing in the ChiVO. We are currently parallelizing these new algorithms and techniques using HPC tools to speed up big data processing, and we will report our results in terms of data size, data distribution, number of cores and response time, in order to compare different processing and storage configurations.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
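The ELAPSE routines themselves are written in Ada and Common Lisp; as a loose, language-neutral sketch of the same benchmarking idea, the Python snippet below times a Cholesky factorization of a synthetic symmetric positive definite matrix and reports the best of several runs. The matrix size, repeat count, and function name are hypothetical choices and are not part of the ELAPSE suite.

```python
import time
import numpy as np

def time_cholesky(n=512, repeats=5):
    """Time a Cholesky factorization of an n x n SPD matrix, in the spirit of a benchmark routine."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    spd = a @ a.T + n * np.eye(n)   # symmetric positive definite by construction
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.cholesky(spd)
        best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    print(f"best Cholesky time: {time_cholesky():.4f} s")
```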
Considerations for Software Defined Networking (SDN): Approaches and use cases
NASA Astrophysics Data System (ADS)
Bakshi, K.
Software Defined Networking (SDN) is an evolutionary approach to network design and functionality based on the ability to programmatically modify the behavior of network devices. SDN uses user-customizable and configurable software that's independent of hardware to enable networked systems to expand data flow control. SDN is in large part about understanding and managing a network as a unified abstraction. It will make networks more flexible, dynamic, and cost-efficient, while greatly simplifying operational complexity. And this advanced solution provides several benefits including network and service customizability, configurability, improved operations, and increased performance. There are several approaches to SDN and its practical implementation. Among them, two have risen to prominence with differences in pedigree and implementation. This paper's main focus will be to define, review, and evaluate salient approaches and use cases of the OpenFlow and Virtual Network Overlay approaches to SDN. OpenFlow is a communication protocol that gives access to the forwarding plane of a network's switches and routers. The Virtual Network Overlay relies on a completely virtualized network infrastructure and services to abstract the underlying physical network, which allows the overlay to be mobile to other physical networks. This is an important requirement for cloud computing, where applications and associated network services are migrated to cloud service providers and remote data centers on the fly as resource demands dictate. The paper will discuss how and where SDN can be applied and implemented, including research and academia, virtual multitenant data center, and cloud computing applications. Specific attention will be given to the cloud computing use case, where automated provisioning and programmable overlay for scalable multi-tenancy is leveraged via the SDN approach.
Evaluate the ability of clinical decision support systems (CDSSs) to improve clinical practice.
Ajami, Sima; Amini, Fatemeh
2013-01-01
The prevalence of new diseases, the advancement of medical science, and the increase in referrals to health care centers provide fertile ground for the growth of medical errors. Errors can involve medicines, surgery, diagnosis, equipment, or lab reports. Medical errors can occur anywhere in the health care system: in hospitals, clinics, surgery centers, doctors' offices, nursing homes, pharmacies, and patients' homes. According to the Institute of Medicine (IOM), 98,000 people die every year from preventable medical errors. In 2010, of all medical error records referred to the Iran Legal Medicine Organization, 46.5% of physicians and medical team members were found to be at fault. One of the new technologies that can reduce medical errors is clinical decision support systems (CDSSs). This was an unsystematic review study. The literature on the ability of clinical decision support systems to improve clinical practice was searched with the help of libraries, books, conference proceedings, databases, and search engines such as Google and Google Scholar. For our searches, we employed the following keywords and their combinations: medical error, clinical decision support systems, computer-based clinical decision support systems, information technology, information system, health care quality, and computer systems, in the searching areas of title, keywords, abstract, and full text. In this study, more than 100 articles and reports were collected and 38 of them were selected based on their relevancy. CDSSs are computer programs designed to help health care providers. These systems, as knowledge-based tools, could help health care managers in the analysis, evaluation, improvement, and selection of effective solutions in clinical decisions. Therefore, they have a main role in reducing medical errors. The aim of this study was to express the ability of CDSSs to improve clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-10-16
A workshop was held at the RIKEN-BNL Research Center on October 16, 1998, as part of the first anniversary celebration for the center. This meeting brought together the physicists from RIKEN-BNL, BNL and Columbia who are using the QCDSP (Quantum Chromodynamics on Digital Signal Processors) computer at the RIKEN-BNL Research Center for studies of QCD. Many of the talks in the workshop were devoted to domain wall fermions, a discretization of the continuum description of fermions which preserves the global symmetries of the continuum, even at finite lattice spacing. This formulation has been the subject of analytic investigation for some time and has reached the stage where large-scale simulations in QCD seem very promising. With the computational power available from the QCDSP computers, scientists are looking forward to an exciting time for numerical simulations of QCD.
Computational Science News | Computational Science | NREL
...-Cooled High-Performance Computing Technology at the ESIF (February 28, 2018). NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) system.
Computer Training for Seniors: An Academic-Community Partnership
ERIC Educational Resources Information Center
Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha
2013-01-01
Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…
Benefits Analysis of Multi-Center Dynamic Weather Routes
NASA Technical Reports Server (NTRS)
Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien
2014-01-01
Dynamic weather routes are flight plan corrections that can provide airborne flights more than user-specified minutes of flying-time savings, compared to their current flight plan. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region), while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been continuously running with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation, as if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States. These Centers are generally the most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate that an overall average savings of about 10 minutes of flying time is obtained per flight.
Library Media Learning and Play Center.
ERIC Educational Resources Information Center
Faber, Therese; And Others
Preschool educators developed a library media learning and play center to enable children to "experience" a library; establish positive attitudes about the library; and encourage respect for self, others, and property. The center had the following areas: check-in and check-out desk, quiet reading section, computer center, listening center, video…
Swart, Marcel; Bickelhaupt, F Matthias
2006-03-01
We have carried out an extensive exploration of the gas-phase basicity of archetypal anionic bases across the periodic system using the generalized gradient approximation of density functional theory (DFT) at BP86/QZ4P//BP86/TZ2P. First, we validate DFT as a reliable tool for computing proton affinities and related thermochemical quantities: BP86/QZ4P//BP86/TZ2P is shown to yield a mean absolute deviation of 1.6 kcal/mol for the proton affinity at 0 K with respect to high-level ab initio benchmark data. The main purpose of this work is to provide the proton affinities (and corresponding entropies) at 298 K of the anionic conjugate bases of all main-group-element hydrides of groups 14-17 and periods 2-6. We have also studied the effect of stepwise methylation of the protophilic center of the second- and third-period bases.
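As a minimal sketch of how a proton affinity can be assembled from computed quantities, the Python function below forms PA(0 K) for AH → A⁻ + H⁺ from electronic energies and zero-point energies in hartree (the bare proton contributes neither). Thermal corrections to 298 K and the entropy terms discussed in the abstract are omitted, and the numerical inputs are illustrative placeholders, not BP86/QZ4P//BP86/TZ2P results.

```python
HARTREE_TO_KCAL = 627.509474  # 1 hartree in kcal/mol

def proton_affinity_0K(e_anion, zpe_anion, e_neutral, zpe_neutral):
    """PA(0 K) in kcal/mol for AH -> A- + H+, from electronic energies and ZPEs in hartree.
    The free proton has no electronic energy or zero-point energy, so it contributes nothing."""
    delta = (e_anion + zpe_anion) - (e_neutral + zpe_neutral)
    return delta * HARTREE_TO_KCAL

# Hypothetical hartree values, for illustration only (not results from the paper).
print(round(proton_affinity_0K(-76.200, 0.008, -76.800, 0.021), 1))
```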
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Prosser, William H.
2011-01-01
The Director of the NASA Engineering and Safety Center (NESC) requested an independent assessment of the anomalous gaseous hydrogen (GH2) flow incident on the Space Shuttle Program (SSP) Orbiter Vehicle (OV)-105 during the Space Transportation System (STS)-126 mission. The main propulsion system (MPS) engine #2 GH2 flow control valve (FCV) LV-57 transitioned from the low towards the high flow position without being commanded. Post-flight examination revealed that the FCV LV-57 poppet had experienced a fatigue failure that liberated a section of the poppet flange. The NESC assessment provided a peer review of the computational fluid dynamics (CFD), stress analysis, and impact testing. A probability of detection (POD) study was requested by the SSP Orbiter Project for the eddy current (EC) nondestructive evaluation (NDE) techniques that were developed to inspect the flight FCV poppets. This report contains the findings and recommendations from the NESC assessment.
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are the places which consist of networks of remote servers to store, access and process the data. Cloud computing is a technology where users worldwide submit their tasks and the service providers direct the requests to the data centers which are responsible for execution of the tasks. The servers in the data centers need to employ the virtualization concept so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of virtual machines created in each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. The tasks submitted are routed to the data center with the least energy consumption, which results in minimizing the operational expenses of a service provider.
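A minimal sketch of the selection rule described in the abstract, assuming a toy per-server energy model: each server's energy is estimated from the virtual machines it hosts, server energies are summed per data center, and the task is routed to the data center with the lowest total. The energy coefficients, data-center names, and VM labels are hypothetical.

```python
def server_energy(vm_list, idle_power=100.0, per_vm_power=35.0):
    """Hypothetical energy model: idle power plus a per-VM share (arbitrary units)."""
    return idle_power + per_vm_power * len(vm_list)

def select_data_center(data_centers):
    """Pick the data center whose summed server energy is lowest."""
    def total_energy(name):
        return sum(server_energy(vms) for vms in data_centers[name])
    return min(data_centers, key=total_energy)

# Each data center is a list of servers; each server is a list of the VMs it hosts.
centers = {
    "dc-east": [["vm1", "vm2"], ["vm3"]],
    "dc-west": [["vm4"], ["vm5"]],
}
print(select_data_center(centers))  # dc-west: fewer VMs -> lower modeled energy
```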
Garzón-Alvarado, Diego A
2013-01-21
This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations of two molecules (BMP and Noggin) whose behavior is of the activator-substrate type, and its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells. Thus the mature cells can become osteoblasts due to the action of BMP2. Therefore, with this model, we can have two frontal primary centers, two parietal, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those centers found at an embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
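For readers unfamiliar with activator-substrate systems, the sketch below advances a generic Gray-Scott-type reaction-diffusion model (a standard activator-substrate form) by explicit Euler steps on a periodic grid. It is only a schematic of the class of equations the article uses; the parameters, grid size, and initial seed are arbitrary and are not the BMP/Noggin model of the paper.

```python
import numpy as np

def laplacian(f):
    """Five-point Laplacian with periodic boundaries (grid spacing = 1)."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def activator_substrate_step(a, s, da=0.08, ds=0.16, f=0.04, k=0.06, dt=1.0):
    """One explicit Euler step: activator a is produced from substrate s (s*a*a) and decays;
    substrate is consumed by the reaction and replenished at rate f."""
    reaction = s * a * a
    a_next = a + dt * (da * laplacian(a) + reaction - (f + k) * a)
    s_next = s + dt * (ds * laplacian(s) - reaction + f * (1.0 - s))
    return a_next, s_next

# Small demo: substrate everywhere, a seed of activator near the center of the grid.
n = 64
a = np.zeros((n, n)); s = np.ones((n, n))
a[n//2-2:n//2+2, n//2-2:n//2+2] = 0.5
for _ in range(2000):
    a, s = activator_substrate_step(a, s)
print("activator peak after 2000 steps:", float(a.max()))
```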
27. Pump Room interiorDrainage pump motor control center with main ...
27. Pump Room interior-Drainage pump motor control center with main valve control panel at right. - Hunters Point Naval Shipyard, Drydock No. 4, East terminus of Palou Avenue, San Francisco, San Francisco County, CA
10. INTERIOR, SHOWING RECEPTION ROOM IN CENTER SECTION, WITH MAIN ...
10. INTERIOR, SHOWING RECEPTION ROOM IN CENTER SECTION, WITH MAIN ENTRANCE AT RIGHT. VIEW TO SOUTHWEST. - Fort David A. Russell, Red Cross Building, Third Street between Randall Avenue & Tenth Cavalry Avenue, Cheyenne, Laramie County, WY
40. MAIN DRIVE SHAFT IN CENTER, PATTERN STORAGE IN REAR, ...
40. MAIN DRIVE SHAFT IN CENTER, PATTERN STORAGE IN REAR, WATER TANK AT RIGHT-LOOKING EAST. - W. A. Young & Sons Foundry & Machine Shop, On Water Street along Monongahela River, Rices Landing, Greene County, PA
NASA Astrophysics Data System (ADS)
Assi, Abed Al Nasser
2018-03-01
Reduction of the patient's received radiation dose to as low as reasonably achievable (ALARA) is based on recommendations of radiation protection organizations such as the International Commission on Radiological Protection (ICRP) and the National Radiological Protection Board (NRPB). The aim of this study was to explore the frequency and characteristics of rejected/repeated radiographic films in governmental and private centers in Jenin city. The radiological centers were chosen based on their high volume of radiographic studies. The evaluation was carried out over a period of four months. The collected data were compiled at the end of each week and entered into a computer for analysis at the end of the study. Overall, 5000 films (images) were performed in four months; the average repeat rate of radiographic images was 10% (500 films). The repetition rate was the same for both thoracic and abdominal images (42%). The main reasons for repeating imaging were inadequate imaging quality (58.2%) and poor film processing (38%). Human error was the most likely reason necessitating the repetition of the radiographs (48%). Infant and children groups comprised 85% of the patient population that required repetition of the radiographic studies. In conclusion, we have a higher repetition rate of imaging studies compared to the international standards (10% vs. 4-6%, respectively). This is especially noticeable in infants and children, and mainly attributed to human error in obtaining and processing images. This is an important issue that needs to be addressed on a national level due to the ill effects associated with excessive exposure to radiation, especially in children, and to reduce the cost of the care delivered.
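The headline repeat rate quoted in the abstract is a simple ratio; the one-liner below reproduces it from the reported counts (500 repeated films out of 5000).

```python
def repeat_rate(total_films, repeated_films):
    """Percentage of films that had to be repeated."""
    return 100.0 * repeated_films / total_films

# Figures reported in the study: 500 repeats out of 5000 films over four months.
print(f"{repeat_rate(5000, 500):.1f}%")  # 10.0%
```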
Establishment of a Beta Test Center for the NPARC Code at Central State University
NASA Technical Reports Server (NTRS)
Okhio, Cyril B.
1996-01-01
Central State University has received a supplementary award to purchase computer workstations for the NPARC (National Propulsion Ames Research Center) computational fluid dynamics code Beta Test Center. The computational code has also been acquired for installation on the workstations. The acquisition of this code is an initial step for CSU in joining an alliance composed of NASA, AEDC, the aerospace industry, and academia. A post-doctoral research fellow from a neighboring university will assist the PI in preparing a template for tutorial documents for the Beta Test Center. The major objective of the alliance is to establish a national applications-oriented CFD capability, centered on the NPARC code. By joining the alliance, the Beta Test Center at CSU will allow the PI, as well as undergraduate and post-graduate students, to test the capability of the NPARC code in predicting the physics of aerodynamic/geometric configurations that are of interest to the alliance. Currently, CSU is developing a once-a-year, hands-on conference/workshop based upon the experience acquired from running other codes similar to the NPARC code in the first year of this grant.
Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
Patient-centered computing: can it curb malpractice risk?
Bartlett, E E
1993-01-01
The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors.
Patient-centered computing: can it curb malpractice risk?
Bartlett, E. E.
1993-01-01
The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors. PMID:8130563
Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.
A Report on the Design and Construction of the University of Massachusetts Computer Science Center.
ERIC Educational Resources Information Center
Massachusetts State Office of the Inspector General, Boston.
This report describes a review conducted by the Massachusetts Office of the Inspector General on the construction of the Computer Science and Development Center at the University of Massachusetts, Amherst. The office initiated the review after hearing concerns about the management of the project, including its delayed completion and substantial…
Higher-order methods for simulations on quantum computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sornborger, A.T.; Stewart, E.D.
1999-09-01
To implement many-qubit gates for use in quantum simulations on quantum computers efficiently, we develop and present methods re-expressing $\exp[-i(H_1 + H_2 + \cdots)\Delta t]$ as a product of factors $\exp[-iH_1 \Delta t]$, $\exp[-iH_2 \Delta t]$, ..., which is accurate to third or fourth order in $\Delta t$. The methods we derive are an extended form of the symplectic method, and can also be used for an integration of classical Hamiltonians on classical computers. We derive both integral and irrational methods, and find the most efficient methods in both cases. © 1999 The American Physical Society.
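The paper's third- and fourth-order factorizations are not reproduced here, but the standard splittings they extend are easy to check numerically. The sketch below compares the first-order product $\exp[-iH_1\Delta t]\exp[-iH_2\Delta t]$ and the symmetric (Strang) product against the exact exponential for two random Hermitian matrices; the matrix size, $\Delta t$, and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

h1, h2 = random_hermitian(4), random_hermitian(4)
dt = 0.01

exact = expm(-1j * (h1 + h2) * dt)
# First-order (Lie-Trotter) splitting: error O(dt^2) per step.
lie = expm(-1j * h1 * dt) @ expm(-1j * h2 * dt)
# Symmetric (Strang) splitting: error O(dt^3) per step.
strang = expm(-1j * h1 * dt / 2) @ expm(-1j * h2 * dt) @ expm(-1j * h1 * dt / 2)

print("first-order error:", np.linalg.norm(lie - exact))
print("symmetric error  :", np.linalg.norm(strang - exact))
```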
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
In this article, the problem of supporting scientific projects throughout their lifecycle in the computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in processes related to the provision and support of services of a computer center. In view of the strong integration of IT infrastructure components through the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.
Finance for practicing radiologists.
Berlin, Jonathan W; Lexa, Frank James
2005-03-01
This article reviews basic finance for radiologists. Using the example of a hypothetical outpatient computed tomography center, readers are introduced to the concept of net present value. This concept refers to the current real value of anticipated income in the future, realizing that revenue in the future has less value than it does today. Positive net present value projects add wealth to a practice and should be pursued. The article details how costs and revenues for a hypothetical outpatient computed tomography center are determined and elucidates the difference between fixed costs and variable costs. The article provides readers with the steps used to calculate the break-even volume for an outpatient computed tomography center given situation-specific assumptions regarding staff, equipment lease rates, rent, and third-party payer mix.
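A minimal numerical sketch of the two quantities the article walks through, net present value and break-even volume, using made-up figures for a hypothetical outpatient CT center; the dollar amounts, discount rate, and per-scan numbers are illustrative only and are not taken from the article.

```python
def net_present_value(cash_flows, discount_rate):
    """NPV of yearly cash flows; cash_flows[0] occurs today (e.g. the negative initial cost)."""
    return sum(cf / (1.0 + discount_rate) ** year for year, cf in enumerate(cash_flows))

def break_even_volume(fixed_costs, revenue_per_scan, variable_cost_per_scan):
    """Scans per year needed so the contribution margin covers fixed costs."""
    return fixed_costs / (revenue_per_scan - variable_cost_per_scan)

# Hypothetical CT-center numbers, for illustration only.
npv = net_present_value([-1_000_000, 300_000, 300_000, 300_000, 300_000], discount_rate=0.08)
volume = break_even_volume(fixed_costs=800_000, revenue_per_scan=450, variable_cost_per_scan=120)
print(f"NPV: {npv:,.0f}  break-even scans/year: {volume:,.1f}")
```

A positive NPV under these assumptions would argue for pursuing the project; a negative one would not, which is exactly the decision rule the abstract describes.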
Readiness of healthcare providers for eHealth: the case from primary healthcare centers in Lebanon.
Saleh, Shadi; Khodor, Rawya; Alameddine, Mohamad; Baroud, Maysa
2016-11-10
eHealth can positively impact the efficiency and quality of healthcare services. Its potential benefits extend to the patient, healthcare provider, and organization. Primary healthcare (PHC) settings may particularly benefit from eHealth. In these settings, healthcare provider readiness is key to successful eHealth implementation. Accordingly, it is necessary to explore the potential readiness of providers to use eHealth tools. Therefore, the purpose of this study was to assess the readiness of healthcare providers working in PHC centers in Lebanon to use eHealth tools. A self-administered questionnaire was used to assess participants' socio-demographics, computer use, literacy, and access, and participants' readiness for eHealth implementation (appropriateness, management support, change efficacy, personal beneficence). The study included primary healthcare providers (physicians, nurses, other providers) working in 22 PHC centers distributed across Lebanon. Descriptive and bivariate analyses (ANOVA, independent t-test, Kruskal Wallis, Tamhane's T2) were used to compare participant characteristics to the level of readiness for the implementation of eHealth. Of the 541 questionnaires, 213 were completed (response rate: 39.4 %). The majority of participants were physicians (46.9 %), and nurses (26.8 %). Most physicians (54.0 %), nurses (61.4 %), and other providers (50.9 %) felt comfortable using computers, and had access to computers at their PHC center (physicians: 77.0 %, nurses: 87.7 %, others: 92.5 %). Frequency of computer use varied. The study found a significant difference for personal beneficence, management support, and change efficacy among different healthcare providers, and relative to participants' level of comfort using computers. There was a significant difference by level of comfort using computers and appropriateness. A significant difference was also found between those with access to computers in relation to personal beneficence and change efficacy; and between frequency of computer use and change efficacy. The implementation of eHealth cannot be achieved without the readiness of healthcare providers. This study demonstrates that the majority of healthcare providers at PHC centers across Lebanon are ready for eHealth implementation. The findings of this study can be considered by decision makers to enhance and scale-up the use of eHealth in PHC centers nationally. Efforts should be directed towards capacity building for healthcare providers.
Selected Papers of the Southeastern Writing Center Association.
ERIC Educational Resources Information Center
Roberts, David H., Ed.; Wolff, William C., Ed.
Addressing a variety of concerns of writing center directors and staff, directors of freshman composition, and English department chairs, the papers in this collection discuss writing center research and evaluation, writing center tutors, and computers in the writing center. The titles of the essays and their authors are as follows: (1) "Narrative…
General view of a Space Shuttle Main Engine (SSME) mounted ...
General view of a Space Shuttle Main Engine (SSME) mounted on an SSME engine handler, taken in the SSME Processing Facility at Kennedy Space Center. The most prominent features of the engine assembly in this view are the Low-Pressure Oxidizer Turbopump Discharge Duct looping around the right side of the engine assembly then turning in and connecting to the High-Pressure Oxidizer Turbopump. The sphere in the approximate center of the assembly is the POGO System Accumulator, the Engine Controller is located on the bottom and slightly left of the center of the Engine Assembly in this view. - Space Transportation System, Space Shuttle Main Engine, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
The Hospital-Based Drug Information Center.
ERIC Educational Resources Information Center
Hopkins, Leigh
1982-01-01
Discusses the rise of drug information centers in hospitals and medical centers, highlighting staffing, functions, typical categories of questions received by centers, and sources used. An appendix of drug information sources included in texts, updated services, journals, and computer databases is provided. Thirteen references are listed. (EJS)
12. CONTROL PANELS, WEST SIDE (LEFT & RIGHT), MAIN FLOOR: ...
12. CONTROL PANELS, WEST SIDE (LEFT & RIGHT), MAIN FLOOR: CENTER OF CLUSTERS, TOP BOX: MEGAWATT METER; CENTER OF CLUSTERS, LOWER THREE BOXES: AMPERE METERS; LEFT SIDE OF CLUSTERS: VOLTAGE CHART RECORDER; RIGHT SIDE OF CLUSTERS: RECLOSE RELAY; CENTER UNDER CLUSTERS: TESTING SWITCHES; BELOW TESTING SWITCHES: BREAKER SWITCHES - Bonneville Power Administration South Bank Substation, I-84, South of Bonneville Dam Powerhouse, Bonneville, Multnomah County, OR
NASA Astrophysics Data System (ADS)
Salamunićcar, Goran; Lončarić, Sven
Crater detection algorithms (CDAs) are an important subject of recent scientific research, as evident from the numerous recent publications in the field [ASR, 42 (1), 6-19]. In our previous work: (1) all the craters from the major currently available manually assembled catalogues have been merged into the catalogue with 57633 known Martian impact-craters [PSS, 56 (15), 1992-2008]; and (2) the CDA (developed to search for still uncatalogued impact-craters using 1/128° MOLA data) has been used to extend GT-57633 catalogue with 57592 additional craters resulting in GT-115225 catalog [GRS, 48 (5), in press, doi:10.1109/TGRS.2009.2037750]. On the other hand, the most complete catalog for Moon is the Morphological catalog of Lunar craters [edited by V. V. Shevchenko], which includes information on 14923 craters larger than 10km, visible on the lunar nearside and farside. This was the main motivation for application of our CDA to newly available Lunar SELENE LALT data. However, one of the main differences between MOLA and LALT data is the highest available resolution, wherein MOLA is available in 1/128° and LALT in 1/16° . The consequence is that only the largest craters can be detected using LALT dataset. However, this is still an excellent opportunity for further work on CDA in order to prepare it for forthcoming LRO LOLA data (which is expected to be in even better resolution than MOLA). The importance is in the fact that morphologically Martian and Lunar craters are not the same. Therefore, it is important to use the dataset for Moon in order to work on the CDA which is meant for detection of Lunar craters as well. In order to overcome the problem of currently available topography data in low resolution only, we particularly concentrated our work on the CDA's capability to detect very small craters relative to available dataset (up to the extreme case wherein the radius is as small as only two pixels). For this purpose, we improved the previous CDA with a new algorithm for sub-pixel interpolation of elevation samples, before subsequent computations. For elevation samples on larger distances from the crater's center, linear interpolation was used in order to speed-up the computations. For samples closer to the crater's center, the elevation value at the crater's center and relative sub-pixel distance to the selected elevation sample is additionally taken into account. The purpose is to compute the most realistic values for estimated elevation at a selected point. The results are, according to the initial visual evaluation, that numerous craters were successfully detected using SELENE LALT data.
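The abstract's sub-pixel interpolation of elevation samples can be illustrated with plain bilinear interpolation on a gridded DEM, as in the Python sketch below; the authors' scheme additionally weights in the crater-center elevation for samples close to the center, which is not reproduced here. The tiny 2 x 2 grid and query point are hypothetical.

```python
import numpy as np

def elevation_at(dem, x, y):
    """Bilinear sub-pixel interpolation of a gridded DEM at fractional (x, y).
    dem is indexed as dem[row, col]; x runs along columns, y along rows."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, dem.shape[1] - 1), min(y0 + 1, dem.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * dem[y0, x0] + fx * dem[y0, x1]
    bottom = (1 - fx) * dem[y1, x0] + fx * dem[y1, x1]
    return (1 - fy) * top + fy * bottom

dem = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(elevation_at(dem, 0.5, 0.5))  # 15.0, midway between the four surrounding samples
```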
Forecasting database for the tsunami warning regional center for the western Mediterranean Sea
NASA Astrophysics Data System (ADS)
Gailler, A.; Hebert, H.; Loevenbruck, A.; Hernandez, B.
2010-12-01
Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed, but they present a challenge to run in real-time, partly due to computational limitations and also to a lack of detailed knowledge on the earthquake rupture parameters. Through the establishment of the tsunami warning regional center for the NE Atlantic and western Mediterranean Sea, the CEA is especially in charge of rapidly providing a map with uncertainties showing zones in the main axis of energy at the Mediterranean scale. The strategy is based initially on a pre-computed tsunami scenario database, as source parameters available a short time after an earthquake occurs are preliminary and may be somewhat inaccurate. Existing numerical models are good enough to provide useful guidance for warning structures to be quickly disseminated. When an event occurs, an appropriate variety of offshore tsunami propagation scenarios may be recalled through an automatic interface by combining pre-computed propagation solutions (single or multiple sources). This approach would provide quick estimates of tsunami offshore propagation, and aid hazard assessment and evacuation decision-making. As numerical model accuracy is inherently limited by errors in bathymetry and topography, and as inundation map calculation is more complex and expensive in terms of computational time, only tsunami offshore propagation modeling will be included in the forecasting database, using a single sparse bathymetric computation grid for the numerical modeling. Because of the large variability in the mechanisms of tsunamigenic earthquakes, all possible magnitudes cannot be represented in the scenario database. In principle, an infinite number of tsunami propagation scenarios can be constructed by linear combinations of a finite number of pre-computed unit scenarios. The whole notion of a pre-computed forecasting database also requires a historical earthquake and tsunami database, as well as an up-to-date seismotectonic database including fault geometry and a zonation based on seismotectonic synthesis of source zones and tsunamigenic faults. Our forecast strategy is thus based on a unit source function methodology, whereby the model runs are combined and scaled linearly to produce any composite tsunami propagation solution. Each unit source function is equivalent to a tsunami generated by a Mo 1.75E+19 N.m earthquake (Mw ~6.8) with a rectangular fault 25 km by 20 km in size and 1 m in slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean basin. The number of unit functions involved varies with the magnitude of the desired composite solution, and the combined waveheights are multiplied by a given scaling factor to produce the new arbitrary scenario. Some test-case examples are presented (e.g., Boumerdès 2003 [Algeria, Mw 6.8], Djijel 1856 [Algeria, Mw 7.2], Ligure 1887 [Italy, Mw 6.5-6.7]).
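A minimal sketch of the unit-source combination described above: pre-computed waveheight grids for adjacent unit faults are summed and multiplied by a single scaling factor to build a composite scenario. The 3 x 3 grids, unit-source identifiers, and the scaling factor of 2.5 are invented for illustration; real grids would cover the offshore computation domain.

```python
import numpy as np

def composite_waveheights(unit_solutions, selected_ids, scaling_factor):
    """Sum pre-computed unit-source waveheight grids and apply one scaling factor."""
    combined = np.zeros_like(unit_solutions[selected_ids[0]])
    for uid in selected_ids:
        combined += unit_solutions[uid]
    return scaling_factor * combined

# Hypothetical offshore waveheight grids (meters) for two adjacent 25 km x 20 km unit sources,
# each corresponding to a Mw ~6.8 rupture with 1 m of slip.
unit = {
    "A1": np.array([[0.02, 0.05, 0.03], [0.01, 0.04, 0.02], [0.00, 0.01, 0.01]]),
    "A2": np.array([[0.01, 0.03, 0.05], [0.00, 0.02, 0.04], [0.00, 0.01, 0.02]]),
}
# A larger scenario built from both unit faults with 2.5 m of slip (scaling factor 2.5).
print(composite_waveheights(unit, ["A1", "A2"], scaling_factor=2.5))
```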
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Singularity: Scientific containers for mobility of compute
Kurtzer, Gregory M.; Bauer, Michael W.
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.
Wright, Robert J; Zhang, Wei; Yang, Xinzheng; Fasulo, Meg; Tilley, T Don
2012-01-07
Proposed electrocatalytic proton reduction intermediates of hydrogenase mimics were synthesized, observed, and studied computationally. A new mechanism for H(2) generation appears to involve Fe(2)(CO)(6)(1,2-S(2)C(6)H(4)) (3), the dianions {[1,2-S(2)C(6)H(4)][Fe(CO)(3)(μ-CO)Fe(CO)(2)](2-) (3(2-)), the bridging hydride {[1,2-S(2)C(6)H(4)][Fe(CO)(3)(μ-CO)(μ-H)Fe(CO)(2)]}(-), 3H(-)(bridging), and the terminal hydride 3H(-)(term-stag), {[1,2-S(2)C(6)H(4)][HFe(CO)(3)Fe(CO)(3)]}(-), as intermediates. The dimeric sodium derivative of 3(2-), {[Na(2)(THF)(OEt(2))(3)][3(2-)]}(2) (4) was isolated from reaction of Fe(2)(CO)(6)(1,2-S(2)C(6)H(4)) (3) with excess sodium and was characterized by X-ray crystallography. It possesses a bridging CO and an unsymmetrically bridging dithiolate ligand. Complex 4 reacts with 4 equiv. of triflic or benzoic acid (2 equiv. per Fe center) to generate H(2) and 3 in 75% and 60% yields, respectively. Reaction of 4 with 2 equiv. of benzoic acid generated two hydrides in a 1.7 : 1 ratio (by (1)H NMR spectroscopy). Chemical shift calculations on geometry optimized structures of possible hydride isomers strongly suggest that the main product, 3H(-)(bridging), possesses a bridging hydride ligand, while the minor product is a terminal hydride, 3H(-)(term-stag). Computational studies support a catalytic proton reduction mechanism involving a two-electron reduction of 3 that severs an Fe-S bond to generate a dangling thiolate and an electron rich Fe center. The latter iron center is the initial site of protonation, and this event is followed by protonation at the dangling thiolate to give the thiol thiolate [Fe(2)H(CO)(6)(1,2-SHSC(6)H(4))]. This species then undergoes an intramolecular acid-base reaction to form a dihydrogen complex that loses H(2) and regenerates 3.
Computer Center: Setting Up a Microcomputer Center--1 Person's Perspective.
ERIC Educational Resources Information Center
Duhrkopf, Richard, Ed.; Collins, Michael, A. J., Ed.
1988-01-01
Considers eight components involved in setting up a microcomputer center for use with college classes. Discussions include hardware, software, physical facility, furniture, technical support, personnel, continuing financial expenditures, and security. (CW)
4. Overall view of complex. Foundry (MN-99-B) at center. Main ...
4. Overall view of complex. Foundry (MN-99-B) at center. Main section of roundhouse (MN-99-A) at left. Machine shop section of roundhouse in center behind foundry. East end of air brake shop section of roundhouse to right of machine shop. Top of sand tower (MN-99-E) just visible above main section of roundhouse at far left. Photograph taken from second floor of office (MN-99-D). View to south. - Duluth & Iron Range Rail Road Company Shops, Southwest of downtown Two Harbors, northwest of Agate Bay, Two Harbors, Lake County, MN
Synthesis and characterization of the first main group oxo-centered trinuclear carboxylate
NASA Technical Reports Server (NTRS)
Duraj, Stan A.
1994-01-01
The synthesis and structural characterization of the first main group oxo-centered, trinuclear carboxylato-bridged species is reported, namely (Ga3(μ3-O)(μ-O2CC6H5)6(4-Mepy)3)GaCl4·4-Mepy (compound 1), where 4-Mepy is 4-methylpyridine. Compound 1 is a main group example of a well-established class of complexes, referred to as 'basic carboxylates' of the general formula (M3(μ3-O)(μ-O2CR)6L3)(+), previously observed only for transition metals.
NASA Technical Reports Server (NTRS)
Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert
2004-01-01
A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.
Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Gait Rehabilitation Systems
Castermans, Thierry; Duvinage, Matthieu; Cheron, Guy; Dutoit, Thierry
2014-01-01
In the last few years, significant progress has been made in the field of walk rehabilitation. Motor cortex signals in bipedal monkeys have been interpreted to predict walk kinematics. Epidural electrical stimulation in rats and in one young paraplegic has been realized to partially restore motor control after spinal cord injury. However, these experimental trials are far from being applicable to all patients suffering from motor impairments. Therefore, it is thought that simpler rehabilitation systems are desirable in the meantime. The goal of this review is to describe and summarize the progress made in the development of non-invasive brain-computer interfaces dedicated to motor rehabilitation systems. In the first part, the main principles of human locomotion control are presented. The paper then focuses on the mechanisms of supra-spinal centers active during gait, including results from electroencephalography, functional brain imaging technologies [near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI), positron-emission tomography (PET), single-photon emission-computed tomography (SPECT)] and invasive studies. The first brain-computer interface (BCI) applications to gait rehabilitation are then presented, with a discussion about the different strategies developed in the field. The challenges facing future systems are identified and discussed. Finally, we present some proposals to address these challenges, in order to contribute to the improvement of BCI for gait rehabilitation. PMID:24961699
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pete Beckman and Ian Foster
Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
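To make the computational core of such a model concrete, the sketch below steps a 1-D, linearized, depth-averaged shallow-water system on a staggered grid with NumPy. It is illustrative only and not ATFM code (ATFM is non-linear, two-dimensional, and nested); the grid size, depth, and initial hump are invented values. A parallel version of this loop is what a code-parallelization effort targets: each rank would own a sub-domain and exchange one-cell halos before every step.

```python
import numpy as np

# Minimal 1-D linearized shallow-water step on a staggered grid
# (illustrative only; ATFM itself is non-linear, 2-D, and nested).
g, H = 9.81, 4000.0              # gravity, ocean depth [m]
nx, dx = 1000, 1000.0            # grid cells, spacing [m]
dt = 0.5 * dx / np.sqrt(g * H)   # CFL-limited time step

h = np.exp(-((np.arange(nx) - nx / 2) * dx / 50e3) ** 2)  # initial 1 m hump
u = np.zeros(nx + 1)                                      # velocities at cell faces

def step(h, u):
    """Forward-backward update: momentum equation first, then continuity."""
    u[1:-1] -= g * dt / dx * (h[1:] - h[:-1])   # interior faces only (walls stay at 0)
    h[:]    -= H * dt / dx * (u[1:] - u[:-1])
    return h, u

for _ in range(2000):
    h, u = step(h, u)

# A parallel version would split `h` and `u` into per-rank sub-domains and
# exchange one-cell halos (e.g. with MPI Sendrecv) before every step.
```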
NASA Technical Reports Server (NTRS)
Pitts, William C; Nielsen, Jack N; Kaattari, George E
1957-01-01
A method is presented for calculating the lift and centers of pressure of wing-body and wing-body-tail combinations at subsonic, transonic, and supersonic speeds. A set of design charts and a computing table are presented which reduce the computations to routine operations. Comparison between the estimated and experimental characteristics for a number of wing-body and wing-body-tail combinations shows correlation to within ±10 percent on lift and to within about ±0.02 of the body length on center of pressure.
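As a generic illustration of the kind of composition the computing table automates (not the NACA chart method itself), the combined center of pressure of a combination follows from the lift-weighted moments of its components; the component values below are hypothetical.

```python
def combined_center_of_pressure(components):
    """Lift-weighted center of pressure of a wing-body(-tail) combination.

    `components` is a list of (lift, x_cp) pairs, where x_cp is measured
    from the body nose as a fraction of body length.  Illustrative only;
    the NACA report obtains the component contributions from design charts.
    """
    total_lift = sum(L for L, _ in components)
    return sum(L * x for L, x in components) / total_lift

# Hypothetical component values (wing + interference, body, tail):
print(combined_center_of_pressure([(0.70, 0.45), (0.20, 0.10), (0.10, 0.90)]))
```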
The Role of Computers in Research and Development at Langley Research Center
NASA Technical Reports Server (NTRS)
Wieseman, Carol D. (Compiler)
1994-01-01
This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.
Mass storage system experiences and future needs at the National Center for Atmospheric Research
NASA Technical Reports Server (NTRS)
Olear, Bernard T.
1991-01-01
A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. MSS needs for future computing environments are discussed.
Traffic flow collection wireless sensor network node for intersection light control
NASA Astrophysics Data System (ADS)
Li, Xu; Li, Xue
2011-10-01
Wireless sensor networks (WSNs) are expected to be deployed at intersections to monitor traffic flow continuously, and the monitoring data can be used as the foundation of traffic light control. In this paper, a WSN based on the ZigBee protocol for monitoring traffic flow is proposed. The structure, hardware, and work flow of the WSN nodes are designed. The CC2431 from Texas Instruments is chosen as the main computational and transmission unit, and the CC2591 as the amplification unit. A stability experiment and an actual-environment experiment are described at the end of the paper. The results of the experiments show that the WSN has the ability to collect traffic flow information quickly and transmit the data to the processing center in real time.
A secure communication using cascade chaotic computing systems on clinical decision support.
Koksal, Ahmet Sertol; Er, Orhan; Evirgen, Hayrettin; Yumusak, Nejat
2016-06-01
Clinical decision support systems (C-DSS) provide supportive tools to the expert for the determination of the disease. Today, many of the support systems, which have been developed for a better and more accurate diagnosis, have reached a dynamic structure due to artificial intelligence techniques. However, in cases when important diagnosis studies should be performed in secret, a secure communication system is required. In this study, secure communication of a DSS is examined through a developed double layer chaotic communication system. The developed communication system consists of four main parts: random number generator, cascade chaotic calculation layer, PCM, and logical mixer layers. Thanks to this system, important patient data created by DSS will be conveyed to the center through a secure communication line.
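A toy sketch of the general idea, with invented seeds and parameters and no claim to match the authors' four-layer design: two cascaded logistic maps generate a keystream that is mixed with the PCM payload by XOR, and a receiver holding the same seeds regenerates the keystream to recover the data.

```python
import numpy as np

def cascade_keystream(n, x=0.31, y=0.47, r1=3.99, r2=3.97):
    """Toy keystream from two cascaded logistic maps (first map perturbs the second)."""
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r1 * x * (1.0 - x)
        y = r2 * y * (1.0 - y)
        y = (y + x) % 1.0            # cascade coupling
        ks[i] = int(y * 256)         # quantize the chaotic value to one byte
    return ks

# "PCM" payload (here simply the bytes of a message) mixed with the keystream by XOR.
message = np.frombuffer(b"diagnosis: class 2", dtype=np.uint8)
cipher = message ^ cascade_keystream(len(message))
# The receiver regenerates the identical keystream from the shared seeds/parameters.
plain = cipher ^ cascade_keystream(len(cipher))
assert plain.tobytes() == b"diagnosis: class 2"
```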
ERIC Educational Resources Information Center
Insolia, Gerard
This document contains course outlines in computer-aided manufacturing developed for a business-industry technology resource center for firms in eastern Pennsylvania by Northampton Community College. The four units of the course cover the following: (1) introduction to computer-assisted design (CAD)/computer-assisted manufacturing (CAM); (2) CAM…
NASA Technical Reports Server (NTRS)
Harrison, Cecil A.
1986-01-01
The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.
Mind Transplants Or: The Role of Computer Assisted Instruction in the Future of the Library.
ERIC Educational Resources Information Center
Lyon, Becky J.
Computer assisted instruction (CAI) may well represent the next phase in the involvement of the library or learning resources center with media and the educational process. The Lister Hill Center Experimental CAI Network was established in July, 1972, on the recommendation of the National Library of Medicine, to test the feasibility of sharing CAI…
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.
ERIC Educational Resources Information Center
Santoso, Harry B.; Batuparan, Alivia Khaira; Isal, R. Yugo K.; Goodridge, Wade H.
2018-01-01
Student Centered e-Learning Environment (SCELE) is a Moodle-based learning management system (LMS) that has been modified to enhance learning within a computer science department curriculum offered by the Faculty of Computer Science of large public university in Indonesia. This Moodle provided a mechanism to record students' activities when…
ERIC Educational Resources Information Center
Skowronski, Steven D.
This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…
ERIC Educational Resources Information Center
Buckley, Elizabeth; Johnston, Peter
In February 1977, computer assisted instruction (CAI) was introduced to the Great Neck Adult Learning Centers (GNALC) to promote greater cognitive and affective growth of educationally disadvantaged adults. The project expanded to include not only adult basic education (ABE) students studying in the learning laboratory, but also ABE students…
The Development of a Robot-Based Learning Companion: A User-Centered Design Approach
ERIC Educational Resources Information Center
Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong
2015-01-01
A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…
CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.
Ferčec, Brigita; Mahdi, Adam
2013-01-01
Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.
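For readers unfamiliar with the machinery, computations of this kind reduce to polynomial ideal manipulations such as Gröbner bases. A minimal, unrelated example using SymPy (an assumed tool, not the authors' software) is shown below.

```python
# Illustrative only: a tiny Gröbner-basis computation with SymPy, of the kind
# that underlies center-variety and cyclicity calculations (not the actual
# Bautin-ideal computation from the paper).
from sympy import symbols, groebner

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order="lex")
print(G.exprs)   # lexicographic Gröbner basis of the ideal <x^2 + y^2 - 1, x*y - 1>
```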
About High-Performance Computing at NREL
ERIC Educational Resources Information Center
Tsai, Chia-Wen; Shen, Pei-Di; Lin, Rong-An
2015-01-01
This study investigated, via quasi-experiments, the effects of student-centered project-based learning with initiation (SPBL with Initiation) on the development of students' computing skills. In this study, 96 elementary school students were selected from four class sections taking a course titled "Digital Storytelling" and were assigned…
EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.
HPCCP/CAS Workshop Proceedings 1998
NASA Technical Reports Server (NTRS)
Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)
1999-01-01
This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.
Dynamical and Radiative Modeling of Sagittarius A*
NASA Astrophysics Data System (ADS)
Shcherbakov, Roman V.
2011-09-01
Sgr A* in our Galactic Center is the closest supermassive black hole (SMBH) with the largest event horizon angular size. Most other SMBHs are likely in the same dormant low-luminosity accretion state as Sgr A*. Thus, the important physical effects in the lives of BHs can be best observed and studied in our Galactic Center. One of these effects is electron heat conduction. Conduction may be the main reason why Sgr A* is so dramatically underluminous: it transfers heat outwards from the inner flow and unbinds the outer flow, quenching the accretion. In Chapter 3 I build a realistic model of accretion with conduction, which incorporates feeding by stellar winds. In a model with accretion rate < 1% of the naive Bondi estimate I achieve agreement of the X-ray surface brightness profile and Faraday rotation measure with observations. An earlier model proposed in Chapter 2 with adiabatic accretion of turbulent magnetized medium cannot be tweaked to match the observations. Its accretion rate appears too large, so the turbulent magnetic field cannot stop gas from falling in. The low accretion rate leads to a peculiar radiation pattern from near the BH: cyclo-synchrotron polarized radiation is observed in radio/sub-mm. Since it comes from several Schwarzschild radii, the BH spin can be determined, when we overcome all modeling challenges. I fit the average observed radiation spectrum with a theoretical spectrum, which is computed by radiative transfer over a simulation-based model. Relevant plasma effects responsible for the observed polarization state are accurately computed for thermal plasma in Chapter 4. The prescription of how to perform the correct general relativistic polarized radiative transfer is elaborated in Chapter 5. Application of this technique to three-dimensional general relativistic magnetohydrodynamic numerical simulations is reported in Chapter 6. The main results of the analysis are that the spin inclination angle is estimated to lie within a narrow range, θ_est = 50°–59°, and the most probable value of the BH spin is a* = 0.9. I believe the researched topics will play a central role in future modeling of typical SMBH accretion and will lead to effective ways to determine the spins of these starving eaters. Computations of plasma effects reported here will also find applications when comparing models of jets to observations.
PCDS as a tool in teaching and research at the University of Michigan
NASA Technical Reports Server (NTRS)
Abreu, V.
1986-01-01
The Space Physics Research Laboratory's (SPRL) use of the Pilot Climate Data System (PCDS) is discussed. For this purpose, a computer center was established to provide the hardware and software necessary to fully utilize existing data bases for research and teaching purposes. A schematic of the SPRL network is given. The core of the system consists of two VAX 11/750s and a VAX 8600, networked through ETHERNET to several LSI 11/23 microprocessors. Much of the system is used for external communications with major networks and data centers. A VAX 11/750 provides DECNET services through the SPAN network to the PCDS. A functional diagram of PCDS usage is given. The browsing capabilities of the PCDS are used to generate data files, which are later transferred to the SPRL center for further data manipulation and display. This mode of operation for classroom instruction will be used to effectively use terminals and to simplify usage of the data base. The Atmosphere Explorer data base has been used successfully in a similar manner in courses related to the thermosphere and ionosphere. The main motivation to access the PCDS was to complement research efforts related to the High Resolution Doppler Imager (HRDI), to be flown on the Upper Atmosphere Research Satellite (UARS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samios,N.P.
The ninth evaluation of the RIKEN BNL Research Center (RBRC) took place on Nov. 17-18, 2008, at Brookhaven National Laboratory. The members of the Scientific Review Committee (SRC) were Dr. Wit Busza (Chair), Dr. Miklos Gyulassy, Dr. Akira Masaike, Dr. Richard Milner, Dr. Alfred Mueller, and Dr. Akira Ukawa. We are pleased that Dr. Yasushige Yano, the Director of the Nishina Institute of RIKEN, Japan, participated in this meeting, both in informing the committee of the activities of the Nishina Institute and the role of RBRC and as an observer of this review. In order to illustrate the breadth and scope of the RBRC program, each member of the Center made a presentation on his/her research efforts. This encompassed three major areas of investigation: theoretical, experimental, and computational physics. In addition the committee met privately with the fellows and postdocs to ascertain their opinions and concerns. Although the main purpose of this review is a report to RIKEN Management (Dr. Ryoji Noyori, RIKEN President) on the health, scientific value, management and future prospects of the Center, the RBRC management felt that a compendium of the scientific presentations is of sufficient quality and interest to warrant a wider distribution. Therefore we have made this compilation and present it to the community for its information and enlightenment.
NASA Astrophysics Data System (ADS)
Gallagher, L.; Morse, M.; Maxwell, R. M.
2017-12-01
The Integrated GroundWater Modeling Center (IGWMC) at Colorado School of Mines has, over the past three years, developed a community outreach program focusing on hydrologic science education, targeting K-12 teachers and students, and providing experiential learning for undergraduate and graduate students. During this time, the programs led by the IGWMC reached approximately 7500 students, teachers, and community members along the Colorado Front Range. An educational campaign of this magnitude for a small (2 full-time employees, 4 PIs) research center required restructuring and modularizing of the outreach strategy. We refined our approach to include three main "modules" of delivery. First: grassroots education delivery in the form of K-12 classroom visits, science fairs, and teacher workshops. Second: content development in the form of lesson plans for K-12 classrooms and STEM camps, hands-on physical and computer model activities, and long-term citizen science partnerships. Lastly: providing education/outreach experiences for undergraduate and graduate student volunteers, training them via a 3-credit honors course, and instilling the importance of effective science communication skills. Here we present specific case studies and examples of the successes and failures of our three-pronged system, future developments, and suggestions for entities newly embarking on an earth science education outreach campaign.
Computer and online health information literacy among Belgrade citizens aged 66-89 years.
Gazibara, Tatjana; Kurtagic, Ilma; Kisic-Tepavcevic, Darija; Nurkovic, Selmina; Kovacevic, Nikolina; Gazibara, Teodora; Pekmezovic, Tatjana
2016-06-01
Computer users over 65 years of age in Serbia are rare. The purpose of this study was to (i) describe main demographic characteristics of computer users older than 65; (ii) evaluate their online health information literacy and (iii) assess factors associated with computer use in this population. Persons above 65 years of age were recruited at the Community Health Center 'Vračar' in Belgrade from November 2012 to January 2013. Data were collected after medical checkups using a questionnaire. Of 480 persons who were invited to participate 354 (73.7%) agreed to participate, while 346 filled in the questionnaire (72.1%). A total of 70 (20.2%) older persons were computer users (23.4% males vs. 17.7% females). Of those, 23.7% explored health-related web sites. The majority of older persons who do not use computers reported that they do not have a reason to use a computer (76.5%), while every third senior (30.4%) did not own a computer. Predictors of computer use were being younger [odds ratio (OR) = 2.14, 95% confidence interval (CI) 1.30-4.04; p = 0.019], having less members of household (OR = 2.97, 95% CI 1.45-6.08; p = 0.003), being more educated (OR = 3.53, 95% CI 1.88-6.63; p = 0.001), having higher income (OR = 2.31, 95% CI 1.17-4.58; p = 0.016) as well as fewer comorbidities (OR = 0.42, 95% CI 0.23-0.79; p = 0.007). Being male was independent predictor of online health information use at the level of marginal significance (OR = 4.43, 95% CI 1.93-21.00; p = 0.061). Frequency of computer and Internet use among older adults in Belgrade is similar to other populations. Patterns of Internet use as well as non-use demonstrate particular socio-cultural characteristics. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Idiopathic Interstitial Pneumonia
Flaherty, Kevin R.; Andrei, Adin-Cristian; King, Talmadge E.; Raghu, Ganesh; Colby, Thomas V.; Wells, Athol; Bassily, Nadir; Brown, Kevin; du Bois, Roland; Flint, Andrew; Gay, Steven E.; Gross, Barry H.; Kazerooni, Ella A.; Knapp, Robert; Louvar, Edmund; Lynch, David; Nicholson, Andrew G.; Quick, John; Thannickal, Victor J.; Travis, William D.; Vyskocil, James; Wadenstorer, Frazer A.; Wilt, Jeffrey; Toews, Galen B.; Murray, Susan; Martinez, Fernando J.
2007-01-01
Rationale: Treatment and prognoses of diffuse parenchymal lung diseases (DPLDs) varies by diagnosis. Obtaining a uniform diagnosis among observers is difficult. Objectives: Evaluate diagnostic agreement between academic and community-based physicians for patients with DPLDs, and determine if an interactive approach between clinicians, radiologists, and pathologists improved diagnostic agreement in community and academic centers. Methods: Retrospective review of 39 patients with DPLD. A total of 19 participants reviewed cases at 2 community locations and 1 academic location. Information from the history, physical examination, pulmonary function testing, high-resolution computed tomography, and surgical lung biopsy was collected. Data were presented in the same sequential fashion to three groups of physicians on separate days. Measurements and Main Results: Each observer's diagnosis was coded into one of eight categories. A κ statistic allowing for multiple raters was used to assess agreement in diagnosis. Interactions between clinicians, radiologists, and pathologists improved interobserver agreement at both community and academic sites; however, final agreement was better within academic centers (κ = 0.55–0.71) than within community centers (κ = 0.32–0.44). Clinically significant disagreement was present between academic and community-based physicians (κ = 0.11–0.56). Community physicians were more likely to assign a final diagnosis of idiopathic pulmonary fibrosis compared with academic physicians. Conclusions: Significant disagreement exists in the diagnosis of DPLD between physicians based in communities compared with those in academic centers. Wherever possible, patients should be referred to centers with expertise in diffuse parenchymal lung disorders to help clarify the diagnosis and provide suggestions regarding treatment options. PMID:17255566
NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report
NASA Technical Reports Server (NTRS)
Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ
2013-01-01
The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities
ED adds business center to wait area.
2007-10-01
Providing your patients with Internet access in the waiting area can do wonders for their attitudes and make them much more understanding of long wait times. What's more, it doesn't take a fortune to create a business center. The ED at Florida Hospital Celebration (FL) Health made a world of difference with just a couple of computers and a printer. Have your information technology staff set the computers up to preserve the privacy of your internal computer system, and block out offensive sites. Access to medical sites can help reinforce your patient education efforts.
Application of technology developed for flight simulation at NASA. Langley Research Center
NASA Technical Reports Server (NTRS)
Cleveland, Jeff I., II
1991-01-01
In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.
Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sego, Landon H.; Marquez, Andres; Rawson, Andrew
2013-06-30
As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
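A minimal sketch of the metric itself, under the assumption that "useful work" is measured as a weighted count of tasks completed during the assessment window (the metric leaves that choice to the operator); the task counts and weights below are invented.

```python
def dcep(tasks_completed, energy_kwh, weights=None):
    """Data Center energy Productivity: useful work per unit of energy consumed.

    `tasks_completed` is a list of task counts by type and `weights` assigns a
    relative value to each type; both are assumptions of this sketch, since the
    metric's definition leaves the measure of "useful work" to the operator.
    """
    weights = weights or [1.0] * len(tasks_completed)
    useful_work = sum(w * n for w, n in zip(weights, tasks_completed))
    return useful_work / energy_kwh

# e.g. 1200 simulation jobs and 300 analysis jobs against 8500 kWh consumed:
print(dcep([1200, 300], 8500.0, weights=[1.0, 0.5]))
```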
ENVIRONMENTAL BIOINFORMATICS AND COMPUTATIONAL TOXICOLOGY CENTER
The Center activities focused on integrating developmental efforts from the various research projects of the Center, and collaborative applications involving scientists from other institutions and EPA, to enhance research in critical areas. A representative sample of specif...
Improving User Notification on Frequently Changing HPC Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuson, Christopher B; Renaud, William A
2016-01-01
Today's HPC centers' user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasting valuable compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
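A compact sketch of the approach as described, with assumed values for the curvature threshold and a simple conic least-squares fit standing in for the published estimators.

```python
import numpy as np

def pupil_center(xs, ys, curvature_threshold=0.05):
    """Estimate the pupil center from boundary points, discarding occluded arcs.

    Sketch of the approach described above: compute curvature along the ordered
    boundary, drop points whose curvature deviates from that of a smooth pupil
    edge, then fit a conic (ellipse) to the survivors by linear least squares
    and return its center.  Threshold and estimators are illustrative
    assumptions, not the published values.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    keep = curvature < curvature_threshold
    x, y = xs[keep], ys[keep]

    # Fit A x^2 + B xy + C y^2 + D x + E y = 1 by least squares, then solve
    # the gradient-zero condition for the conic's center.
    M = np.column_stack([x**2, x * y, y**2, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    det = 4 * A * C - B**2
    return ((B * E - 2 * C * D) / det, (B * D - 2 * A * E) / det)

# Synthetic test: a circular pupil boundary of radius 40 centered at (100, 120),
# with the top arc corrupted by a jagged, eyelash-like artifact.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xs, ys = 100 + 40 * np.cos(t), 120 + 40 * np.sin(t)
occluded = ys > 145
ys[occluded] += rng.normal(0, 3, occluded.sum())
print(pupil_center(xs, ys))  # close to (100, 120)
```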
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
NASA Technical Reports Server (NTRS)
Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.
1986-01-01
Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.
Bayesian Research at the NASA Ames Research Center,Computational Sciences Division
NASA Technical Reports Server (NTRS)
Morris, Robin D.
2003-01-01
NASA Ames Research Center is one of NASA's oldest centers, having started out as part of the National Advisory Committee on Aeronautics (NACA). The site, about 40 miles south of San Francisco, still houses many wind tunnels and other aviation related departments. In recent years, with the growing realization that space exploration is heavily dependent on computing and data analysis, its focus has turned more towards Information Technology. The Computational Sciences Division has expanded rapidly as a result. In this article, I will give a brief overview of some of the past and present projects with a Bayesian content. Much more than is described here goes on within the Division. The web pages at http://ic.arc.nasa.gov give more information on these, and the other Division projects.
Cyber Security: Big Data Think II Working Group Meeting
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Shaw, Derek
2015-01-01
This presentation focuses on approaches that could be used by a data computation center to identify attacks and ensure malicious code and backdoors are identified if planted in a system. The goal is to identify actionable security information from the mountain of data that flows into and out of an organization. The approaches are applicable to a big data computational center, and some must themselves use big data techniques to extract the actionable security information from the mountain of data that flows into and out of the center. The briefing covers the detection of malicious delivery sites and techniques for reducing the mountain of data so that intrusion detection information can be useful, and not hidden in a plethora of false alerts. It also looks at the identification of possible unauthorized data exfiltration.
Postdoctoral Fellow | Center for Cancer Research
The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and simulation. The ability to closely collaborate with the modern metabolomics center developed at CCR provides a unique opportunity for a postdoctoral candidate with a strong theoretical background and interest in demonstrating the incredible potential of computational approaches to solve problems from scientific disciplines and improve lives. The candidate will be given the opportunity to both construct data-driven models, as well as biologically validate the models by demonstrating the ability to predict the effects of altering tumor metabolism in laboratory and clinical settings.
Computational Modeling Develops Ultra-Hard Steel
NASA Technical Reports Server (NTRS)
2007-01-01
Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require an efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
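As a minimal illustration of the hierarchical idea (not the paper's tools or algorithms), the sketch below dispatches session requests to clusters with a first-fit policy, each cluster admitting a transceiver only if its remaining capacity covers the processing demand; names and capacities are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """One cluster of a (hypothetical) SDR-cloud data center."""
    name: str
    capacity_mops: float                 # aggregate processing capacity
    allocated: list = field(default_factory=list)

    def admit(self, session_id: str, demand_mops: float) -> bool:
        used = sum(d for _, d in self.allocated)
        if used + demand_mops <= self.capacity_mops:
            self.allocated.append((session_id, demand_mops))
            return True
        return False

def dispatch(clusters, session_id, demand_mops):
    """First-fit dispatch across clusters; a real manager would also balance
    load, latency, and inter-cluster traffic (illustrative policy only)."""
    for c in clusters:
        if c.admit(session_id, demand_mops):
            return c.name
    return None   # request blocked: no cluster has enough spare capacity

clusters = [Cluster("c0", 1000.0), Cluster("c1", 1000.0)]
for i, demand in enumerate([400.0, 450.0, 300.0, 500.0]):
    print(f"session {i} ->", dispatch(clusters, f"s{i}", demand))
```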
Argonne Out Loud: Computation, Big Data, and the Future of Cities
Catlett, Charlie
2018-01-16
Charlie Catlett, a Senior Computer Scientist at Argonne and Director of the Urban Center for Computation and Data at the Computation Institute of the University of Chicago and Argonne, talks about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
CILT2000: Ubiquitous Computing--Spanning the Digital Divide.
ERIC Educational Resources Information Center
Tinker, Robert; Vahey, Philip
2002-01-01
Discusses the role of ubiquitous and handheld computers in education. Summarizes the contributions of the Center for Innovative Learning Technologies (CILT) and describes the ubiquitous computing sessions at the CILT2000 Conference. (Author/YDS)
First-ever evening public engine test of a Space Shuttle Main Engine
2001-04-21
Thousands of people watch the first-ever evening public engine test of a Space Shuttle Main Engine at NASA's John C. Stennis Space Center. The spectacular test marked Stennis Space Center's 20th anniversary celebration of the first Space Shuttle mission.
ERIC Educational Resources Information Center
Zamora, Ramon M.
Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…
NASA Technical Reports Server (NTRS)
2004-01-01
In early 1995, NASA's Glenn Research Center (then Lewis Research Center) formed an industry-government team with several jet engine companies to develop the National Combustion Code (NCC), which would help aerospace engineers solve complex aerodynamics and combustion problems in gas turbine, rocket, and hypersonic engines. The original development team consisted of Allison Engine Company (now Rolls-Royce Allison), CFD Research Corporation, GE Aircraft Engines, Pratt and Whitney, and NASA. After the baseline beta version was established in July 1998, the team focused its efforts on consolidation, streamlining, and integration, as well as enhancement, evaluation, validation, and application. These activities, mainly conducted at NASA Glenn, led to the completion of NCC version 1.0 in October 2000. NCC version 1.0 features high-fidelity representation of complex geometry, advanced models for two-phase turbulent combustion, and massively parallel computing. Researchers and engineers at Glenn have been using NCC to provide analysis and design support for various aerospace propulsion technology development projects. NASA transfers NCC technology to external customers using non-exclusive Space Act Agreements. Glenn researchers also communicate research and development results derived from NCC's further development through publications and special sessions at technical conferences.
Rocket Engine Plume Diagnostics at Stennis Space Center
NASA Technical Reports Server (NTRS)
Tejwani, Gopal D.; Langford, Lester A.; VanDyke, David B.; McVay, Gregory P.; Thurman, Charles C.
2003-01-01
The Stennis Space Center has been at the forefront of development and application of exhaust plume spectroscopy to rocket engine health monitoring since 1989. Various spectroscopic techniques, such as emission, absorption, FTIR, LIF, and CARS, have been considered for application at the engine test stands. By far the most successful technology has been exhaust plume emission spectroscopy. In particular, its application to the Space Shuttle Main Engine (SSME) ground test health monitoring has been invaluable in various engine testing and development activities at SSC since 1989. On several occasions, plume diagnostic methods have successfully detected a problem with one or more components of an engine long before any other sensor indicated a problem. More often, they provide corroboration for a failure mode, if any occurred during an engine test. This paper gives a brief overview of our instrumentation and computational systems for rocket engine plume diagnostics at SSC. Some examples of successful application of exhaust plume spectroscopy (emission as well as absorption) to the SSME testing are presented. Our on-going plume diagnostics technology development projects and future requirements are discussed.
Thermal System Upgrade of the Space Environment Simulation Test Chamber
NASA Technical Reports Server (NTRS)
Desai, Ashok B.
1997-01-01
The paper deals with the refurbishing and upgrade of the thermal system for the existing thermal vacuum test facility, the Space Environment Simulator, at NASA's Goddard Space Flight Center. The chamber is the largest such facility at the center. This upgrade is the third phase of the long range upgrade of the chamber that has been underway for last few years. The first phase dealt with its vacuum system, the second phase involved the GHe subsystem. The paper describes the considerations of design philosophy options for the thermal system; approaches taken and methodology applied, in the evaluation of the remaining "life" in the chamber shrouds and related equipment by conducting special tests and studies; feasibility and extent of automation, using computer interfaces and Programmable Logic Controllers in the control system and finally, matching the old components to the new ones into an integrated, highly reliable and cost effective thermal system for the facility. This is a multi-year project just started and the paper deals mainly with the plans and approaches to implement the project successfully within schedule and costs.
Postgraduate Studies in the Field of HCI
NASA Astrophysics Data System (ADS)
Vainio, Teija; Surakka, Veikko; Raisamo, Roope; Räihä, Kari-Jouko; Isokoski, Poika; Väänänen-Vainio-Mattila, Kaisa; Kujala, Sari
In September of 2007, the Tampere Unit for Computer Human Interaction (TAUCHI) at the University of Tampere and the Unit of Human-Centered Technology (IHTE) at the Tampere University of Technology initiated a joint effort to increase collaboration in the field of human-technology interaction (HTI). One of the main aims was to develop higher quality education for university students and to carry out joint internationally recognized HTI research. Both research units have their own master's and postgraduate students. The focus of education at IHTE is on usability and human-centered design of interactive products and services, whereas TAUCHI focuses on human-technology interaction, developing it by harmonizing the potential of technology with human abilities, needs, and limitations. Based on our joint analysis we know now that together TAUCHI and IHTE are offering an internationally competitive master's program consisting of more than 40 basic, intermediate and advanced level courses. Although both units are partners in the national Graduate School in User-Centered Information Technology (UCIT) led by TAUCHI, we have recognized a clear need for developing and systematizing our doctoral education.
Research activities at the Center for Modeling of Turbulence and Transition
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
1993-01-01
The main research activities at the Center for Modeling of Turbulence and Transition (CMOTT) are described. The research objective of CMOTT is to improve and/or develop turbulence and transition models for propulsion systems. The flows of interest in propulsion systems can be both compressible and incompressible, three dimensional, bounded by complex wall geometries, chemically reacting, and involve 'bypass' transition. The most relevant turbulence and transition models for the above flows are one- and two-equation eddy viscosity models, Reynolds stress algebraic- and transport-equation models, pdf models, and multiple-scale models. All these models are classified as one-point closure schemes since only one-point (in time and space) turbulent correlations, such as second moments (Reynolds stresses and turbulent heat fluxes) and third moments, are involved. In computational fluid dynamics, all turbulent quantities are one-point correlations. Therefore, the study of one-point turbulent closure schemes is the focus of our turbulence research. However, other research, such as the renormalization group theory, the direct interaction approximation method, and numerical simulations are also pursued to support the development of turbulence modeling.
Transputer parallel processing at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1989-01-01
The transputer parallel processing lab at NASA Lewis Research Center (LeRC) consists of 69 processors (transputers) that can be connected into various networks for use in general purpose concurrent processing applications. The main goal of the lab is to develop concurrent scientific and engineering application programs that will take advantage of the computational speed increases available on a parallel processor over the traditional sequential processor. Current research involves the development of basic programming tools. These tools will help standardize program interfaces to specific hardware by providing a set of common libraries for applications programmers. The thrust of the current effort is in developing a set of tools for graphics rendering/animation. The applications programmer currently has two options for on-screen plotting. One option can be used for static graphics displays and the other can be used for animated motion. The option for static display involves the use of 2-D graphics primitives that can be called from within an application program. These routines perform the standard 2-D geometric graphics operations in real-coordinate space as well as allowing multiple windows on a single screen.
ERIC Educational Resources Information Center
Association for Educational Data Systems, Washington, DC.
Fifteen papers on computer centers and data processing management presented at the Association for Educational Data Systems (AEDS) 1976 convention are included in this document. The first two papers review the recent controversy for proposed licensing of data processors, and they are followed by a description of the Institute for Certification of…
37. Photograph of plan for repairs to computer room, 1958, ...
37. Photograph of plan for repairs to computer room, 1958, prepared by the Public Works Office, Underwater Sound Laboratory. Drawing on file at Caretaker Site Office, Naval Undersea Warfare Center, New London. Copyright-free. - Naval Undersea Warfare Center, Bowditch Hall, 600 feet east of Smith Street & 350 feet south of Columbia Cove, West bank of Thames River, New London, New London County, CT
ERIC Educational Resources Information Center
Vakil, Sepehr
2018-01-01
In this essay, Sepehr Vakil argues that a more serious engagement with critical traditions in education research is necessary to achieve a justice-centered approach to equity in computer science (CS) education. With CS rapidly emerging as a distinct feature of K-12 public education in the United States, calls to expand CS education are often…
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...
Thousands gather to watch a Space Shuttle Main Engine Test
2001-04-21
Approximately 13,000 people fill the grounds at NASA's John C. Stennis Space Center for the first-ever evening public engine test of a Space Shuttle Main Engine. The test marked Stennis Space Center's 20th anniversary celebration of the first Space Shuttle mission.
CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.
ERIC Educational Resources Information Center
Meyer, Edgar F., Jr.
1978-01-01
A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)
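The original 1978 program is not reproduced here, but the same task, locating chiral centers from atomic and bonding information, can be sketched with the modern RDKit toolkit (an assumed dependency; the molecule and options below are illustrative).

```python
# Hedged illustration using RDKit (assumed dependency), not the CHIRAL program:
# identify stereocenters from atomic and bonding information via CIP-style rules.
from rdkit import Chem

mol = Chem.MolFromSmiles("C[C@H](N)C(=O)O")  # alanine, given with stereochemistry
Chem.AssignStereochemistry(mol, cleanIt=True, force=True)

# Returns (atom index, CIP label) pairs for the stereocenters found.
print(Chem.FindMolChiralCenters(mol, includeUnassigned=True))
```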
Facilities | Integrated Energy Solutions | NREL
strategies needed to optimize our entire energy system. High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed
Computers and Technological Forecasting
ERIC Educational Resources Information Center
Martino, Joseph P.
1971-01-01
Forecasting is becoming increasingly automated, thanks in large measure to the computer. It is now possible for a forecaster to submit his data to a computation center and call for the appropriate program. (No knowledge of statistics is required.) (Author)
Applied Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1994-01-01
The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.
A resource-sharing model based on a repeated game in fog computing.
Sun, Yan; Zhang, Nan
2017-03-01
With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
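A toy sketch of the incentive idea, not the authors' algorithm: owners who meet their service-level agreement keep earning allocations from the crowd-funded pool in later rounds, while violators lose reputation and are excluded. All names and numbers are illustrative.

```python
# Toy repeated-game style incentive sketch (illustrative only): reward SLA
# compliance with reputation, punish violations, and drop low-reputation owners
# from the shared resource pool in later rounds.
import random

owners = {name: {"rep": 1.0, "reliability": rel}
          for name, rel in [("edge-A", 0.95), ("edge-B", 0.80), ("edge-C", 0.50)]}

for round_ in range(50):                                    # repeated rounds of task assignment
    pool = [n for n, o in owners.items() if o["rep"] > 0.2] or list(owners)
    for task in range(10):
        n = max(pool, key=lambda x: owners[x]["rep"] * random.random())  # reputation-weighted pick
        ok = random.random() < owners[n]["reliability"]     # did the owner meet the SLA?
        owners[n]["rep"] += 0.05 if ok else -0.20           # reward completion, punish violation
        owners[n]["rep"] = min(owners[n]["rep"], 2.0)

for n, o in owners.items():
    print(n, round(o["rep"], 2))
```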
Salvador-Carulla, Luis; Cloninger, C Robert; Thornicroft, Amalia; Mezzich, Juan E.
2015-01-01
Declarations are relevant tools to frame new areas in health care, to raise awareness and to facilitate knowledge-to-action. The International College on Person Centered Medicine (ICPCM) is seeking to extend the impact of the ICPCM Conference Series by producing a declaration on every main topic. The aim of this paper is to describe the development of the 2013 Geneva Declaration on Person-centered Health Research and to provide additional information on the research priority areas identified during this iterative process. There is a need for more PCM research and for the incorporation of the PCM approach into general health research. Main areas of research focus include: conceptual, terminological, and ontological issues; research to enhance the empirical evidence of PCM main components such as PCM-informed clinical communication; PCM-based diagnostic models; person-centered care and interventions; people-centered care; and research on training and curriculum development. Dissemination and implementation of the PCM knowledge base are integral to person-centered health research and shall engage currently available scientific and translational dissemination tools such as journals, events, and eHealth. PMID:26146541
Design of a robotic vehicle with self-contained intelligent wheels
NASA Astrophysics Data System (ADS)
Poulson, Eric A.; Jacob, John S.; Gunderson, Robert W.; Abbott, Ben A.
1998-08-01
The Center for Intelligent Systems has developed a small robotic vehicle named the Advanced Rover Chassis 3 (ARC 3) with six identical intelligent wheel units attached to a payload via a passive linkage suspension system. All wheels are steerable, so the ARC 3 can move in any direction while rotating at any rate allowed by the terrain and motors. Each intelligent wheel unit contains a drive motor, steering motor, batteries, and computer. All wheel units are identical, so manufacturing, programming, and spare replacement are greatly simplified. The intelligent wheel concept would allow the number and placement of wheels on the vehicle to be changed with no changes to the control system, except to list the position of all the wheels relative to the vehicle center. The task of controlling the ARC 3 is distributed between one master computer and the wheel computers. Tasks such as controlling the steering motors and calculating the speed of each wheel relative to the vehicle speed in a corner are dependent on the location of a wheel relative to the vehicle center and are processed by the wheel computers. Conflicts between the wheels are eliminated by computing the vehicle velocity control in the master computer. Various approaches to this distributed control problem, and various low-level control methods, have been explored.
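A hedged sketch of the per-wheel computation described above: each wheel computer knows only its own position in the vehicle frame and derives its steering angle and speed from the commanded body velocity and yaw rate. The wheel positions and commands below are illustrative, not ARC 3 values.

```python
# Illustrative per-wheel kinematics for an all-wheel-steered rover: the wheel's
# rigid-body velocity is the commanded body velocity plus the yaw-rate cross term.
import math

def wheel_command(wheel_xy, vx, vy, omega):
    """Return (steering_angle_rad, wheel_speed) for a wheel at position wheel_xy
    (meters, vehicle frame) given body velocity (vx, vy) in m/s and yaw rate omega in rad/s."""
    rx, ry = wheel_xy
    vwx = vx - omega * ry          # velocity of the wheel contact point, x component
    vwy = vy + omega * rx          # velocity of the wheel contact point, y component
    return math.atan2(vwy, vwx), math.hypot(vwx, vwy)

# Six wheels in a 3x2 layout (positions are placeholders, not ARC 3 dimensions)
wheels = [(x, y) for x in (-0.4, 0.0, 0.4) for y in (-0.3, 0.3)]
for w in wheels:
    ang, spd = wheel_command(w, vx=0.5, vy=0.0, omega=0.3)
    print(f"wheel {w}: steer {math.degrees(ang):6.1f} deg, speed {spd:.2f} m/s")
```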
Berkeley Lab - Materials Sciences Division
Programs include the Computational Study of Excited-State Phenomena in Energy Materials and the Center for X-ray Optics (Director: Patrick Naulleau).
Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei
2014-01-01
Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974
Virtual microscopy: merging of computer mediated communication and intuitive interfacing
NASA Astrophysics Data System (ADS)
de Ridder, Huib; de Ridder-Sluiter, Johanna G.; Kluin, Philip M.; Christiaans, Henri H. C. M.
2009-02-01
Ubiquitous computing (or Ambient Intelligence) is an upcoming technology that is usually associated with futuristic smart environments in which information is available anytime anywhere and with which humans can interact in a natural, multimodal way. However spectacular the corresponding scenarios may be, it is equally challenging to consider how this technology may enhance existing situations. This is illustrated by a case study from the Dutch medical field: central quality reviewing for pathology in child oncology. The main goal of the review is to assess the quality of the diagnosis based on patient material. The sharing of knowledge in social face-to-face interaction during such a meeting is an important advantage. At the same time there is the disadvantage that the experts from the seven Dutch academic medical centers have to travel to the review meeting and that the required logistics to collect and bring patient material and data to the meeting is cumbersome and time-consuming. This paper focuses on how this time-consuming, inefficient way of reviewing can be replaced by a virtual collaboration system by merging technology supporting Computer Mediated Collaboration and intuitive interfacing. This requires insight into the preferred way of communication and collaboration as well as knowledge about the preferred interaction style with a virtual shared workspace.
Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base
NASA Technical Reports Server (NTRS)
Mcruer, Duane T.; Myers, Thomas T.
1988-01-01
The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge, and experience elements, which guide and govern applications, are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes were also computed. A brief discussion was included on the future direction of probabilistic structural analysis.
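The probabilistic simulation can be illustrated with a minimal Monte Carlo sketch (not the Lewis codes or the multi-factor interaction equation): sample uncertain load, geometry, and strength inputs and estimate the scatter in an effective stress and a failure probability. All distributions and the response function are placeholders.

```python
# Minimal Monte Carlo sketch of probabilistic structural evaluation (illustrative
# only): propagate input uncertainties through a response function and compare
# the resulting stress scatter against an uncertain strength.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
load      = rng.normal(1.0, 0.10, n)   # normalized pressure load
thickness = rng.normal(1.0, 0.05, n)   # normalized blade thickness
strength  = rng.normal(3.0, 0.30, n)   # normalized material strength

stress = 2.2 * load / thickness        # placeholder response function
print("mean stress:", stress.mean(), " std:", stress.std())
print("P(failure) =", np.mean(stress > strength))
```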
Current state and future direction of computer systems at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Rogers, James L. (Editor); Tucker, Jerry H. (Editor)
1992-01-01
Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.
Future of Department of Defense Cloud Computing Amid Cultural Confusion
2013-03-01
enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments ... endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud ... through data center consolidation and individual Service-provided cloud computing.
Computer Needs and Computer Problems in Developing Countries.
ERIC Educational Resources Information Center
Huskey, Harry D.
A survey of the computer environment in a developing country is provided. Levels of development are considered and the educational requirements of countries at various levels are discussed. Computer activities in India, Burma, Pakistan, Brazil and a United Nations sponsored educational center in Hungary are all described. (SK/Author)
Computer Viruses. Legal and Policy Issues Facing Colleges and Universities.
ERIC Educational Resources Information Center
Johnson, David R.; And Others
Compiled by various members of the higher educational community together with risk managers, computer center managers, and computer industry experts, this report recommends establishing policies on an institutional level to protect colleges and universities from computer viruses and the accompanying liability. Various aspects of the topic are…
PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths
NASA Astrophysics Data System (ADS)
Riffel, Rogério; Borges Vale, Tibério
2011-05-01
PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.
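A small numerical sketch of the quantity PACCE computes, assuming a wavelength grid, a line flux, and a continuum estimate are already in hand; the synthetic absorption line below is illustrative.

```python
# Illustrative equivalent-width computation: W = integral of (1 - F/F_c) dlambda
# over the line window, plus the continuum value at line center.
import numpy as np

def equivalent_width(wave, flux, cont):
    """Trapezoidal estimate of the equivalent width (same units as `wave`)."""
    return np.trapz(1.0 - flux / cont, wave)

# Synthetic absorption line on a flat continuum
wave = np.linspace(6550.0, 6575.0, 500)
cont = np.ones_like(wave)
flux = cont - 0.6 * np.exp(-0.5 * ((wave - 6562.8) / 1.5) ** 2)

print("EW =", equivalent_width(wave, flux, cont), "Angstrom")
print("continuum at line center =", np.interp(6562.8, wave, cont))
```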
76 FR 1410 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...
Computers, Networks, and Desegregation at San Jose High Academy.
ERIC Educational Resources Information Center
Solomon, Gwen
1987-01-01
Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Holkenbrink, Patrick F.
1978-01-01
Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.
Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry
2015-10-01
To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.
NASA Technical Reports Server (NTRS)
Sisk, Gregory A.
1989-01-01
The high-pressure oxidizer turbopump (HPOTP) consists of two centrifugal pumps, on a common shaft, that are directly driven by a hot-gas turbine. Pump shaft axial thrust is balanced in that the double-entry main inducer/impeller is inherently balanced and the thrusts of the preburner pump and turbine are nearly equal but opposite. Residual shaft thrust is controlled by a self-compensating, non-rubbing, balance piston. Shaft hang-up must be avoided if the balance piston is to perform properly. One potential cause of shaft hang-up is contact between the Phase 2 bearing support and axial spring cartridge of the HPOTP main pump housing. The status of the bearing support/axial spring cartridge interface is investigated under current loading conditions. An ANSYS version 4.3, three-dimensional, finite element model was generated on Lockheed's VAX 11/785 computer. A nonlinear thermal analysis was then executed on the Marshall Space Flight Center Engineering Analysis Data System (EADS). These thermal results were then applied along with the interference fit and bolt preloads to the model as load conditions for a static analysis to determine the gap status of the bearing support/axial spring cartridge interface. For possible further analysis of the local regions of HPOTP main pump housing assembly, detailed ANSYS submodels were generated using I-DEAS Geomod and Supertab (Appendix A).
Structure and variability of the Western Maine Coastal Current
Churchill, J.H.; Pettigrew, N.R.; Signell, R.P.
2005-01-01
Analyses of CTD and moored current meter data from 1998 and 2000 reveal a number of mechanisms influencing the flow along the western coast of Maine. On occasions, the Eastern Maine Coastal Current extends into the western Gulf of Maine where it takes the form of a deep (order 100 m deep) and broad (order 20 km wide) southwestward flow with geostrophic velocities exceeding 20 cm s⁻¹. This is not a coastally trapped flow, however. In fields of geostrophic velocity, computed from shipboard-CTD data, the core of this current is roughly centered at the 100 m isobath and its onshore edge is no closer than 10 km from the coast. Geostrophic velocity fields also reveal a relatively shallow (order 10 m deep) baroclinic flow adjacent to the coast. This flow is also directed to the southwest and appears to be principally comprised of local river discharge. Analyses of moored current meter data reveal wind-driven modulations of the coastal flow that are consistent with expectations from simple theoretical models. However, a large fraction of the near-shore current variance does not appear to be directly related to wind forcing. Sea-surface temperature imagery, combined with analysis of the moored current meter data, suggests that eddies and meanders within the coastal flow may at times dominate the near-shore current variance. © 2005 Elsevier Ltd. All rights reserved.
7. INTERIOR, MAIN GARAGE, SOUTHERN WALL, FROM CLOSE TO WALL, ...
7. INTERIOR, MAIN GARAGE, SOUTHERN WALL, FROM CLOSE TO WALL, LOOKING SOUTH, SHOWING 'GAMEWELL' FIRE ALARM TAPE CONTROL SYSTEM (TECHNOLOGY CIRCA 1910) AT CENTER, AND ENTRY TO OFFICE AT FAR RIGHT. - Oakland Naval Supply Center, Firehouse, East of Fourth Street, between A & B Streets, Oakland, Alameda County, CA
Cost of Pre-School Education Provision.
ERIC Educational Resources Information Center
Gilder, Paula; Jardine, Paul; Guerin, Sinead
1998-01-01
This study examined the current costs of preschool education in Scotland. Eleven preschool centers were studied in order to facilitate identification of key issues and to assist in designing the main questionnaire. Study findings indicated that main issues were the extent of between-center differences, information availability, and the use of…
Sen. John C. Stennis celebrates a successful Space Shuttle Main Engine test
NASA Technical Reports Server (NTRS)
1978-01-01
Sen. John C. Stennis dances a jig on top of the Test Control Center at Stennis Space Center following the successful test of a Space Shuttle Main Engine in 1978. A staunch supporter of the National Aeronautics and Space Administration (NASA), the senior senator from DeKalb, Miss., supported the establishment of the space center in Hancock County and spoke personally with local residents who would relocate their homes to accommodate Mississippi's entry into the space age. Stennis Space Center was named for Sen. Stennis by Executive Order of President Ronald Reagan on May 20, 1988.
Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data
NASA Astrophysics Data System (ADS)
Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.
2018-03-01
One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
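The third approach, abnormal-job detection, can be illustrated with a simple hypothetical sketch (not the Moscow State University implementation): flag jobs whose monitored metrics deviate strongly from the overall job flow using a robust z-score.

```python
# Illustrative anomaly detection over per-job monitoring metrics (e.g. CPU load,
# memory use): flag jobs far outside the bulk of the job flow via a MAD-based z-score.
import numpy as np

def abnormal_jobs(metrics, threshold=3.5):
    """metrics: array of shape (n_jobs, n_metrics); returns indices of outlier jobs."""
    med = np.median(metrics, axis=0)
    mad = np.median(np.abs(metrics - med), axis=0) + 1e-12
    z = 0.6745 * (metrics - med) / mad          # robust (MAD-based) z-score
    return np.where(np.any(np.abs(z) > threshold, axis=1))[0]

rng = np.random.default_rng(1)
jobs = rng.normal([0.7, 40.0], [0.1, 8.0], size=(200, 2))   # (cpu_load, GB memory), synthetic
jobs[17] = [0.02, 250.0]                                    # one pathological job
print("abnormal job indices:", abnormal_jobs(jobs))
```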
Gastrointestinal robot-assisted surgery. A current perspective.
Lunca, Sorinel; Bouras, George; Stanescu, Alexandru Calin
2005-12-01
Minimally invasive techniques have revolutionized operative surgery. Computer aided surgery and robotic surgical systems strive to improve further on currently available minimally invasive surgery and open new horizons. Only several centers are currently using surgical robots and publishing data. In gastrointestinal surgery, robotic surgery is applied to a wide range of procedures, but is still in its infancy. Cholecystectomy, Nissen fundoplication and Heller myotomy are among the most frequently performed operations. The ZEUS (Computer Motion, Goleta, CA) and the da Vinci (Intuitive Surgical, Mountain View, CA) surgical systems are today the most advanced robotic systems used in gastrointestinal surgery. Most studies reported that robotic gastrointestinal surgery is feasible and safe, provides improved dexterity, better visualization, reduced fatigue and high levels of precision when compared to conventional laparoscopic surgery. Its main drawbacks are the absence of force feedback and extremely high costs. At this moment there are no reports to clearly demonstrate the superiority of robotics over conventional laparoscopic surgery. Further research and more prospective randomized trials are needed to better define the optimal application of this new technology in gastrointestinal surgery.
Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Liou, M. S.; Povinelli, L. A.
1993-01-01
A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping, and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown are a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
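The time-marching structure described above can be illustrated with a hedged one-dimensional sketch: a cell-centered finite-volume residual advanced to steady state by a four-stage Runge-Kutta scheme with local time-stepping. Linear advection stands in for the Navier-Stokes equations, and a first-order upwind flux stands in for AUSM; none of this is the actual solver.

```python
# 1-D model-problem sketch of four-stage Runge-Kutta pseudo-time marching on a
# cell-centered finite-volume residual (linear advection, first-order upwind flux).
import numpy as np

def residual(u, dx, a=1.0):
    """R = -(F_{i+1/2} - F_{i-1/2})/dx for du/dt + a du/dx = 0, fixed inflow u=1 at the left."""
    f = a * u                                        # flux at cell centers (a > 0, upwind)
    f_left = np.concatenate(([a * 1.0], f[:-1]))     # upwind face flux; inflow boundary value
    return -(f - f_left) / dx

def rk4_steady(u, dx, cfl=0.8, a=1.0, n_iter=2000):
    """Four-stage Runge-Kutta pseudo-time marching toward the steady state."""
    alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)             # stage coefficients
    dt = cfl * dx / a                                # time step (uniform here)
    for _ in range(n_iter):
        u0 = u.copy()
        for alpha in alphas:
            u = u0 + alpha * dt * residual(u, dx, a)
    return u

u = np.zeros(100)
print(rk4_steady(u, dx=0.01)[:5])                    # converges toward the inflow value 1.0
```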
Development of an explicit multiblock/multigrid flow solver for viscous flows in complex geometries
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Liou, M.-S.; Povinelli, L. A.
1993-01-01
A new computer program is being developed for doing accurate simulations of compressible viscous flows in complex geometries. The code employs the full compressible Navier-Stokes equations. The eddy viscosity model of Baldwin and Lomax is used to model the effects of turbulence on the flow. A cell centered finite volume discretization is used for all terms in the governing equations. The Advection Upwind Splitting Method (AUSM) is used to compute the inviscid fluxes, while central differencing is used for the diffusive fluxes. A four-stage Runge-Kutta time integration scheme is used to march solutions to steady state, while convergence is enhanced by a multigrid scheme, local time-stepping and implicit residual smoothing. To enable simulations of flows in complex geometries, the code uses composite structured grid systems where all grid lines are continuous at block boundaries (multiblock grids). Example results shown include a flow in a linear cascade, a flow around a circular pin extending between the main walls in a high aspect-ratio channel, and a flow of air in a radial turbine coolant passage.
NASA Technical Reports Server (NTRS)
Slooff, J. W.
1986-01-01
The Special Course on Aircraft Drag Prediction was sponsored by the AGARD Fluid Dynamics Panel and the von Karman Institute and presented at the von Karman Institute, Rhode-Saint-Genese, Belgium, on 20 to 23 May 1985 and at the NASA Langley Research Center, Hampton, Virginia, USA, 5 to 6 August 1985. The course began with a general review of drag reduction technology. Then the possibility of reducing skin friction through control of laminar flow and through modification of the structure of the turbulence in the boundary layer was discussed. Methods for predicting and reducing the drag of external stores, of nacelles, of fuselage protuberances, and of fuselage afterbodies were then presented, followed by discussion of transonic drag rise. The prediction of viscous and wave drag by a method matching inviscid flow calculations and boundary layer integral calculations, and the reduction of transonic drag through boundary layer control, are also discussed. This volume comprises Paper No. 9, Computational Drag Analyses and Minimization: Mission Impossible, which was not included in AGARD Report 723 (main volume).
Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.
2010-01-01
The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.
77 FR 38630 - Open Internet Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... Computer Science and Co-Founder of the Berkman Center for Internet and Society, Harvard University, is... of Technology Computer Science and Artificial Intelligence Laboratory, is appointed vice-chairperson... Jennifer Rexford, Professor of Computer Science, Princeton University Dennis Roberson, Vice Provost...
Exposure Science and the US EPA National Center for Computational Toxicology
The emerging field of computational toxicology applies mathematical and computer models and molecular biological and chemical approaches to explore both qualitative and quantitative relationships between sources of environmental pollutant exposure and adverse health outcomes. The...
NASA Technical Reports Server (NTRS)
Salmon, Ellen
1996-01-01
The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.
TOSCA calculations and measurements for the SLAC SLC damping ring dipole magnet
NASA Astrophysics Data System (ADS)
Early, R. A.; Cobb, J. K.
1985-04-01
The SLAC damping ring dipole magnet was originally designed with removable nose pieces at the ends. Recently, a set of magnetic measurements was taken of the vertical component of induction along the center of the magnet for four different pole-end configurations and several current settings. The three dimensional computer code TOSCA, which is currently installed on the National Magnetic Fusion Energy Computer Center's Cray X-MP, was used to compute field values for the four configurations at current settings near saturation. Comparisons were made for magnetic induction as well as effective magnetic lengths for the different configurations.
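The effective-magnetic-length comparison can be sketched as follows, assuming the vertical induction along the centerline is known from measurement or a TOSCA run; the field profile below is synthetic, not the SLC dipole data.

```python
# Illustrative effective magnetic length: L_eff = integral(B_y dz) / B_y(center).
import numpy as np

def effective_length(z, by, z_center=0.0):
    """Effective magnetic length (units of z) from the centerline field profile."""
    b_center = by[np.argmin(np.abs(z - z_center))]
    return np.trapz(by, z) / b_center

# Synthetic dipole profile: flat center with a soft fringe-field roll-off
z = np.linspace(-1.5, 1.5, 601)                           # meters along the beam axis
by = 1.0 / (1.0 + np.exp((np.abs(z) - 1.0) / 0.05))       # tesla (illustrative)
print("L_eff =", effective_length(z, by), "m")
```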
NASA Technical Reports Server (NTRS)
Bennett, Jerome (Technical Monitor)
2002-01-01
The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.
CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.
Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan
2017-06-24
The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, the existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the information about users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and is generally unknown before submission, which is unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences. But its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, making the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling the workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerate multiple sequence alignment. Besides, adopting the co-run computation model can maximize the entire system utilization significantly. The source code is available at https://github.com/wangvsa/CMSA.
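A hedged sketch of classic center-star center selection, not CMSA's bitmap algorithm: choose the sequence with the smallest summed distance to all the others, here using a cheap k-mer Jaccard dissimilarity in place of full pairwise alignment scores.

```python
# Illustrative center-sequence selection for the center star strategy: pick the
# sequence minimizing summed dissimilarity to the rest (k-mer Jaccard distance
# stands in for alignment scores; not CMSA's bitmap-based method).
from itertools import combinations

def kmer_set(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def center_sequence(seqs, k=8):
    """Return the index of the sequence minimizing summed k-mer dissimilarity."""
    sets = [kmer_set(s, k) for s in seqs]
    total = [0.0] * len(seqs)
    for i, j in combinations(range(len(seqs)), 2):
        union = len(sets[i] | sets[j]) or 1
        d = 1.0 - len(sets[i] & sets[j]) / union   # Jaccard distance between k-mer sets
        total[i] += d
        total[j] += d
    return min(range(len(seqs)), key=total.__getitem__)

seqs = ["ACGTACGTACGTAAGT", "ACGTACGTACGTACGT", "ACGTACGAACGTACGT", "TTTTACGTACGTACGT"]
print("center sequence index:", center_sequence(seqs))
```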
UC Merced Center for Computational Biology Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colvin, Michael; Watanabe, Masakatsu
Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of biological sciences undergraduate and graduate program that emphasized biological concepts and considered biology as an information science would have a dramatic impact in enabling the transformation of biology. UC Merced, as the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate, and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, at the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that the CCB will continue to provide support for quantitative and computational biology programs at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have continuously involved multi-institutional collaboration with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research including molecular modeling, cell biology, applied math, evolutionary biology, bioinformatics, etc. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of the CCB's original goals is to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a new summer undergraduate internship program had been established under the CCB to train biological sciences researchers in mathematically and computationally intensive methods.
By the end of summer 2010, 44 undergraduate students had gone through this program. Out of those participants, 11 students have been admitted to graduate schools and 10 more students are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.
Computing Protein-Protein Association Affinity with Hybrid Steered Molecular Dynamics.
Rodriguez, Roberto A; Yu, Lili; Chen, Liao Y
2015-09-08
Computing protein-protein association affinities is one of the fundamental challenges in computational biophysics/biochemistry. The overwhelming amount of statistics in the phase space of very high dimensions cannot be sufficiently sampled even with today's high-performance computing power. In this article, we extend a potential of mean force (PMF)-based approach, the hybrid steered molecular dynamics (hSMD) approach we developed for ligand-protein binding, to protein-protein association problems. For a protein complex consisting of two protomers, P1 and P2, we choose m (≥3) segments of P1 whose m centers of mass are to be steered in a chosen direction and n (≥3) segments of P2 whose n centers of mass are to be steered in the opposite direction. The coordinates of these m + n centers constitute a phase space of 3(m + n) dimensions (3(m + n)D). All other degrees of freedom of the proteins, ligands, solvents, and solutes are freely subject to the stochastic dynamics of the all-atom model system. Conducting SMD along a line in this phase space, we obtain the 3(m + n)D PMF difference between two chosen states: one single state in the associated state ensemble and one single state in the dissociated state ensemble. This PMF difference is the first of four contributors to the protein-protein association energy. The second contributor is the 3(m + n - 1)D partial partition in the associated state accounting for the rotations and fluctuations of the (m + n - 1) centers while fixing one of the m + n centers of the P1-P2 complex. The two other contributors are the 3(m - 1)D partial partition of P1 and the 3(n - 1)D partial partition of P2 accounting for the rotations and fluctuations of their m - 1 or n - 1 centers while fixing one of the m/n centers of P1/P2 in the dissociated state. Each of these three partial partitions can be factored exactly into a 6D partial partition in multiplication with a remaining factor accounting for the small fluctuations while fixing three of the centers of P1, P2, or the P1-P2 complex, respectively. These small fluctuations can be well-approximated as Gaussian, and every 6D partition can be reduced in an exact manner to three problems of 1D sampling, counting the rotations and fluctuations around one of the centers as being fixed. We implement this hSMD approach to the Ras-RalGDS complex, choosing three centers on RalGDS and three on Ras (m = n = 3). At a computing cost of about 71.6 wall-clock hours using 400 computing cores in parallel, we obtained the association energy, -9.2 ± 1.9 kcal/mol on the basis of CHARMM 36 parameters, which well agrees with the experimental data, -8.4 ± 0.2 kcal/mol.
Trip attraction rates of shopping centers in Northern New Castle County, Delaware.
DOT National Transportation Integrated Search
2004-07-01
This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to the ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...
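For illustration only, an ITE-style trip-attraction estimate scales a per-unit rate by the center's gross leasable area; the rate below is a placeholder, not a result of this study.

```python
# Placeholder trip-attraction arithmetic (illustrative rate, not the study's values):
# attracted trips = rate per 1,000 sq ft of gross leasable area (GLA) * GLA in thousands.
def pm_peak_trips(gla_thousand_sqft, rate_per_kgla=3.75):
    """Attracted vehicle trips in the PM peak hour, using a placeholder average rate."""
    return rate_per_kgla * gla_thousand_sqft

print(pm_peak_trips(450))   # a 450,000 sq ft center -> roughly 1,690 PM-peak trips
```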
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
None
2018-02-07
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
Computer Bits: Child Care Center Management Software Buying Guide Update.
ERIC Educational Resources Information Center
Neugebauer, Roger
1987-01-01
Compares seven center management programs used for basic financial and data management tasks such as accounting, payroll and attendance records, and mailing lists. Describes three other specialized programs and gives guidelines for selecting the best software for a particular center. (NH)
NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-09-30
NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.
ERIC Educational Resources Information Center
Grandgenett, Neal; And Others
McMillan Magnet Center is located in urban Omaha, Nebraska, and specializes in math, computers, and communications. Once a junior high school, it was converted to a magnet center for seventh and eighth graders in the 1983-84 school year as part of Omaha's voluntary desegregation plan. Now the ethnic makeup of the student population is about 50%…
Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications
2011-05-31
A Lorentz force is used to drive a suspended MEMS-based microplate to in-plane resonance; the computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.). Towards this end, a suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via Lorentz force.
1984-06-29
sheet metal, machined and composite parts and assembling the components into final products; planning, evaluating, testing, inspecting and ... Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining and composite ... determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows
3D Object Recognition: Symmetry and Virtual Views
1992-12-01
Performing organization: Artificial Intelligence Laboratory, 545 Technology Square, Cambridge; report number AIM 1409. ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING, A.I. Memo No. 1409, C.B.C.L. Paper No. 76, December 1992: 3D Object ... research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial
[Automated processing of data from the 1985 population and housing census].
Cholakov, S
1987-01-01
The author describes the method of automated data processing used in the 1985 census of Bulgaria. He notes that the computerization of the census involves decentralization and the use of regional computing centers as well as data processing at the Central Statistical Office's National Information Computer Center. Special attention is given to problems concerning the projection and programming of census data. (SUMMARY IN ENG AND RUS)
Costs of cloud computing for a biometry department. A case study.
Knaus, J; Hieke, S; Binder, H; Schwarzer, G
2013-01-01
"Cloud" computing providers, such as the Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud dependent on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into ac count, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.
A high-resolution computational localization method for transcranial magnetic stimulation mapping.
Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro
2018-05-15
Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain deters determining the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of a so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by this proposal and direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by well-known positions of the "hand-knob" in brains for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.
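A hedged sketch of the hotspot-combination idea, not the authors' exact post-processing method: weight each session's computed field map by its motor-evoked potential and take the voxels where the weighted fields overlap most strongly. All arrays and amplitudes below are synthetic.

```python
# Illustrative hotspot estimation: combine per-session |E| maps, weighted by the
# motor-evoked potential (MEP) each coil position/orientation produced, and keep
# the voxels with the strongest weighted overlap.
import numpy as np

def hotspot(e_fields, meps, top_fraction=0.02):
    """e_fields: (n_sessions, n_voxels) |E| maps; meps: (n_sessions,) response amplitudes.
    Returns the voxel indices in the top `top_fraction` of the MEP-weighted sum."""
    weights = np.asarray(meps, dtype=float)
    weights /= weights.sum()
    combined = weights @ np.asarray(e_fields)        # weighted overlap of the field maps
    n_top = max(1, int(top_fraction * combined.size))
    return np.argsort(combined)[-n_top:]

rng = np.random.default_rng(2)
e_fields = rng.random((5, 10_000))                   # five stimulation sessions (synthetic)
meps = [120.0, 300.0, 80.0, 450.0, 60.0]             # MEP amplitudes in microvolts (synthetic)
print("candidate hotspot voxels:", hotspot(e_fields, meps))
```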