DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite performs many functions in support of that capability.
NASA Technical Reports Server (NTRS)
Byrne, F. (Inventor)
1981-01-01
A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented as instructions on a computer or computer system, and a computer-readable medium containing instructions for face recognition and identification.
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
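A minimal sketch of the homogeneous-management idea described in this abstract, assuming a hypothetical `ComputeProvider` interface and an in-memory stand-in backend (the paper's actual architecture and the EC2 interfaces it wraps are not reproduced here):

```python
import uuid
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Common interface hiding each provider's native API (hypothetical)."""

    @abstractmethod
    def launch(self, image: str) -> str: ...

    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...

class InMemoryProvider(ComputeProvider):
    """Stand-in for an EC2-style backend; a real adapter would call the
    provider's own API in these methods."""

    def __init__(self) -> None:
        self.instances: dict[str, str] = {}

    def launch(self, image: str) -> str:
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = image
        return instance_id

    def terminate(self, instance_id: str) -> None:
        self.instances.pop(instance_id, None)

def launch_everywhere(providers: list, image: str) -> list:
    """Cross-cloud management code sees only the common interface."""
    return [p.launch(image) for p in providers]

ids = launch_everywhere([InMemoryProvider(), InMemoryProvider()], "worker-image")
print(ids)
```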
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Spectrum/Orbit-Utilization Program
NASA Technical Reports Server (NTRS)
Miller, Edward F.; Sawitz, Paul; Zusman, Fred
1988-01-01
Interferences among geostationary satellites determine allocations. Spectrum/Orbit Utilization Program (SOUP) is analytical computer program for determining mutual interferences among geostationary-satellite communication systems operating in given scenario. Major computed outputs are carrier-to-interference ratios at receivers at specified stations on Earth. Information enables determination of acceptability of planned communication systems. Written in FORTRAN.
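As a point of reference for what SOUP computes, the aggregate carrier-to-interference ratio at a receiver is a simple power ratio in decibels; the sketch below is a generic illustration in Python, not SOUP's FORTRAN:

```python
import math

def carrier_to_interference_db(carrier_w: float, interferer_w: list) -> float:
    """Aggregate C/I in dB: wanted carrier power over the sum of all
    interfering carrier powers at the receiver input."""
    return 10.0 * math.log10(carrier_w / sum(interferer_w))

# A 1 pW carrier against three interfering entries (powers are made up)
print(carrier_to_interference_db(1e-12, [1e-14, 5e-15, 2e-15]))  # ~17.7 dB
```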
Computer Operating System Maintenance.
1982-06-01
The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access…
Videodisc-Computer Interfaces.
ERIC Educational Resources Information Center
Zollman, Dean
1984-01-01
Lists microcomputer-videodisc interfaces currently available from 26 sources, including home use systems connected through remote control jack and industrial/educational systems utilizing computer ports and new laser reflective and stylus technology. Information provided includes computer and videodisc type, language, authoring system, educational…
Utilization of KSC Present Broadband Communications Data System for Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2002-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.
Utilization of KSC Present Broadband Communications Data System For Digital Video Services
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
2001-01-01
This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.
Introduction to the computational structural mechanics testbed
NASA Technical Reports Server (NTRS)
Lotts, C. G.; Greene, W. H.; Mccleary, S. L.; Knight, N. F., Jr.; Paulson, S. S.; Gillian, R. E.
1987-01-01
The Computational Structural Mechanics (CSM) testbed software system, based on the SPAR finite element code and the NICE system, is described. This software is denoted NICE/SPAR. NICE was developed at Lockheed Palo Alto Research Laboratory and contains data management utilities, a command language interpreter, and a command language definition for integrating engineering computational modules. SPAR is a system of programs used for finite element structural analysis developed for NASA by Lockheed and Engineering Information Systems, Inc. It includes many complementary structural analysis, thermal analysis, and utility functions which communicate through a common database. The work on NICE/SPAR was motivated by requirements for a highly modular and flexible structural analysis system to use as a tool in carrying out research in computational methods and exploring computer hardware. Analysis examples are presented which demonstrate the benefits gained from combining the NICE command language with SPAR computational modules.
FY 72 Computer Utilization at the Transportation Systems Center
DOT National Transportation Integrated Search
1972-08-01
The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...
Computer Utilization in Middle Tennessee High Schools.
ERIC Educational Resources Information Center
Lucas, Sam
In order to determine the capacity of high schools to profit from the pre-high school computer experiences of its students, a study was conducted to measure computer utilization in selected high schools of Middle Tennessee. Questionnaires distributed to 50 principals in 28 school systems covered the following areas: school enrollment; number and…
Cost-effectiveness methodology for computer systems selection
NASA Technical Reports Server (NTRS)
Vallone, A.; Bajaj, K. S.
1980-01-01
A new approach to the problem of selecting a computer system design has been developed. The purpose of this methodology is to identify a system design that is capable of fulfilling system objectives in the most economical way. The methodology characterizes each system design by the cost of the system life cycle and by the system's effectiveness in reaching objectives. Cost is measured by a 'system cost index' derived from an analysis of all expenditures and possible revenues over the system life cycle. Effectiveness is measured by a 'system utility index' obtained by combining the impact that each selection factor has on the system objectives; each impact is assessed through a 'utility curve'. A preestablished algorithm combines cost and utility and provides a ranking of the alternative system designs, from which the 'best' design is selected.
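The abstract does not give the combining algorithm itself; as a minimal sketch, a utility-to-cost figure of merit can stand in for it (the names and the ratio rule below are assumptions, not the paper's algorithm):

```python
def rank_designs(designs: dict) -> list:
    """Rank alternative designs; `designs` maps a design name to a
    (system_cost_index, system_utility_index) pair."""
    scores = {name: utility / cost for name, (cost, utility) in designs.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

candidates = {"design-A": (1.20, 0.85),
              "design-B": (1.00, 0.70),
              "design-C": (1.45, 0.95)}
for name, score in rank_designs(candidates):
    print(f"{name}: {score:.3f}")   # design-A ranks first here
```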
Computer-aided Instructional System for Transmission Line Simulation.
ERIC Educational Resources Information Center
Reinhard, Erwin A.; Roth, Charles H., Jr.
A computer-aided instructional system has been developed which utilizes dynamic computer-controlled graphic displays and which requires student interaction with a computer simulation in an instructional mode. A numerical scheme has been developed for digital simulation of a uniform, distortionless transmission line with resistive terminations and…
Construction and application of Red5 cluster based on OpenStack
NASA Astrophysics Data System (ADS)
Wang, Jiaqing; Song, Jianxin
2017-08-01
With the application and development of cloud computing technology in various fields, the resource utilization rate of the data center has improved markedly, and systems built on cloud computing platforms have gained expansibility and stability. In the traditional deployment, Red5 cluster resource utilization is low and system stability is poor. This paper uses the efficient resource-allocation capability of cloud computing to build a Red5 server cluster based on OpenStack, to which multimedia applications can be published. The system not only achieves flexible provisioning of computing resources but also greatly improves the stability and service efficiency of the cluster.
2013-01-01
Background: Incorporation of information technology advancements in healthcare has gained wide acceptance in the last two decades. Developed countries have successfully incorporated information technology advancements in their healthcare systems, thus improving healthcare. However, only a limited application of information technology advancements is seen in the healthcare systems of developing countries. Hence, this study was aimed at assessing knowledge and utilization of computers among health workers in Addis Ababa hospitals. Methods: A quantitative cross-sectional study was conducted among 304 health workers who were selected using a stratified sampling technique from all governmental hospitals in Addis Ababa. Data were collected from April 15 to April 30, 2010 using a structured, self-administered, and pre-tested questionnaire from five government hospitals in Addis Ababa. The data were entered into Epi Info version 3.5.1 and exported to SPSS version 16. Analysis was done using multinomial logistic regression. Results: A total of 270 participants, ages ranging from 21 to 60 years, responded to the survey (88.8% response rate). A total of 91 (33.7%) respondents had an adequate knowledge of computers, 108 (40.0%) had fair knowledge, and 71 (26.3%) showed inadequate knowledge. A total of 38 (14.1%) were adequately utilizing computers, 14 (5.2%) demonstrated average or fair utilization, and the majority of respondents, 218 (80.7%), inadequately utilized computers. Significant predictor variables were average monthly income, job satisfaction index, and computer ownership. Conclusions: Computer knowledge and utilization habits of health workers were found to be very low. Increasing accessibility to computers and delivering training on their use will increase the knowledge and utilization of computers and facilitate the diffusion of the technology into the health sector. Hence, programs targeted at enhancing knowledge and skill of computer use and increasing access to computers should be designed. The association between computer knowledge/skill and health care delivery competence should be studied. PMID:23514191
Computer-Based Career Interventions.
ERIC Educational Resources Information Center
Mau, Wei-Cheng
The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
Distribution of Software Changes for Battlefield Computer Systems: A lingering Problem
1983-06-03
Defense, 10 June 1963), pp. 1-4. 3 Ibid. 4 Automatic Data Processing Systems, Book 1 - Introduction (U.S. Army Signal School, Fort Monmouth, New Jersey, 15...January 1960), passim. 5 Automatic Data Processing Systems, Book 2 - Army Use of ADPS (U.S. Army Signal School, Fort Monmouth, New Jersey, 15 October...execute an application or utility program. It controls how the computer functions during a given operation. Utility programs are merely general use
Economic assessment photovoltaic/battery systems
NASA Astrophysics Data System (ADS)
Day, J. T.; Hayes, T. P.; Hobbs, W. J.
1981-02-01
The economics of residential PV/battery systems were assessed from the utility perspective, using detailed computer simulation to determine marginal costs. Brief consideration is also given to the economics of customer ownership, utility distribution system impact, and the implications of PURPA.
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1983-01-01
The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System developed by Atsuko Computing International (ACI) on the MASS HP-1000 Computer System within the Systems Dynamics Laboratory of the Marshall Space Flight Center is described. The MASS Data Management and Analysis System was successfully implemented and is utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various atmospheric data (Sounding, Single Level, Grid, and Image) by utilizing the MASS (AVE80) modules, which share common data and user inputs, thereby reducing overhead, optimizing execution time, and thus enhancing user flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientist offices and integrated them into the MASS HP-1000 Computer System, providing a significant enhancement to the overall research environment.
Controller/Computer Interface with an Air-Ground Data Link
DOT National Transportation Integrated Search
1976-06-01
This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
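A minimal sketch of the server-side queue management described above, assuming a SQLite table as the relational store (the platform's actual schema and its JavaScript client code are not shown):

```python
import sqlite3

con = sqlite3.connect("volunteer_tasks.db")
con.execute("""CREATE TABLE IF NOT EXISTS tasks (
                   id      INTEGER PRIMARY KEY,
                   payload TEXT NOT NULL,
                   status  TEXT NOT NULL DEFAULT 'pending')""")

def enqueue(payload: str) -> None:
    """Add one small spatial/computational work unit to the queue."""
    with con:
        con.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

def claim_next():
    """Hand the oldest pending work unit to a volunteer browser node."""
    with con:  # one transaction so two nodes cannot claim the same row
        row = con.execute("SELECT id, payload FROM tasks "
                          "WHERE status = 'pending' "
                          "ORDER BY id LIMIT 1").fetchone()
        if row is not None:
            con.execute("UPDATE tasks SET status = 'assigned' WHERE id = ?",
                        (row[0],))
        return row

enqueue('{"cell": [41.65, -91.53], "timesteps": 96}')
print(claim_next())
```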
Kuhl, Mitchell; Beimel, Claudia
2016-10-01
The goal of this study was to evaluate the ability of a novel computer assisted surgery system to guide ideal placement of a lag screw during cephalomedullary nailing and then accurately measure the tip-apex distance (TAD) intraoperatively. Retrospective case review. Level II trauma hospital. The initial 98 consecutive clinical cases treated with a cephalomedullary nail in conjunction with a novel computer assisted surgery system were retrospectively reviewed. A novel computer assisted surgery system was utilized to enhance lag screw placement during cephalomedullary nailing procedures. The computer assisted surgery system calculates the TAD intraoperatively after final lag screw placement. The ideal TAD was considered to be within a range of 5 mm to 20 mm. The ability of the computer assisted surgery system (CASS) to assist in placement of a lag screw within the ideal TAD was evaluated. Intraoperative TAD measurements provided by the computer assisted surgery system were then compared to standard postoperative TAD measurements on PACS (picture archiving and communication system) images to determine whether these measurements are equivalent. 79 cases (80.6%) were available with complete information for a retrospective review. All cases had CASS TAD and PACS TAD measurements greater than 5 mm and less than 20 mm. In addition, no significant difference could be detected between the intraoperative CASS TAD and the postoperative PACS TAD (p=0.374, Wilcoxon test; p=0.174, paired t-test). A cut-out rate of 0% was observed in all patients who were treated with CASS in this case series (95% CI: 0-3.01%). The novel computer assisted surgery system tested here is an effective and reliable adjunct that can be utilized for optimal lag screw placement in cephalomedullary nailing procedures. The computer assisted surgery system provides an accurate intraoperative TAD measurement that is equivalent to the standard postoperative measurement utilizing PACS images. Therapeutic Level IV. Copyright © 2016 Elsevier Ltd. All rights reserved.
An information retrieval system for research file data
Joan E. Lengel; John W. Koning
1978-01-01
Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....
Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.
Fong, Stephen S
2014-08-01
Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.
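As one concrete flavor of the prospective, in silico design approach mentioned here, constraint-based flux balance analysis solves a linear program over a stoichiometric matrix; the toy two-reaction network and all bounds below are invented for illustration, not taken from the review:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports metabolite A; R2 converts A into biomass.
S = np.array([[1.0, -1.0]])          # rows: metabolites, columns: reactions
c = np.array([0.0, -1.0])            # linprog minimizes, so negate biomass flux
bounds = [(0.0, 10.0), (0.0, None)]  # uptake capped at 10 mmol/gDW/h (assumed)

# Steady state S @ v = 0; maximize the biomass flux v[1].
result = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print("optimal biomass flux:", result.x[1])   # hits the uptake limit: 10.0
```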
Coal-seismic, desktop computer programs in BASIC; Part 7, Display and compute shear-pair seismograms
Hasbrouck, W.P.
1983-01-01
Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report discusses and presents five computer programs used to display and compute shear-pair seismograms.
Permanent-File-Validation Utility Computer Program
NASA Technical Reports Server (NTRS)
Derry, Stephen D.
1988-01-01
Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
Computer Conferencing: A Campus Meets Online.
ERIC Educational Resources Information Center
Tooey, Mary Joan; Wester, Beverly R.
1989-01-01
Describes the implementation and use of a computer conferencing system at the University of Maryland at Baltimore. The discussion covers the pros and cons of computer conferencing in general, an informal evaluation of the system at Baltimore, and some predictions for future enhancements and utilization. (CLB)
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...
Overview of ASC Capability Computing System Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott W.
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements, and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
1977-01-26
Sisteme Matematicheskogo Obespecheniya YeS EVM [Applied Programs in the Software System for the Unified System of Computers], by A. Ye. Fateyev, A. I...computerized systems are most effective in large production complexes, in which the level of utilization of computers can be as high as 500,000...performance of these tasks could be furthered by the complex introduction of electronic computers in automated control systems. The creation of ASU
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.
1975-01-01
The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.
Project Solo; Newsletter Number Four.
ERIC Educational Resources Information Center
Pittsburgh Univ., PA. Project Solo.
A paper titled "Myopia, Cornucopia and Utopia" makes up the major portion of this Project Solo Newsletter. It emphasizes the danger involved in the belief that the larger the system the better, and points out that although the computer utilizes technology, the human with judgment utilizes the computer. Some details of the Project Solo…
Life Lab Computer Support System's Manual.
ERIC Educational Resources Information Center
Lippman, Beatrice D.; Walfish, Stephen
Step-by-step procedures for utilizing the computer support system of Miami-Dade Community College's Life Lab program are described for the following categories: (1) Registration--Student's Lists and Labels, including three separate computer programs for current listings, next semester listings, and grade listings; (2) Competence and Resource…
Terrace Layout Using a Computer Assisted System
USDA-ARS?s Scientific Manuscript database
Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...
NASA Technical Reports Server (NTRS)
1973-01-01
Design and development efforts for a spaceborne modular computer system are reported. An initial baseline description is followed by an interface design that includes definition of the overall system response to all classes of failure. Final versions of the register level designs for all module types were completed. Packaging, support and control executive software, including memory utilization estimates and a design verification plan, were formalized to ensure a soundly integrated design of the digital computer system.
ISSYS: An integrated synergistic Synthesis System
NASA Technical Reports Server (NTRS)
Dovi, A. R.
1980-01-01
Integrated Synergistic Synthesis System (ISSYS), an integrated system of computer codes in which the sequence of program execution and data flow is controlled by the user, is discussed. The commands available to exert such control, the ISSYS major functions and rules, and the computer codes currently available in the system are described. Computational sequences frequently used in aircraft structural analysis and synthesis are defined. External computer codes utilized by the ISSYS system are documented. A bibliography on the programs is included.
Analysis and design of hospital management information system based on UML
NASA Astrophysics Data System (ADS)
Ma, Lin; Zhao, Huifang; You, Shi Jun; Ge, Wenyong
2018-05-01
With the rapid development of computer technology, computer information management systems have been utilized in many industries. A Hospital Information System (HIS) helps provide data for directors, lightens the workload of medical workers, and improves their efficiency. Following the HIS demand analysis and system design, this paper focuses on utilizing unified modeling language (UML) models to establish the use case diagram, class diagram, sequence chart, and collaboration diagram, satisfying the demands of daily patient visits, inpatient care, drug management, and other relevant operations. Finally, the paper summarizes the problems of the system and presents an outlook for the HIS system.
Over the last several years, there has been increased pressure to utilize novel technologies derived from computational chemistry, molecular biology and systems biology in toxicological risk assessment. This new area has been referred to as "Computational Toxicology". Our resear...
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
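A minimal sketch of a utility-aware heuristic with low-utility dropping, under an assumed linear-decay utility shape (the paper's actual utility specifications and heuristics are more elaborate):

```python
import heapq

def utility(task: dict, completion_time: float) -> float:
    """Time-varying utility: full value up to a soft deadline, then a
    linear decay to zero over a decay window (an assumed shape)."""
    overrun = completion_time - task["soft_deadline"]
    if overrun <= 0:
        return task["max_utility"]
    return max(0.0, task["max_utility"] * (1.0 - overrun / task["decay_window"]))

def schedule(tasks: list, n_machines: int, drop_frac: float = 0.1) -> float:
    """Greedy mapper: highest-value tasks first, earliest-free machine,
    dropping any task whose expected utility has decayed too far."""
    machines = [(0.0, m) for m in range(n_machines)]  # (next-free time, id)
    heapq.heapify(machines)
    earned = 0.0
    for task in sorted(tasks, key=lambda t: -t["max_utility"]):
        free_at, m = heapq.heappop(machines)
        finish = free_at + task["runtime"]
        value = utility(task, finish)
        if value >= drop_frac * task["max_utility"]:
            earned += value
            free_at = finish          # only accepted tasks occupy the machine
        heapq.heappush(machines, (free_at, m))
    return earned

tasks = [{"runtime": 4, "soft_deadline": 5, "decay_window": 10, "max_utility": 8},
         {"runtime": 2, "soft_deadline": 2, "decay_window": 4, "max_utility": 5}]
print(schedule(tasks, n_machines=1))   # second task decays to 0 and is dropped
```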
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.
Imprecise results: Utilizing partial computations in real-time systems
NASA Technical Reports Server (NTRS)
Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.
1987-01-01
In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
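A minimal sketch of the milestone approach, assuming a Newton-iteration kernel: a milestone is recorded after every step, and hitting the deadline returns the last recorded (imprecise but usable) result:

```python
import time

def sqrt_with_milestones(x: float, deadline_s: float) -> float:
    """Return sqrt(x) if time allows, else the last milestone reached."""
    stop_at = time.monotonic() + deadline_s
    milestone = max(x, 1.0)            # crude initial estimate (x >= 0)
    while time.monotonic() < stop_at:
        improved = 0.5 * (milestone + x / milestone)  # one Newton step
        if abs(improved - milestone) < 1e-12:
            return improved            # converged: the result is precise
        milestone = improved           # record the latest usable result
    return milestone                   # deadline reached: imprecise result

print(sqrt_with_milestones(2.0, deadline_s=0.001))
```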
NASA Technical Reports Server (NTRS)
Johannes, J. D.
1974-01-01
Techniques, methods, and system requirements are reported for an onboard computerized communications system that provides on-line computing capability during manned space exploration. Communications between man and computer take place by sequential execution of each discrete step of a procedure, by interactive progression through a tree-type structure to initiate tasks or by interactive optimization of a task requiring man to furnish a set of parameters. Effective communication between astronaut and computer utilizes structured vocabulary techniques and a word recognition system.
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. In this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for the system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow the exponential distribution, i.e., constant failure and repair rates. Sensitivity analysis is also performed and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve system performance.
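A minimal sketch of the fuzzification step, assuming triangular fuzzy failure rates combined with the lambda-tau OR-gate (series-system) expression through alpha-cut interval arithmetic; the fault-tree construction and genetic-algorithm optimization of the paper are omitted:

```python
def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number (low, modal, high) at alpha."""
    low, modal, high = tfn
    return low + alpha * (modal - low), high - alpha * (high - modal)

def fuzzy_series_mtbf(fuzzy_rates, alpha=0.5):
    """lambda_sys = sum(lambda_i) for an OR gate; MTBF = 1 / lambda_sys,
    carried through as an interval at the chosen alpha level."""
    lam_lo = sum(alpha_cut(rate, alpha)[0] for rate in fuzzy_rates)
    lam_hi = sum(alpha_cut(rate, alpha)[1] for rate in fuzzy_rates)
    return 1.0 / lam_hi, 1.0 / lam_lo   # MTBF interval (hours)

# Two components with +/-15% spread on nominal failure rates (assumed, per hour)
rates = [(0.85e-3, 1.0e-3, 1.15e-3), (1.7e-3, 2.0e-3, 2.3e-3)]
print(fuzzy_series_mtbf(rates, alpha=0.5))
```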
On-line computer system for use with low-energy nuclear physics experiments is reported
NASA Technical Reports Server (NTRS)
Gemmell, D. S.
1969-01-01
Computer program handles data from low-energy nuclear physics experiments which utilize the ND-160 pulse-height analyzer and the PHYLIS computing system. The program allows experimenters to choose from about 50 different basic data-handling functions and to prescribe the order in which these functions will be performed.
Proceedings of the Fourth Annual Workshop on the Use of Digital Computers in Process Control.
ERIC Educational Resources Information Center
Smith, Cecil L., Ed.
Contents: Computer hardware testing (results of vendor-user interaction); CODIL (a new language for process control programing); the design and implementation of control systems utilizing CRT display consoles; the systems contractor - valuable professional or unnecessary middle man; power station digital computer applications; from inspiration to…
Computer Applications in the Design Process.
ERIC Educational Resources Information Center
Winchip, Susan
Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…
Flight code validation simulator
NASA Astrophysics Data System (ADS)
Sims, Brent A.
1996-05-01
An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.
Man Machine Systems in Education.
ERIC Educational Resources Information Center
Sall, Malkit S.
This review of the research literature on the interaction between humans and computers discusses how man machine systems can be utilized effectively in the learning-teaching process, especially in secondary education. Beginning with a definition of man machine systems and comments on the poor quality of much of the computer-based learning material…
Real-time computational photon-counting LiDAR
NASA Astrophysics Data System (ADS)
Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles
2018-03-01
The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
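A minimal sketch of the direct time-of-flight step, assuming a per-pixel photon-arrival histogram; the range follows from the round-trip time of the peak bin (detector model and bin width are illustrative):

```python
import numpy as np

C = 299_792_458.0                     # speed of light in m/s

def range_from_histogram(counts: np.ndarray, bin_width_s: float) -> float:
    """Photon-counting direct time of flight: the peak histogram bin
    gives the round-trip time, and range = c * t / 2."""
    round_trip = (np.argmax(counts) + 0.5) * bin_width_s  # bin-centre time
    return C * round_trip / 2.0

hist = np.zeros(1024)
hist[333] = 50                        # simulated return peaking in bin 333
print(range_from_histogram(hist, bin_width_s=100e-12))   # ~5.0 m
```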
Flight control systems development of highly maneuverable aircraft technology /HiMAT/ vehicle
NASA Technical Reports Server (NTRS)
Petersen, K. L.
1979-01-01
The highly maneuverable aircraft technology (HiMAT) program was conceived to demonstrate advanced technology concepts through scaled-aircraft flight tests using a remotely piloted technique. Closed-loop primary flight control is performed from a ground-based cockpit, utilizing a digital computer and up/down telemetry links. A backup flight control system for emergency operation resides in an onboard computer. The onboard systems are designed to provide fail-operational capabilities and utilize two microcomputers, dual uplink receiver/decoders, and redundant hydraulic actuation and power systems. This paper discusses the design and validation of the primary and backup digital flight control systems as well as the unique pilot and specialized systems interfaces.
ERIC Educational Resources Information Center
Barker, Philip
1986-01-01
Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…
An overview of computer vision
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.
National electronic medical records integration on cloud computing system.
Mirza, Hebah; El-Masri, Samir
2013-01-01
Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries with great success. Despite its attractive features, cloud computing has not yet been utilized widely in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to an EHR system, to present a comprehensive, integrated EHR environment.
An element search ant colony technique for solving virtual machine placement problem
NASA Astrophysics Data System (ADS)
Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.
2017-09-01
The data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing paradigm tries to utilize the available resources in order to provide services; hence, maximizing resource utilization without wasting power has become a challenging task for researchers. In this paper we propose the direct guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm has been compared with the existing ant colony approach to solving the virtual machine placement problem, and it proves to provide better results than the existing technique.
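A minimal sketch of an ant-colony placement loop in this spirit (the pheromone update, parameters, and fewest-active-hosts reward below are assumptions, not the paper's exact direct-guidance rule):

```python
import random

def aco_place(vm_demand, host_cap, n_ants=20, n_iters=50, rho=0.1, Q=1.0):
    """Ant colony sketch: assign each VM to a host without exceeding
    capacity, rewarding solutions that keep fewer hosts active."""
    n_vms, n_hosts = len(vm_demand), len(host_cap)
    tau = [[1.0] * n_hosts for _ in range(n_vms)]     # pheromone trails
    best, best_used = None, n_hosts + 1
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * n_hosts
            assign = []
            for v in range(n_vms):
                feasible = [h for h in range(n_hosts)
                            if load[h] + vm_demand[v] <= host_cap[h]]
                if not feasible:
                    break                              # dead end: discard ant
                weights = [tau[v][h] for h in feasible]
                h = random.choices(feasible, weights)[0]
                load[h] += vm_demand[v]
                assign.append(h)
            else:
                used = len(set(assign))
                if used < best_used:
                    best, best_used = assign, used
        # evaporate, then reinforce the best-so-far solution
        for v in range(n_vms):
            for h in range(n_hosts):
                tau[v][h] *= (1.0 - rho)
        if best:
            for v, h in enumerate(best):
                tau[v][h] += Q / best_used
    return best, best_used

vms = [0.5, 0.4, 0.3, 0.6, 0.2]      # normalized CPU demands (assumed)
hosts = [1.0, 1.0, 1.0]
print(aco_place(vms, hosts))          # typically packs the VMs onto 2 hosts
```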
User's guide for FRMOD, a zero dimensional FRM burn code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driemeryer, D.; Miley, G.H.
1979-10-15
The zero-dimensional FRM plasma burn code, FRMOD, is written in the FORTRAN language and is currently available on the Control Data Corporation (CDC) 7600 computer at the Magnetic Fusion Energy Computer Center (MFECC), sponsored by the US Department of Energy, in Livermore, CA. This guide assumes that the user is familiar with the system architecture and some of the utility programs available on the MFE-7600 machine, since online documentation is available for system routines through the use of the DOCUMENT utility. Users may therefore refer to it for answers to system-related questions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.
2009-05-01
In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
An Innovative Improvement of Engineering Learning System Using Computational Fluid Dynamics Concept
ERIC Educational Resources Information Center
Hung, T. C.; Wang, S. K.; Tai, S. W.; Hung, C. T.
2007-01-01
An innovative concept of an electronic learning system has been established in an attempt to achieve a technology that provides engineering students with an instructive and affordable framework for learning engineering-related courses. This system utilizes an existing Computational Fluid Dynamics (CFD) package, Active Server Pages programming,…
NASA Astrophysics Data System (ADS)
Li, Haiqing; Chatterjee, Samir
With rapid advances in information and communication technology, computer-mediated communication (CMC) technologies are utilizing multiple IT platforms such as email, websites, cell-phones/PDAs, social networking sites, and gaming environments. However, no studies have compared the effectiveness of a persuasive system using such alternative channels and various persuasive techniques. Moreover, how affective computing impacts the effectiveness of persuasive systems is not clear. This study proposes that (1) persuasive technology channels in combination with persuasive strategies will have different persuasive effectiveness; and (2) adding positive emotion to a message, leading to a better overall user experience, could increase persuasive effectiveness. Affective computing (emotion) information was added to the experiment using emoticons. The initial results of a pilot study show that computer-mediated communication channels, along with various persuasive strategies, can affect persuasive effectiveness to varying degrees. The results also show that adding a positive emoticon to a message leads to a better user experience, which increases the overall persuasive effectiveness of a system.
An Educational Approach to Computationally Modeling Dynamical Systems
ERIC Educational Resources Information Center
Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl
2009-01-01
Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…
Fine grained event processing on HPCs with the ATLAS Yoda system
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre
2015-12-01
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
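A minimal mpi4py sketch of the master-client, fine-grained dispatch pattern described here (the tags, readiness handshake, and event-range chunks are assumptions; the production Yoda integrates with the broader ATLAS Event Service rather than standing alone):

```python
# Run with: mpirun -n 4 python yoda_sketch.py
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                   # master: dispatch event ranges
    events = [(i * 100, (i + 1) * 100) for i in range(10)]
    status = MPI.Status()
    active = size - 1
    while active:
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker = status.Get_source()
        if events:
            comm.send(events.pop(), dest=worker, tag=TAG_WORK)
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
            active -= 1
else:                                           # worker: request, process, repeat
    comm.send(None, dest=0, tag=TAG_DONE)       # announce readiness
    status = MPI.Status()
    while True:
        chunk = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        first, last = chunk
        # ... process events [first, last) and stream outputs off-node ...
        comm.send((first, last), dest=0, tag=TAG_DONE)
```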
ERIC Educational Resources Information Center
Brown, Carrie; And Others
This final report describes activities and outcomes of a research project on a sound-to-speech translation system utilizing a graphic mediation interface for students with severe disabilities. The STS/Graphics system is a voice recognition, computer-based system designed to allow individuals with mental retardation and/or severe physical…
Accounting utility for determining individual usage of production level software systems
NASA Technical Reports Server (NTRS)
Garber, S. C.
1984-01-01
An accounting package was developed which determines the computer resources utilized by a user during the execution of a particular program and updates a file containing accumulated resource totals. The accounting package is divided into two separate programs. The first program determines the total amount of computer resources utilized by a user during the execution of a particular program. The second program uses these totals to update a file containing accumulated totals of computer resources utilized by a user for a particular program. This package is useful to those persons who have several other users continually accessing and running programs from their accounts. The package provides the ability to determine which users are accessing and running specified programs along with their total level of usage.
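A minimal sketch of the second program's role, assuming a JSON file holds the accumulated per-user, per-program totals (the original package ran against site-specific accounting records):

```python
import json
import os

TOTALS_FILE = "usage_totals.json"    # hypothetical accumulated-totals file

def record_usage(user: str, program: str, cpu_seconds: float) -> None:
    """Fold one run's resource figures into the accumulated totals."""
    totals = {}
    if os.path.exists(TOTALS_FILE):
        with open(TOTALS_FILE) as f:
            totals = json.load(f)
    key = f"{user}:{program}"
    entry = totals.setdefault(key, {"runs": 0, "cpu_seconds": 0.0})
    entry["runs"] += 1
    entry["cpu_seconds"] += cpu_seconds
    with open(TOTALS_FILE, "w") as f:
        json.dump(totals, f, indent=2)

record_usage("alice", "orbit_sim", 12.7)
```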
Algorithm-Based Fault Tolerance Integrated with Replication
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2008-01-01
In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
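A standard concrete instance of ABFT is checksum-protected matrix multiplication (in the style of Huang and Abraham); the sketch below illustrates the detection step only, not the proposal's full ABFT-plus-replication integration:

```python
import numpy as np

def abft_matmul(A: np.ndarray, B: np.ndarray, tol: float = 1e-8) -> np.ndarray:
    """Multiply with checksums: append a column-sum row to A and a
    row-sum column to B; the product's checksum row/column must then
    match its own sums, or a fault occurred during the computation."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum matrix
    C = Ac @ Br
    data = C[:-1, :-1]
    row_ok = np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol)
    col_ok = np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
    if not (row_ok and col_ok):
        raise ArithmeticError("checksum mismatch: fault detected in matmul")
    return data

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
assert np.allclose(abft_matmul(A, B), A @ B)
```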
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes into a modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
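A minimal sketch of the linear-current-element representation, assuming each element contributes a Biot-Savart term evaluated from its midpoint (the geometry, current value, and point-element approximation are illustrative only):

```python
import numpy as np

MU0_OVER_4PI = 1e-7                   # mu0 / (4 pi), in T·m/A

def field_of_current_elements(obs, starts, ends, currents):
    """Sum Biot-Savart contributions B = (mu0/4pi) I dl x r / |r|^3
    over finite linear elements, each treated as a point element."""
    B = np.zeros(3)
    for p0, p1, current in zip(starts, ends, currents):
        dl = p1 - p0
        r = obs - (p0 + p1) / 2.0     # midpoint-to-observer vector
        B += MU0_OVER_4PI * current * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# One 10 kA eastward element 100 km below a 400 km-altitude observation point
obs = np.array([0.0, 0.0, 400e3])
print(field_of_current_elements(obs,
                                [np.array([-50e3, 0.0, 300e3])],
                                [np.array([50e3, 0.0, 300e3])],
                                [1e4]))   # on the order of 10 nT
```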
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
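A minimal sketch of the normal-moveout step for a single trace, assuming a constant rms velocity; samples mapped beyond the record end are zeroed, which acts as a crude mute:

```python
import numpy as np

def nmo_correct(trace: np.ndarray, dt: float, offset: float,
                v_rms: float) -> np.ndarray:
    """A reflection at zero-offset time t0 arrives at
    t(x) = sqrt(t0**2 + (offset / v_rms)**2) on an offset trace,
    so resample each output time t0 from that input time."""
    t0 = np.arange(len(trace)) * dt              # corrected (output) times
    tx = np.sqrt(t0**2 + (offset / v_rms)**2)    # uncorrected (input) times
    return np.interp(tx, t0, trace, right=0.0)   # zero past the record end

# 500 samples at 2 ms, 600 m offset, 2000 m/s rms velocity (assumed values)
corrected = nmo_correct(np.random.randn(500), dt=0.002,
                        offset=600.0, v_rms=2000.0)
```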
Reconciliation of the cloud computing model with US federal electronic health record regulations
2011-01-01
Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204
Reconciliation of the cloud computing model with US federal electronic health record regulations.
Schweitzer, Eugene J
2012-01-01
Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Over the years, computer modeling has been used extensively in many disciplines to solve engineering problems. A set of computer program tools is proposed to assist the engineer in the various phases of the Space Station program from technology selection through flight operations. The development and application of emulation and simulation transient performance modeling tools for life support systems are examined. The results of the development and the demonstration of the utility of three computer models are presented. The first model is a detailed computer model (emulation) of a solid amine water desorbed (SAWD) CO2 removal subsystem combined with much less detailed models (simulations) of a cabin, crew, and heat exchangers. This model was used in parallel with the hardware design and test of this CO2 removal subsystem. The second model is a simulation of an air revitalization system combined with a wastewater processing system to demonstrate the capabilities to study subsystem integration. The third model is that of a Space Station total air revitalization system. The station configuration consists of a habitat module, a lab module, two crews, and four connecting nodes.
Design Principles for a Comprehensive Library System.
ERIC Educational Resources Information Center
Uluakar, Tamer; And Others
1981-01-01
Describes an online design featuring circulation control, catalog access, and serial holdings that uses an incremental approach to system development. Utilizing a dedicated computer, this second of three releases pays particular attention to present and predicted computing capabilities as well as trends in library automation. (Author/RAA)
RADIAL COMPUTED TOMOGRAPHY OF AIR CONTAMINANTS USING OPTICAL REMOTE SENSING
The paper describes the application of an optical remote-sensing (ORS) system to map air contaminants and locate fugitive emissions. Many ORS systems may utilize radial non-overlapping beam geometry and a computed tomography (CT) algorithm to map the concentrations in a plane. In...
[Personnel Management and Computer Systems].
ERIC Educational Resources Information Center
Reeves, Robert F.
The computerized management information system at the Oakland Schools intermediate school district in Michigan is utilized by 24 local school districts. Remote terminals provide access for the development of ongoing personnel programs. Emphasis is given to four major computer subsystems that directly involve…
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
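The abstract describes the general Monte Carlo approach: draw each component's disturbances and misalignments from their full statistical distributions and propagate them through a system model. A minimal Python sketch of that idea, with a purely hypothetical two-component system model (the original program's model and tolerances are not given in the abstract):

```python
import random
import statistics

def system_response(gain_a, gain_b, misalignment):
    # Hypothetical system model: overall output as a function of two
    # component gains and an alignment disturbance (illustrative only).
    return gain_a * gain_b * (1.0 - abs(misalignment))

def monte_carlo_performance(n_trials=100_000, seed=42):
    """Propagate component tolerances through the system model by
    simulated random sampling to obtain unbiased performance statistics."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        gain_a = rng.gauss(1.00, 0.02)        # component A: 2% tolerance
        gain_b = rng.gauss(0.95, 0.03)        # component B: 3% tolerance
        misalignment = rng.gauss(0.0, 0.01)   # alignment disturbance
        samples.append(system_response(gain_a, gain_b, misalignment))
    return statistics.mean(samples), statistics.stdev(samples)

mean, spread = monte_carlo_performance()
print(f"mean system performance: {mean:.4f} +/- {spread:.4f}")
```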
Memory management and compiler support for rapid recovery from failures in computer systems
NASA Technical Reports Server (NTRS)
Fuchs, W. K.
1991-01-01
This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.
Graphics Flutter Analysis Methods, an interactive computing system at Lockheed-California Company
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1975-01-01
An interactive computer graphics system, Graphics Flutter Analysis Methods (GFAM), was developed to complement FAMAS, a matrix-oriented batch computing system, and other computer programs in performing complex numerical calculations using a fully integrated data management system. GFAM has many of the matrix operation capabilities found in FAMAS, but on a smaller scale, and is utilized when the analysis requires a high degree of interaction between the engineer and computer, and schedule constraints exclude the use of batch entry programs. Applications of GFAM to a variety of preliminary design, development design, and project modification programs suggest that interactive flutter analysis using matrix representations is a feasible and cost effective computing tool.
ERIC Educational Resources Information Center
WITMER, DAVID R.
Wisconsin State Universities have been using the computer as a management tool to study physical facilities inventories, space utilization, and enrollment and plant projections. Examples are shown graphically and described for different types of analysis, showing the card format, coding systems, and printout. Equations are provided for determining…
A Computer Based Moire Technique To Measure Very Small Displacements
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.
1987-02-01
The accuracy that can be achieved in the measurement of very small displacements with techniques such as moire, holography and speckle is limited by the noise inherent in the optical devices used. To reduce the noise-to-signal ratio, the moire method can be utilized. Two systems of carrier fringes are introduced: an initial system before the load is applied and a final system after the load is applied. The moire pattern of these two systems contains the sought displacement information, and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.
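Since the fixed-pattern noise of the optical train appears identically in both exposures, differencing the two digitized carrier-fringe patterns cancels it while retaining the displacement modulation. A schematic numpy illustration (the carrier frequency, displacement field, and noise level are invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2048)

carrier = 200.0 * 2.0 * np.pi              # carrier fringe frequency
displacement = 2e-4 * np.sin(2 * np.pi * x)  # small displacement field (sought)
optics_noise = 0.2 * rng.standard_normal(x.size)  # fixed-pattern noise of the optics

# Initial pattern (no load) and final pattern (loaded); the noise of the
# optical train is common to both digitized exposures.
initial = np.cos(carrier * x) + optics_noise
final = np.cos(carrier * (x + displacement)) + optics_noise

# Differencing removes the common noise; what remains is modulated by
# the displacement information alone.
moire = final - initial
```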
NASA Technical Reports Server (NTRS)
Rochelle, W. C.; Liu, D. K.; Nunnery, W. J., Jr.; Brandli, A. E.
1975-01-01
This paper describes the application of the SINDA (systems improved numerical differencing analyzer) computer program to simulate the operation of the NASA/JSC MIUS integration and subsystems test (MIST) laboratory. The MIST laboratory is designed to test the integration capability of the following subsystems of a modular integrated utility system (MIUS): (1) electric power generation, (2) space heating and cooling, (3) solid waste disposal, (4) potable water supply, and (5) waste water treatment. The SINDA/MIST computer model is designed to simulate the response of these subsystems to externally impressed loads. The computer model determines the amount of recovered waste heat from the prime mover exhaust, water jacket and oil/aftercooler and from the incinerator. This recovered waste heat is used in the model to heat potable water, for space heating, absorption air conditioning, waste water sterilization, and to provide for thermal storage. The details of the thermal and fluid simulation of MIST including the system configuration, modes of operation modeled, SINDA model characteristics and the results of several analyses are described.
Optical solver of combinatorial problems: nanotechnological approach.
Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor
2013-09-01
We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponentially sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.
Breathing Life into Business Concepts: Utilizing Simulations in Management Information Systems
ERIC Educational Resources Information Center
Hendrix, Stephen
2016-01-01
The Department of Computing at East Tennessee State University provides students exposure to the enterprise application SAP as a part of the Information Systems curriculum. Over the past two years, the use of SAP has expanded beyond the Department of Computing into the Management Information Systems course offered by the Department of Management…
ERIC Educational Resources Information Center
Hughes, John; And Others
This report provides a description of a Computer Aided Training System Development and Management (CATSDM) environment based on state-of-the-art hardware and software technology, and including recommendations for off the shelf systems to be utilized as a starting point in addressing the particular systematic training and instruction design and…
Program on application of communications satellites to educational development
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.
1971-01-01
Interdisciplinary research in needs analysis, communications technology studies, and systems synthesis is reported. Existing and planned educational telecommunications services are studied and library utilization of telecommunications is described. Preliminary estimates are presented of ranges of utilization of educational telecommunications services for 1975 and 1985; instructional and public television, computer-aided instruction, computing resources, and information resource sharing for various educational levels and purposes. Communications technology studies include transmission schemes for still-picture television, use of Gunn effect devices, and TV receiver front ends for direct satellite reception at 12 GHz. Two major studies in the systems synthesis project concern (1) organizational and administrative aspects of a large-scale instructional satellite system to be used with schools and (2) an analysis of future development of instructional television, with emphasis on the use of video tape recorders and cable television. A communications satellite system synthesis program developed for NASA is now operational on the university IBM 360-50 computer.
Alwan, Kalid; Awoke, Tadesse; Tilahun, Binyam
2015-03-26
Background Incorporation of information communication technology in health care has gained wide acceptance in the last two decades. Developing countries are also incorporating information communication technology into the health system including the implementation of electronic medical records in major hospitals and the use of mobile health in rural community-based health interventions. However, the literature on the level of knowledge and utilization of information communication technology by health professionals in those settings is scarce for proper implementation planning. Objective The objective of this study is to assess knowledge, computer utilization, and associated factors among health professionals in hospitals and health institutions in Ethiopia. Methods A quantitative cross-sectional study was conducted on 554 health professionals working in 7 hospitals, 19 primary health centers, and 10 private clinics in the Harari region of Ethiopia. Data were collected using a semi-structured, self-administered, and pre-tested questionnaire. Descriptive and logistic regression techniques using SPSS version 16.0 (IBM Corporation) were applied to determine the level of knowledge and identify determinants of utilization of information communication technology. Results Out of 554 participants, 482 (87.0%) of them responded to the questionnaire. Among them, 90 (18.7%) demonstrated good knowledge of computers while 142 (29.5%) demonstrated good utilization habits. Health professionals who work in the primary health centers were found to have lower knowledge (3.4%) and utilization (18.4%). Age (adjusted odds ratio [AOR]=3.06, 95% CI 0.57-5.37), field of study (AOR=3.08, 95% CI 1.65-5.73), level of education (AOR=2.78, 95% CI 1.43-5.40), and previous computer training participation (AOR=3.65, 95% CI 1.62-8.21) were found to be significantly associated with computer utilization habits of health professionals. Conclusions Computer knowledge and utilization habits of health professionals, especially those who work in primary health centers, were found to be low. Providing trainings and continuous follow-up are necessary measures to increase the likelihood of the success of implemented eHealth systems in those settings. PMID:27025996
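Adjusted odds ratios like those reported here come from exponentiating the coefficients of a multivariable logistic regression. A minimal Python sketch using statsmodels on synthetic stand-in data (the predictor names are illustrative; the study's actual dataset is not reproduced here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 482  # number of respondents

# Hypothetical binary predictors standing in for age group, field of
# study, education level, and prior computer training.
df = pd.DataFrame({
    "age_group": rng.binomial(1, 0.5, n),
    "field_of_study": rng.binomial(1, 0.3, n),
    "education_level": rng.binomial(1, 0.4, n),
    "prior_training": rng.binomial(1, 0.35, n),
})
y = rng.binomial(1, 0.3, n)  # good utilization habit (synthetic outcome)

fit = sm.Logit(y, sm.add_constant(df)).fit(disp=False)

# Adjusted odds ratios and 95% CIs: exponentiate coefficients and bounds.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```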
Repetitive Domain-Referenced Testing Using Computers: the TITA System.
ERIC Educational Resources Information Center
Olympia, P. L., Jr.
The TITA (Totally Interactive Testing and Analysis) System algorithm for the repetitive construction of domain-referenced tests utilizes a compact data bank, is highly portable, is useful in any discipline, requires modest computer hardware, and does not present a security problem. Clusters of related keyphrases, statement phrases, and distractors…
ERIC Educational Resources Information Center
Gillespie, Robert W.
A market exchange simulation utilizing the PLATO computer-assisted instructional system at the University of Illinois has been designed to teach students the principles of a general equilibrium system. It serves a laboratory function which supplements traditional instruction by stimulating students' interests and providing them with illustrations…
Smart active pilot-in-the-loop systems
NASA Astrophysics Data System (ADS)
Thomas, Segun
1995-04-01
Representation of the on-orbit microgravity environment in a 1-g environment is a continuing problem in space engineering analysis, procedures development and crew training. One way of adequately depicting weightlessness in the performance of on-orbit tasks is a realistic (real-time) computer-based representation that provides the look, touch, and feel of on-orbit operation. This paper describes how a facility, the Systems Engineering Simulator at the Johnson Space Center, is utilizing recent advances in computer processing power and multi-processing capability to intelligently represent all systems, sub-systems and environmental elements associated with space flight operations. It first describes the computer hardware and the interconnections between processors; the computer software responsible for task scheduling, health monitoring, and sub-system and environment representation; and the control room and crew station. It then describes the mathematical models that represent the dynamics of contact between the Mir and the Space Shuttle during the upcoming US and Russian Shuttle/Mir space mission. Results are presented comparing the response of the smart, active pilot-in-the-loop system to a non-time-critical CRAY model. A final example of how these systems are utilized is given in the development work that supported the highly successful Hubble Space Telescope repair mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A
Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
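The core of such a framework is identifying non-dominated trade-off points. A small Python sketch of a Pareto-front filter over candidate (energy, utility) pairs, where lower energy and higher utility are preferred (the candidate values are invented; the paper's genetic algorithm is not reproduced here):

```python
def pareto_front(allocations):
    """Return the non-dominated (energy, utility) points.

    A candidate is kept only if no other candidate is at least as good
    in both objectives (lower energy, higher utility) and strictly
    better in one.
    """
    front = []
    for e1, u1 in allocations:
        dominated = any(
            (e2 <= e1 and u2 >= u1) and (e2 < e1 or u2 > u1)
            for e2, u2 in allocations
        )
        if not dominated:
            front.append((e1, u1))
    return sorted(front)

candidates = [(120, 40), (100, 35), (150, 60), (100, 50), (90, 20)]
print(pareto_front(candidates))  # -> [(90, 20), (100, 50), (150, 60)]
```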
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.
2004-09-01
protection. Firewalls, Intrusion Detection Systems (IDS's), Anti-Virus (AV) software, and routers are such tools used. In recent years, computer security...associated with operating systems, application software, and computing hardware. When IDS's are utilized on a host computer or network, there are two...primary approaches to detecting and/or preventing attacks. Traditional IDS's, like most AV software, rely on known "signatures" to detect attacks
Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
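As a sketch of the task-category idea, the following Python fragment models a category whose resource specifications can be adjusted while a project runs; the class and its fields are hypothetical, not Lobster's actual configuration schema:

```python
from dataclasses import dataclass

@dataclass
class TaskCategory:
    """Hypothetical sketch of a Lobster-style task category: resource
    specifications that can be modified while a project is running."""
    name: str
    cores: int
    memory_mb: int
    max_running: int   # cap on concurrent jobs (bandwidth regulation)
    wall_time_s: int   # voluntary time limit so partial work is recovered
    running: int = 0

    def can_dispatch(self) -> bool:
        return self.running < self.max_running

    def update_resources(self, **limits) -> None:
        # Adjust limits mid-project instead of restarting the project.
        for key, value in limits.items():
            if not hasattr(self, key):
                raise AttributeError(f"unknown resource field: {key}")
            setattr(self, key, value)

merge = TaskCategory("merge", cores=1, memory_mb=2048,
                     max_running=200, wall_time_s=3600)
merge.update_resources(memory_mb=4096, max_running=150)
```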
The computer-communication link for the innovative use of Space Station
NASA Technical Reports Server (NTRS)
Carroll, C. C.
1984-01-01
The potential capability of the computer-communications system link of the space station is related to innovative utilization for industrial applications. Conceptual computer network architectures are presented and their respective accommodations of innovative industrial projects are discussed. Achieving maximum system availability for industrialization is a possible design goal, which would place the industrial community in an interactive mode with facilities in space. A worthy design goal would be to minimize the computer-communication management function and thereby optimize system availability for industrial users. Quasi-autonomous modes and subnetworks are key design issues, since they would be the system elements directly affecting system performance for industrial use.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
Beach Profile Analysis Systems (BPAS). Volume VI. BPAS User’s Guide: Analysis Module VOLCTR.
1982-06-01
the two seawardmost points. Before computing volume changes, common bounds are established relative to the landward and seaward extent of the surveys on...bit word size, the FORTRAN-callable sort routine (interfacing with the NOS or NOSME operating system SORTMRG utility), and the utility subroutines and
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.
Computer assisted audit techniques for UNIX (UNIX-CAATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polk, W.T.
1991-12-31
Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
Telecommunication Networks. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
One of nine brief guides for special educators on using computer technology, this guide focuses on utilizing the telecommunications capabilities of computers. Network capabilities including electronic mail, bulletin boards, and access to distant databases are briefly explained. Networks useful to the educator, general commercial systems, and local…
ERIC Educational Resources Information Center
Bozeman, William C.
This study explores the relationships between psychological types of users as identified by the Myers-Briggs Type Indicator and factors associated with the implementation and utilization of the Wisconsin System for Instructional Management (WIS-SIM), a computer management information system designed to support management processes in…
1991-09-01
System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of...front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by...run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical
Examining the architecture of cellular computing through a comparative study with a computer
Wang, Degeng; Gribskov, Michael
2005-01-01
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.
1987-01-01
A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.
Computer-based visual communication in aphasia.
Steele, R D; Weinrich, M; Wertz, R T; Kleczewska, M K; Carlson, G S
1989-01-01
The authors describe their recently developed Computer-aided VIsual Communication (C-VIC) system, and report results of single-subject experimental designs probing its use with five chronic, severely impaired aphasic individuals. Studies replicate earlier results obtained with a non-computerized system, demonstrate patient competence with the computer implementation, extend the system's utility, and identify promising areas of application. Results of the single-subject experimental designs clarify patients' learning, generalization, and retention patterns, and highlight areas of performance difficulties. Future directions for the project are indicated.
NASA Astrophysics Data System (ADS)
Ardıç, Mehmet Alper; Işleyen, Tevfik
2018-01-01
In this study, we examine the development of in-service training activities designed to enable secondary school mathematics teachers to teach mathematics using computer algebra systems. In addition, the results obtained from the research carried out during and after the in-service training are summarized. The last section focuses on suggestions any teacher can use to carry out activities aimed at using computer algebra systems in teaching environments.
On the Impact of Execution Models: A Case Study in Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram
2015-05-25
Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
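Work stealing, the technique credited with the 50 percent improvement, lets idle workers take tasks from busy ones instead of relying on a static partition. A single-threaded Python simulation of the idea (worker count and round-robin seeding are illustrative, not the paper's runtime):

```python
import collections
import random

def run_with_work_stealing(tasks, n_workers=4, seed=1):
    """Simulate work stealing: each worker owns a deque, pops work from
    its own front, and when idle steals from the back of a randomly
    chosen victim, evening out load without a central scheduler."""
    rng = random.Random(seed)
    queues = [collections.deque() for _ in range(n_workers)]
    for i, task in enumerate(tasks):           # static initial partition
        queues[i % n_workers].append(task)

    completed = [0] * n_workers
    active = True
    while active:
        active = False
        for w in range(n_workers):
            if not queues[w]:                  # idle: try to steal
                victims = [v for v in range(n_workers) if v != w and queues[v]]
                if victims:
                    queues[w].append(queues[rng.choice(victims)].pop())
            if queues[w]:
                queues[w].popleft()            # execute one task
                completed[w] += 1
                active = True
    return completed

print(run_with_work_stealing(range(37)))       # tasks completed per worker
```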
Keeping PCs up to Date Can Be Fun
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
The "joy" of computer maintenance takes many forms. These days, automation is the byword. Operating systems such as Microsoft Windows and utility suites such as Symantec's Norton Internet Security let you automatically keep crucial parts of your computer system up to date. It's fun to watch the technology keep tabs on itself. This document offers…
Technology survey of computer software as applicable to the MIUS project
NASA Technical Reports Server (NTRS)
Fulbright, B. E.
1975-01-01
Existing computer software, available from either governmental or private sources, applicable to modular integrated utility system program simulation is surveyed. Several programs and subprograms are described to provide a consolidated reference, and a bibliography is included. The report covers the two broad areas of design simulation and system simulation.
A Pilot-Scale Heat Recovery System for Computer Process Control Teaching and Research.
ERIC Educational Resources Information Center
Callaghan, P. J.; And Others
1988-01-01
Describes the experimental system and equipment including an interface box for displaying variables. Discusses features which make the circuit suitable for teaching and research in computing. Feedforward, decoupling, and adaptive control, examination of digital filtering, and a cascade loop are teaching experiments utilizing this rig. Diagrams and…
Analysis of large power systems
NASA Technical Reports Server (NTRS)
Dommel, H. W.
1975-01-01
Computer-oriented power systems analysis procedures in the electric utilities are surveyed. The growth of electric power systems is discussed along with the solution of sparse network equations, power flow, and stability studies.
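One staple of the surveyed power flow studies is the solution of sparse network equations. A minimal DC power flow sketch in Python using scipy's sparse solver, with an invented 4-bus susceptance matrix and injections:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Toy 4-bus DC power flow: B' * theta = P, with bus 0 as the slack bus.
# Sparse susceptance matrix (per-unit), illustrating the sparse network
# equations discussed in the survey.
B = csr_matrix(np.array([
    [ 20.0, -10.0, -10.0,   0.0],
    [-10.0,  25.0,  -5.0, -10.0],
    [-10.0,  -5.0,  20.0,  -5.0],
    [  0.0, -10.0,  -5.0,  15.0],
]))
P = np.array([0.0, 0.5, -0.3, -0.2])  # net injections (slack absorbs the rest)

# Remove the slack row/column and solve for the remaining bus angles.
theta = np.zeros(4)
theta[1:] = spsolve(B[1:, 1:], P[1:])
print(theta)  # voltage angles in radians, relative to the slack bus
```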
A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Safety Metrics for Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Leveson, Nancy G; Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
The Use of Microcomputers in Distance Teaching Systems. ZIFF Papiere 70.
ERIC Educational Resources Information Center
Rumble, Greville
Microcomputers have revolutionized distance education in virtually every area. Used alone, personal computers provide students with a wide range of utilities, including word processing, graphics packages, and spreadsheets. When linked to a mainframe computer or connected to other personal computers in local area networks, microcomputers can…
Central Computational Facility CCF communications subsystem options
NASA Technical Reports Server (NTRS)
Hennigan, K. B.
1979-01-01
A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.
Educational Computer Utilization and Computer Communications.
ERIC Educational Resources Information Center
Singh, Jai P.; Morgan, Robert P.
As part of an analysis of educational needs and telecommunications requirements for future educational satellite systems, three studies were carried out. 1) The role of the computer in education was examined and both current status and future requirements were analyzed. Trade-offs between remote time sharing and remote batch process were explored…
System Study at SUNY College Bookstore/Oswego
ERIC Educational Resources Information Center
DeVita, Richard; And Others
1975-01-01
A system study of the textbook ordering department is presented including systems flow chart, chart of activities, and description of operations and procedures for utilizing the computer system. Changes based on the study are noted. (JT)
NASA Technical Reports Server (NTRS)
1983-01-01
Kennedy Space Center's primary institutional computer is a 4-megabyte IBM 4341 with 3.175 billion characters of IBM 3350 disc storage. This system utilizes the Software AG product known as ADABAS, with the online, user-oriented features of NATURAL and COMPLETE, as a Data Base Management System (DBMS). It is operational under OS/VS1 and is currently supporting batch/online applications such as Personnel, Training, Physical Space Management, Procurement, Office Equipment Maintenance, and Equipment Visibility. A third and by far the largest DBMS application is known as the Shuttle Inventory Management System (SIMS), which is operational on a dedicated Honeywell 6660 computer system utilizing Honeywell Integrated Data Storage I (IDSI) as the DBMS. The SIMS application is designed to provide central supply system acquisition, inventory control, receipt, storage, and issue of spares, supplies, and materials.
A continuous physiological data collector
NASA Technical Reports Server (NTRS)
Bush, J. C.
1972-01-01
COP-DAC system utilizes oxygen and carbon dioxide analyzers, gas-flow meter, gas breathe-through system, analog computer, and data storage system to provide actual rather than average measurements of physiological and metabolic functions.
NASA Technical Reports Server (NTRS)
Schulte, Erin
2017-01-01
As augmented and virtual reality grow in popularity, and more researchers focus on their development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) system utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams, and other similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development, and other similar areas.
Engineering and Design: Control Stations and Control Systems for Navigation Locks and Dams
1997-05-30
of human intelli-...hypothetical lock and dam configurations. Finally, b. Terminology. (1) PLC system. The computer-based systems utilize special...electrical industry for industrial use. Therefore, for purposes of this document, a computer-based system is referred to as a PLC system. (2) Relay-based...be custom made, because most of today's control systems of any complexity are PLC-based, the standard size of a given motor starter cubicle is not
Navier-Stokes simulation of plume/Vertical Launching System interaction flowfields
NASA Astrophysics Data System (ADS)
York, B. J.; Sinha, N.; Dash, S. M.; Anderson, L.; Gominho, L.
1992-01-01
The application of Navier-Stokes methodology to the analysis of Vertical Launching System/missile exhaust plume interactions is discussed. The complex 3D flowfields related to the Vertical Launching System are computed utilizing the PARCH/RNP Navier-Stokes code. PARCH/RNP solves the fully-coupled system of fluid, two-equation turbulence (k-epsilon) and chemical species equations via the implicit, approximately factored, Beam-Warming algorithm utilizing a block-tridiagonal inversion procedure.
Optimizing Resource Utilization in Grid Batch Systems
NASA Astrophysics Data System (ADS)
Gellrich, Andreas
2012-12-01
On Grid sites, the requirements of the computing tasks (jobs) to computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users are using VOMS proxies as planned by the VO management, e.g. 'role=production' for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.
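A toy sketch of the mapping step in Python; the VO names, local accounts, and queue labels are hypothetical, not an actual site configuration:

```python
# Illustrative mapping from VOMS group/role attributes to local batch
# accounts and queues; all names here are invented placeholders.
VOMS_MAP = {
    ("/cms", "production"):   {"uid": "cmsprd", "queue": "cpu_bound"},
    ("/cms", "NULL"):         {"uid": "cmsusr", "queue": "io_bound"},
    ("/atlas", "production"): {"uid": "atlprd", "queue": "cpu_bound"},
}

def assign_job(vo_group: str, role: str) -> dict:
    """Map the proxy's VO group and role to a local UID and queue, so
    that e.g. role=production (CPU-bound Monte Carlo) lands on nodes
    tuned for CPU work rather than on high-bandwidth analysis nodes."""
    try:
        return VOMS_MAP[(vo_group, role)]
    except KeyError:
        raise ValueError(f"no local account mapped for {vo_group!r}/{role!r}")

print(assign_job("/cms", "production"))  # {'uid': 'cmsprd', 'queue': 'cpu_bound'}
```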
DOT National Transportation Integrated Search
1997-01-01
Intelligent transportation systems (ITS) are systems that utilize advanced technologies, including computer, communications and process control technologies, to improve the efficiency and safety of the transportation system. These systems encompass a...
Mobile healthcare information management utilizing Cloud Computing and Android OS.
Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias
2010-01-01
Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
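For illustration, storing a health record object in S3 from Python might look like the following boto3 sketch; the bucket name, key, and record fields are placeholders, valid AWS credentials are assumed, and a production EHR system would add client-side encryption, access control, and audit logging:

```python
import json
import boto3

s3 = boto3.client("s3")

record = {
    "patient_id": "anon-0042",       # de-identified placeholder ID
    "observation": "blood_pressure",
    "value": "120/80",
}

# Store one health record object; server-side encryption is requested
# so the data is encrypted at rest.
s3.put_object(
    Bucket="example-ehr-bucket",
    Key="records/anon-0042/2010-01-01.json",
    Body=json.dumps(record).encode("utf-8"),
    ServerSideEncryption="AES256",
)
```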
Integrative Genomics and Computational Systems Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Huang, Yufei; Zhang, Bing
The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still in its early stages. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.
Advanced Transport Operating System (ATOPS) utility library software description
NASA Technical Reports Server (NTRS)
Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.
1993-01-01
The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.
Experience with ethylene plant computer control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasi, M.; Darby, M.L.; Sourander, M.
This article discusses the control strategies and results of a computer-based ethylene plant control system, along with the opinions of management and operations staff. The ethylene unit contains 9 cracking heaters, and its nameplate capacity is 200,000 tpa ethylene. Control performance is reported for different unit loadings and feedstock types. When the yield and utility consumption benefits due to computer control are converted into monetary units, the payback time of the system is less than 2 years.
Methods for utilizing maximum power from a solar array
NASA Technical Reports Server (NTRS)
Decker, D. K.
1972-01-01
A preliminary study of maximum power utilization methods was performed for an outer planet spacecraft using an ion thruster propulsion system and a solar array as the primary energy source. The problems which arise from operating the array at or near the maximum power point of its I-V characteristic are discussed. Two closed loop system configurations which use extremum regulators to track the array's maximum power point are presented. Three open loop systems are presented that: (1) measure the maximum power of each array section and compute the total array power, (2) utilize a reference array to predict the characteristics of the solar array, or (3) utilize impedance measurements to predict the maximum power utilization. The advantages and disadvantages of each system are discussed and recommendations for further development are made.
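An extremum regulator of the kind described can be approximated in software by perturb-and-observe hill climbing on the array's power-voltage curve. A Python sketch, with a made-up I-V characteristic standing in for a real array:

```python
def track_maximum_power(measure_iv, v_start=30.0, dv=0.1, steps=200):
    """Perturb-and-observe sketch of an extremum regulator: step the
    operating voltage, keep the direction while power increases, and
    reverse otherwise. measure_iv(v) returns array current at voltage v."""
    v, direction = v_start, +1.0
    p_prev = v * measure_iv(v)
    for _ in range(steps):
        v += direction * dv
        p = v * measure_iv(v)
        if p < p_prev:            # passed the peak: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

def demo_iv(v, i_sc=3.0, v_oc=40.0):
    # Hypothetical I-V curve: current falls off near open-circuit voltage.
    return max(0.0, i_sc * (1.0 - (v / v_oc) ** 8))

print(f"operating point near maximum power: {track_maximum_power(demo_iv):.1f} V")
```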
The eye-tracking computer device for communication in amyotrophic lateral sclerosis.
Spataro, R; Ciriacono, M; Manno, C; La Bella, V
2014-07-01
To explore the effectiveness of communication and the variables affecting eye-tracking computer system (ETCS) utilization in patients with late-stage amyotrophic lateral sclerosis (ALS), we performed a telephone survey of 30 patients with advanced, non-demented ALS who had been provided with an ETCS device. Median age at interview was 55 years (IQR = 48-62), with a relatively high education (13 years, IQR = 8-13). A one-off interview was conducted, and answers were later provided with the help of the caregiver. The interview included items about demographic and clinical variables affecting daily ETCS utilization. The median time of ETCS device possession was 15 months (IQR = 9-20). The actual daily utilization was 300 min (IQR = 100-720), mainly for communication with relatives/caregivers, internet surfing, e-mailing, and social networking. 23.3% of patients with ALS (n = 7) had a low daily ETCS utilization; the most commonly reported causes were eye-gaze tiredness and oculomotor dysfunction. The eye-tracking computer system is a valuable device for AAC in patients with ALS, and it can be operated with good performance. The development of oculomotor impairment may limit its functional use.
Computing in Secondary Physics at Armdale, W.A.
ERIC Educational Resources Information Center
Smith, Clifton L.
1976-01-01
An Australian secondary school physics course utilizing an electronic programmable calculator and computer is described. Calculation techniques and functions, programming techniques, and simulation of physical systems are detailed. A summary of student responses to the program is included. (BT)
ERIC Educational Resources Information Center
Kent, Thomas H.; And Others
The advantages, feasibility and problems associated with a student-paced course were investigated, and a computer managed evaluation system compared to paper and pencil testing mode. The development of a self-paced course was facilitated by explicit behavior objectives, a variety of learning materials referenced to the objectives and a large pool…
Smart Grid Privacy through Distributed Trust
NASA Astrophysics Data System (ADS)
Lipton, Benjamin
Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
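A minimal Python illustration of the additive secret sharing idea behind such schemes: each meter splits its reading into random shares, one per aggregation server, so no single server sees a reading, yet the partial sums combine to the true total. (The server count and readings are invented; real proposals add authentication and handle meter churn.)

```python
import random

PRIME = 2**61 - 1  # arithmetic over a prime field

def share(value, n_servers, rng):
    """Split a reading into n additive shares that sum to the value mod
    PRIME; any n-1 shares together reveal nothing about the reading."""
    shares = [rng.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

rng = random.Random(7)
readings = [142, 98, 305]          # three households' usage (watt-hours)
n_servers = 3

# Each household sends one share to each aggregation server.
per_server = [[] for _ in range(n_servers)]
for r in readings:
    for s, sh in enumerate(share(r, n_servers, rng)):
        per_server[s].append(sh)

# Each server sums only the shares it holds; combining the partial sums
# yields total consumption without exposing any individual reading.
partials = [sum(col) % PRIME for col in per_server]
print(sum(partials) % PRIME)       # -> 545
```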
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
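The Pareto-distribution claim can be checked against a trace by maximum-likelihood fitting. A short Python sketch with scipy, using synthetic inter-arrival samples in place of the actual Spider trace:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic I/O request inter-arrival times drawn from a Pareto
# distribution (shape b=1.5), standing in for a real storage trace.
inter_arrivals = stats.pareto.rvs(b=1.5, size=50_000, random_state=rng)

# Fit a Pareto model back to the observations; loc is pinned at 0 so the
# fit recovers shape and scale, as a synthesized workload model would use.
b_hat, loc_hat, scale_hat = stats.pareto.fit(inter_arrivals, floc=0)
print(f"fitted shape={b_hat:.2f}, scale={scale_hat:.2f}")
```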
NASA Technical Reports Server (NTRS)
Stack, S. H.
1981-01-01
A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.
ERIC Educational Resources Information Center
Komsky, Susan
Fiscal Impact Budgeting Systems (FIBS) are sophisticated computer based modeling procedures used in local government organizations, whose results, however, are often overlooked or ignored by decision makers. A study attempted to discover the reasons for this situation by focusing on four factors: potential usefulness, faith in computers,…
Energy Finite Element Analysis Developments for Vibration Analysis of Composite Aircraft Structures
NASA Technical Reports Server (NTRS)
Vlahopoulos, Nickolas; Schiller, Noah H.
2011-01-01
The Energy Finite Element Analysis (EFEA) has been utilized successfully for modeling complex structural-acoustic systems with isotropic structural material properties. In this paper, a formulation for modeling structures made out of composite materials is presented. An approach based on spectral finite element analysis is utilized first for developing the equivalent material properties for the composite material. These equivalent properties are employed in the EFEA governing differential equations for representing the composite materials and deriving the element level matrices. The power transmission characteristics at connections between members made out of non-isotropic composite material are considered for deriving suitable power transmission coefficients at junctions of interconnected members. These coefficients are utilized for computing the joint matrix that is needed to assemble the global system of EFEA equations. The global system of EFEA equations is solved numerically and the vibration levels within the entire system can be computed. The new EFEA formulation for modeling composite laminate structures is validated through comparison to test data collected from a representative composite aircraft fuselage that is made out of a composite outer shell and composite frames and stiffeners. NASA Langley constructed the composite cylinder and conducted the test measurements utilized in this work.
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.
1987-01-01
A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.
Challenges in Securing the Interface Between the Cloud and Pervasive Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagesse, Brent J
2011-01-01
Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
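One standard way coverage enters such a model is the textbook duplex case: two units with constant failure rate lam, where the first failure is successfully detected, isolated, and recovered from with probability c. The closed form below is an illustrative instance, not necessarily the paper's architecture.

    # R(t) = P(no failure) + P(first failure covered and survivor lasts):
    # R(t) = exp(-2*lam*t) + 2*c*(exp(-lam*t) - exp(-2*lam*t))
    import math

    def duplex_reliability(t, lam, c):
        return math.exp(-2 * lam * t) + 2 * c * (math.exp(-lam * t) - math.exp(-2 * lam * t))

    for c in (0.90, 0.99, 1.0):  # coverage dominates the reliability loss
        print(c, round(duplex_reliability(t=10.0, lam=1e-3, c=c), 6))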
ERIC Educational Resources Information Center
Hecquet, Ignace; And Others
Principles are outlined that are used as a basis for the system of pricing the services of the Computer Centre. The system illustrates the use of a management method to secure better utilization of university resources. Departments decide how to use the appropriations granted to them and establish a system of internal prices that reflect the cost…
Time Warp Operating System, Version 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.;
1993-01-01
Time Warp Operating System, TWOS, is a special-purpose computer program designed to support parallel simulation of discrete events. It is a complete implementation of the Time Warp software mechanism, which implements a distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. TWOS supports simulations and other computations in which both virtual time and dynamic load balancing are used. The program utilizes the underlying resources of the host operating system. It is written in the C programming language.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
This look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, computes the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.
TMS communications hardware. Volume 1: Computer interfaces
NASA Technical Reports Server (NTRS)
Brown, J. S.; Weinrich, S. S.
1979-01-01
A prototype coaxial cable bus communications system was designed to be used in the Trend Monitoring System (TMS) to connect intelligent graphics terminals (based around a Data General NOVA/3 computer) to a MODCOMP IV host minicomputer. The direct memory access (DMA) interfaces which were utilized for each of these computers are identified. It is shown that for the MODCOMP, an off-the-shelf board was suitable, while for the NOVAs, custom interface circuitry was designed and implemented.
Park, Jung-Ho; Park, Sung-Ae; Yoon, Soon-Nyoung; Kang, Sung-Rye
2004-04-01
The purpose of this study was to develop a home care nursing network system for operating home care effectively and efficiently by utilizing a wired-wireless network and mobile computing to record and send patients' data in real time, and by connecting the headquarters office and the local offices with home care nurses over the Internet. It complements the preceding research from 1999 by adding home care nursing standard guidelines and upgrading the PDA program. Method/1 and prototyping were adopted to develop the main network system. The detailed research process is as follows: 1) home care nursing standard guidelines for diabetes, cancer, and peritoneal dialysis were added to the 12 domains of nursing problem fields, with nursing assessment/intervention algorithms; 2) the PDA program was refined by removing and consolidating unnecessary and duplicated paths in the home care nursing algorithms, and the PDA system was upgraded with hardware that integrates the PDA and the data transmission modem on a CDMA 1X base, in order to reduce transmission errors and failures.
Utility Computing: Reality and Beyond
NASA Astrophysics Data System (ADS)
Ivanov, Ivan I.
Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. Much like water, gas, electricity and telecommunications, the concept of computing as a public utility was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable 'on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?
Design issues for grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Ropp, Michael Eugene
1998-08-01
Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that computer design tools for the optimal design of PV systems with curved PV arrays are not available, and those tools that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered, and the existing state of the art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way, and the superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3; it is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this improved method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.
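To make the islanding discussion concrete, here is a hedged sketch of the passive over/under-voltage and frequency relay logic the thesis argues is insufficient on its own; the trip windows are rough approximations of common utility settings, not values from the thesis.

    # A balanced island (local generation matching local load) can keep
    # voltage and frequency inside these windows and evade detection.
    def island_suspected(v_pu, freq_hz,
                         v_min=0.88, v_max=1.10,
                         f_min=59.3, f_max=60.5):
        """True if voltage (per unit) or frequency leaves its window."""
        return not (v_min <= v_pu <= v_max and f_min <= freq_hz <= f_max)

    print(island_suspected(0.70, 60.0))   # heavy voltage sag trips -> True
    print(island_suspected(1.02, 60.02))  # balanced island -> False (missed)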
ERIC Educational Resources Information Center
Watson, William R.; Watson, Sunnie Lee
2007-01-01
The application of computers to education has a history dating back to the 1950s, well before the pervasive spread of personal computers (Reiser, 1987). With a mature history and varying approaches to utilizing computers for education, a veritable alphabet soup of terms and acronyms related to computers in education have found their way into the…
Officer Computer Utilization Report
1992-03-01
Shipboard Non-tactical ADP Program (SNAP), Navy Intelligence Processing System (NIPS), Retail Operation Management (ROM); mainframe systems; technical/tactical systems.
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
NASA Astrophysics Data System (ADS)
McFall, Steve
1994-03-01
With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.
Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Wells, B. Earl
2003-01-01
The focus of this project was to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.
Itasaka, H; Matsumata, T; Taketomi, A; Yamamoto, K; Yanaga, K; Takenaka, K; Akazawa, K; Sugimachi, K
1994-12-01
A simple outpatient follow-up system was developed with a laptop personal computer to assist in the management of patients with hepatocellular carcinoma after hepatic resection. Since it is based on a non-relational database program and the graphical user interface of the Macintosh operating system, those who are not computer specialists can use it. It is helpful for promptly recognizing the current status and problems of patients, diagnosing recurrences of the disease, and preventing loss to follow-up. The portability of the computer also facilitates utilization of these data everywhere, such as in clinical conferences and laboratories.
23 CFR 771.117 - Categorical exclusions.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., computer-aided dispatching systems, radio communications systems, dynamic message signs, and security... effects can be assessed; and Federal-aid system revisions which establish classes of highways on the Federal-aid highway system. (2) Approval of utility installations along or across a transportation...
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines distributed pre-packaged with pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and to solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
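A hedged sketch of one balancing step under the stated assumptions, with block costs and relative CPU speeds taken as known inputs; the longest-processing-time heuristic here is a generic stand-in, not the authors' exact algorithm.

    def balance(blocks, speeds):
        """blocks: {name: cost}; speeds: {processor: relative speed}."""
        finish = {p: 0.0 for p in speeds}     # projected finish times
        placement = {}
        for name, cost in sorted(blocks.items(), key=lambda kv: -kv[1]):
            # Place each block where it would finish earliest.
            proc = min(finish, key=lambda p: finish[p] + cost / speeds[p])
            finish[proc] += cost / speeds[proc]
            placement[name] = proc
        return placement, max(finish.values())

    blocks = {"b0": 8.0, "b1": 6.0, "b2": 5.0, "b3": 3.0, "b4": 2.0}
    speeds = {"ws1": 1.0, "ws2": 2.0}          # ws2 is twice as fast
    print(balance(blocks, speeds))             # makespan 8.0

In a dynamic setting this step would be re-run as measured speeds and loads change, which is the adaptive behavior the abstract describes.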
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Photovoltaics and electric utilities
NASA Astrophysics Data System (ADS)
Bright, R.; Leigh, R.; Sills, T.
1981-12-01
The long term value of grid-connected, residential photovoltaic (PV) systems is determined. The value of the PV electricity is defined as the full avoided cost in accordance with the Public Utilities Regulatory Policies Act of 1978. The avoided cost is computed using a long range utility planning approach to measure revenue requirement changes in response to the time-phased introduction of PV systems into the grid. A case study approach to three utility systems is used. The changing value of PV electricity over a twenty year period from 1985 is presented, and the fuel and capital savings due to PV are analyzed. These values are translated into measures of breakeven capital investment under several options of power interchange and pricing.
Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration
NASA Technical Reports Server (NTRS)
Wolfe, E. W.; Alderman, J. D.
1971-01-01
The utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed as a prime imaging tool for automated missions. Its utility would be enhanced by development of computer techniques that utilize digital camera output for construction of topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand specimen examination at low magnification.
NASA Technical Reports Server (NTRS)
Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.
1974-01-01
The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.
Systems Analysis, Machineable Circulation Data and Library Users and Non-Users.
ERIC Educational Resources Information Center
Lubans, John, Jr.
A study to be made with computer-based circulation data of the non-use and use of a large academic library is discussed. A search of the literature reveals that computer-based circulation systems can be, but have not been, utilized to provide data bases for systematic analyses of library users and resources. The data gathered in the circulation…
NASA Technical Reports Server (NTRS)
Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and also in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10(exp 12) floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratory in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.
NASA Technical Reports Server (NTRS)
Vallee, J.; Wilson, T.
1976-01-01
Results are reported of the first experiments for a computer conference management information system at the National Aeronautics and Space Administration. Between August 1975 and March 1976, two NASA projects with geographically separated participants (NASA scientists) used the PLANET computer conferencing system for portions of their work. The first project was a technology assessment of future transportation systems. The second project involved experiments with the Communication Technology Satellite. As part of this project, pre- and postlaunch operations were discussed in a computer conference. These conferences also provided the context for an analysis of the cost of computer conferencing. In particular, six cost components were identified: (1) terminal equipment, (2) communication with a network port, (3) network connection, (4) computer utilization, (5) data storage and (6) administrative overhead.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
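A hedged sketch in the spirit of the proposed low-complexity scheme: water-filling over subcarriers under a total power budget, with per-subcarrier caps standing in for the interference limits toward the PUs. The gains, caps, and bisection loop are illustrative, not the paper's algorithm.

    def capped_waterfill(gains, p_total, caps, iters=60):
        """Bisect on water level mu; power_i = min(cap_i, max(0, mu - 1/g_i))."""
        lo, hi = 0.0, p_total + max(1.0 / g for g in gains) + max(caps)
        for _ in range(iters):
            mu = 0.5 * (lo + hi)
            p = [min(cap, max(0.0, mu - 1.0 / g)) for g, cap in zip(gains, caps)]
            if sum(p) > p_total:
                hi = mu
            else:
                lo = mu
        return p

    gains = [2.0, 1.0, 0.25, 4.0]   # illustrative channel gains
    caps = [0.5, 0.5, 0.5, 0.1]     # interference-driven per-band caps
    print(capped_waterfill(gains, p_total=1.0, caps=caps))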
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...
2017-08-17
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
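The state-estimation building block can be illustrated with the classic weighted least-squares estimator that a DLSE distributes across areas: x = (H'WH)^-1 H'Wz. The matrices below are toy values, not the modified IEEE 118-bus system.

    import numpy as np

    H = np.array([[1.0, 0.0],        # measurement model (toy 3x2)
                  [1.0, -1.0],
                  [0.0, 1.0]])
    z = np.array([1.02, 0.05, 0.97]) # measurements
    W = np.diag([1e4, 1e3, 1e4])     # weights = inverse error variances

    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    print("estimated state:", x)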
Conversion of LARSYS III.1 to an IBM 370 computer
NASA Technical Reports Server (NTRS)
Williams, G. N.; Leggett, J.; Hascall, G. A.
1975-01-01
A software system for processing multispectral aircraft or satellite data (LARSYS) was designed and written at the Laboratory for Applications of Remote Sensing at Purdue University. This system, implemented on an IBM 360/67 computer utilizing the Cambridge Monitor System, is interactive in nature. TAMU LARSYS maintains the essential capabilities of Purdue's LARSYS. The machine configuration for which it has been converted is an IBM-compatible Amdahl 470V/6 computer utilizing the Time Sharing Option (TSO) of the currently implemented OS/VS2 operating system. Due to TSO limitations, the NASA-JSC deliverable TAMU LARSYS comprises two parts. Part one is a TSO control card checker for LARSYS control cards, and part two is a batch version of LARSYS. Used together, they afford most of the capabilities of the original LARSYS III.1. Additionally, two programs have been written by TAMU to support LARSYS processing. The first is an ERTS-to-MIST conversion program used to convert ERTS data to the LARSYS input form, the MIST tape. The second is a system runtable code which maintains tape/file location information for the MIST data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wirick, D.W.; Montgomery, G.E.; Wagman, D.C.
1995-09-01
One technology that can help utilities remain financially viable in competitive markets, and help utilities and regulators better serve the public, is information technology. Because geography is an important part of an electric, natural gas, telecommunications, or water utility, computer-based Geographic Information Systems (GIS) and related Automated Mapping/Facilities Management systems are emerging as core technologies for managing an ever-expanding variety of formerly manual or paper-based tasks. This report focuses on GIS as an example of the types of information systems that can be used by utilities and regulatory commissions. Chapter 2 provides general information about information systems and the effects of information on organizations; Chapter 3 explores the conversion of an organization to an information-based one; Chapters 4 and 5 set out GIS as an example of the use of information technologies to transform the operations of utilities and commissions; Chapter 6 describes the use of GIS and other information systems for organizational reengineering efforts; and Chapter 7 examines the regulatory treatment of information systems.
ERIC Educational Resources Information Center
Stotter, Philip L.; Culp, George H.
An experimental course in organic chemistry utilized computer-assisted instructional (CAI) techniques. The CAI lessons provided tutorial drill and practice and simulated experiments and reactions. The Conversational Language for Instruction and Computing was used, along with a CDC 6400-6600 system; students scheduled and completed the lessons at…
Computer-assisted surgical planning and automation of laser delivery systems
NASA Astrophysics Data System (ADS)
Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed
1991-05-01
This paper describes a 'real time' interactive surgical treatment planning workstation, utilizing multimodality imaging (computed tomography, magnetic resonance imaging, digital angiography), that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional display of a patient's lesion.
Concurrent ultrasonic weld evaluation system
Hood, Donald W.; Johnson, John A.; Smartt, Herschel B.
1987-01-01
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.
Concurrent ultrasonic weld evaluation system
Hood, D.W.; Johnson, J.A.; Smartt, H.B.
1985-09-04
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.
Concurrent ultrasonic weld evaluation system
Hood, D.W.; Johnson, J.A.; Smartt, H.B.
1987-12-15
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder is disclosed. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws. 5 figs.
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
The Gain of Resource Delegation in Distributed Computing Environments
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, the granting of idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups, we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
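A toy rendering of the operator's-viewpoint idea under stated assumptions (queue length as the utilization signal, fixed lease/grant thresholds); the site names and thresholds are illustrative, not the paper's two policies.

    def reconfigure(sites, lease_at=0.9, grant_at=0.4, chunk=16):
        """Lease extra nodes when over-utilized; grant idle nodes back."""
        moves = []
        for name, s in sites.items():
            util = s["queued"] / max(s["nodes"], 1)
            if util > lease_at:
                moves.append((name, "lease", chunk))
            elif util < grant_at and s["nodes"] > chunk:
                moves.append((name, "grant", chunk))
        return moves

    sites = {"siteA": {"nodes": 128, "queued": 130},   # over-utilized
             "siteB": {"nodes": 256, "queued": 60}}    # lightly loaded
    print(reconfigure(sites))  # siteA leases, siteB grants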
Solving Nonlinear Differential Equations in the Engineering Curriculum
ERIC Educational Resources Information Center
Auslander, David M.
1977-01-01
Described is the Dynamic System Simulation Language (SIM) mini-computer system utilized at the University of California, Los Angeles. It is used by engineering students for solving nonlinear differential equations. (SL)
Bello, Ibrahim S; Sanusi, Abubakr A; Ezeoma, Ikechi T; Abioye-Kuteyi, Emmanuel A; Akinsola, Adewale
2004-01-01
Background: The computer revolution and Information Technology (IT) have transformed modern health care systems in the areas of communication, teaching, storage and retrieval of medical information. These developments have positively impacted patient management and the training and retraining of healthcare providers. Little information is available on the level of training and utilization of IT among health care professionals in developing countries. Objectives: To assess the knowledge and utilization pattern of information technology among health care professionals and medical students in a university teaching hospital in Nigeria. Methods: Self-structured, pretested questionnaires probing the knowledge, attitudes and utilization of computers and IT were administered to a randomly selected group of 180 health care professionals and medical students. Descriptive statistics on their knowledge, attitude and utilization patterns were calculated. Results: A total of 148 participants (82%) responded, comprising 60 medical students, 41 medical doctors and 47 health records staff. Their ages ranged between 22 and 54 years. Eighty respondents (54%) reportedly had received some form of computer training while the remaining 68 (46%) had no training. Only 39 respondents (26%) owned a computer while the remaining 109 (74%) had no computer. In spite of this, a total of 28 respondents (18.9%) demonstrated a good knowledge of computers while 87 (58.8%) had average knowledge. Only 33 (22.3%) showed poor knowledge. Fifty-nine respondents (39.9%) demonstrated good attitudes and good utilization habits, while in 50 respondents (33.8%) attitudes and utilization habits were average and in 39 (26.4%) they were poor. While 25% of students and 27% of doctors had good computer knowledge (P=.006), only 4.3% of the records officers demonstrated good knowledge. Forty percent of the medical students, 54% of the doctors and 27.7% of the health records officers showed good utilization habits and attitudes (P=.01). Conclusion: Only 26% of the respondents possess a computer, and only a small percentage of the respondents demonstrated good knowledge of computers and IT, hence the suboptimal utilization pattern. The fact that the health records officers by virtue of their profession had better training opportunities did not translate into better knowledge and utilization habits, hence the need for more structured training, which would form part of the curriculum. This would likely have more impact on the target population than ad hoc arrangements. PMID:15631969
Auto-Versioning Systems Image Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pezzaglia, Larry
2013-08-01
The av_sys_image_mgr utility provides an interface for the creation, manipulation, and analysis of system boot images for computer systems. It is primarily intended to provide a convenient method for managing the introduction of changes to boot images for long-lived production HPC systems.
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
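A hedged sketch of the reliability bookkeeping, assuming exponential component lifetimes and burn-completion times supplied by the performance model; the failure rate and times below are invented for illustration.

    import math

    def mission_reliability(burn_end_times, lam=2e-5):
        """P(success through correction i), one value per orbit correction."""
        return [math.exp(-lam * t) for t in burn_end_times]

    ends = [200.0, 1400.0, 5200.0, 9100.0]  # hours at end of each burn
    for i, r in enumerate(mission_reliability(ends), 1):
        print(f"correction {i}: R = {r:.5f}")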
ERIC Educational Resources Information Center
Jones, Richard M.
1981-01-01
A computer program that utilizes an optical scanning machine is used for ordering supplies in a Louisiana school system. The program provides savings in time and labor, more accurate data, and easy-to-use reports. (Author/MLF)
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and to provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate scalability problems with current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The paper describes a computer programming system designed to be used for methodology research as well as applications in structural optimization. The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied and problem-dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of the further development trends of this programming system.
IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS
The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...
2008-04-01
consumers and electric utilities in Arizona and Southern California. Twelve people, including five children, died as a result of the explosion. The... Modern electronics, communications, protection, control and computers have allowed the physical system to be utilized fully with ever smaller margins for error. Therefore, a relatively modest upset to the system can cause functional collapse. As the system grows in complexity and interdependence...
Computer Series, 36: Bits and Pieces, 13.
ERIC Educational Resources Information Center
Moore, John W.
1983-01-01
Eleven computer/calculator programs (most are available from authors) are described. Topics include visualizing molecular vibrations, dynamic nuclear magnetic resonance spectra of two-spin systems, programming utilities for Apple II Plus, gas chromatography simulation for TRS-80, infrared spectra analysis on a calculator, naming chemical…
The Power of Computer-aided Tomography to Investigate Marine Benthic Communities
Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...
Cutting tool form compensation system and method
Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.
1993-10-19
A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.
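A hedged sketch of the vision step in pixel units: threshold a backlit image so the tool silhouette contrasts with the background, collect boundary pixels, and fit a nominal center and radius. Real systems calibrate pixels to machine coordinates; all helper names here are hypothetical.

    def tool_edge_points(image, threshold=128):
        """Dark pixels with at least one light 4-neighbor."""
        h, w = len(image), len(image[0])
        pts = []
        for y in range(h):
            for x in range(w):
                if image[y][x] >= threshold:
                    continue  # background
                nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                if any(0 <= nx < w and 0 <= ny < h and image[ny][nx] >= threshold
                       for nx, ny in nbrs):
                    pts.append((x, y))
        return pts

    def fit_center(points):
        """Centroid plus mean radial distance as a crude tool-form fit."""
        n = len(points)
        cx = sum(p[0] for p in points) / n
        cy = sum(p[1] for p in points) / n
        r = sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in points) / n
        return (cx, cy), r

    img = [[255] * 8 for _ in range(8)]
    for y in range(3, 6):
        for x in range(3, 6):
            img[y][x] = 0                 # 3x3 dark "tool"
    print(fit_center(tool_edge_points(img)))

Deviations between such a fitted form and the assumed tool geometry are what the controller would use to alter the preprogrammed path.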
Cutting tool form compensation system and method
Barkman, William E.; Babelay, Jr., Edwin F.; Klages, Edward J.
1993-01-01
A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed.
Computer control of a microgravity mammalian cell bioreactor
NASA Technical Reports Server (NTRS)
Hall, William A.
1987-01-01
The initial steps taken in developing a completely menu-driven and totally automated computer control system for a bioreactor are discussed. This bioreactor is an electro-mechanical cell growth system requiring rigorous control of slowly changing parameters, many of which are so dynamically interactive that computer control is a necessity. The process computer will have two main functions. First, it will provide continuous environmental control utilizing low signal level transducers as inputs and high-powered control devices such as solenoids and motors as outputs. Second, it will provide continuous environmental monitoring, including mass data storage and periodic data dumps to a supervisory computer.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Factors affecting frequency and orbit utilization by high power transmission satellite systems.
NASA Technical Reports Server (NTRS)
Kuhns, P. W.; Miller, E. F.; O'Malley, T. A.
1972-01-01
The factors affecting the sharing of the geostationary orbit by high power (primarily television) satellite systems having the same or adjacent coverage areas and by satellites occupying the same orbit segment are examined and examples using the results of computer computations are given. The factors considered include: required protection ratio, receiver antenna patterns, relative transmitter power, transmitter antenna patterns, satellite grouping, and coverage pattern overlap. The results presented indicate the limits of system characteristics and orbit deployment which can result from mixing systems.
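One of the listed factors can be put in rough numbers: carrier-to-interference ratio versus orbital separation, using a simple 25*log10 reference-pattern approximation for receive-antenna discrimination. The protection ratio and beamwidth below are illustrative, not the study's values.

    import math

    def discrimination_db(sep_deg, beamwidth_deg):
        """Off-axis rejection ~ 25*log10(theta/theta_3dB) past the beamwidth."""
        if sep_deg <= beamwidth_deg:
            return 0.0
        return 25.0 * math.log10(sep_deg / beamwidth_deg)

    def c_over_i_db(delta_eirp_db, sep_deg, beamwidth_deg):
        # C/I at the receiver: EIRP advantage plus antenna discrimination.
        return delta_eirp_db + discrimination_db(sep_deg, beamwidth_deg)

    required = 30.0  # illustrative TV protection ratio, dB
    print(c_over_i_db(0.0, 4.0, 0.5))  # ~22.6 dB: spacing too tight
    print(c_over_i_db(0.0, 9.0, 0.5))  # ~31.4 dB: meets the ratio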
NASA Technical Reports Server (NTRS)
Hoadley, A. W.; Porter, A. J.
1990-01-01
This paper presents data from a preliminary analysis of the thermodynamic characteristics of the Airborne Information Management System (AIMS), a continuing design project at NASA Dryden. The analysis established the methods that will be applied to the actual AIMS boards as they become available. The paper also describes the AIMS liquid cooling system design and presents a thermodynamic computer model of the AIMS cooling system, together with an experimental validation of this model.
Factors affecting frequency and orbit utilization by high power transmission satellite systems
NASA Technical Reports Server (NTRS)
Kuhns, P. W.; Miller, E. F.; Malley, T. A.
1972-01-01
The factors affecting the sharing of the geostationary orbit by high power (primarily television) satellite systems having the same or adjacent coverage areas and by satellites occupying the same orbit segment are examined and examples using the results of computer computations are given. The factors considered include: required protection ratio, receiver antenna patterns, relative transmitter power, transmitter antenna patterns, satellite grouping, and coverage pattern overlap. The results presented indicated the limits of system characteristics and orbit deployment which can result from mixing systems.
Distributed geospatial model sharing based on open interoperability standards
Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin
2009-01-01
Numerous geospatial computational models have been developed based on sound principles and published in journals or presented at conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering the development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems for model use. Geospatial computational models are shared as model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
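As an illustration of the open-standards access path, here is a hedged sketch of a WPS 1.0.0 key-value Execute request; the endpoint URL, process identifier, and inputs are hypothetical placeholders rather than a real deployed service.

    import requests

    WPS_URL = "https://example.org/wps"  # hypothetical model-sharing endpoint

    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "wetland_hydrology_model",    # hypothetical process
        "datainputs": "basin_id=PPR-042;year=2005", # hypothetical inputs
    }
    resp = requests.get(WPS_URL, params=params, timeout=60)
    print(resp.status_code)
    print(resp.text[:200])  # ExecuteResponse XML: status and output refs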
SNS programming environment user's guide
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.
1992-01-01
The computing environment of the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex at NASA Langley is briefly described. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software common to all of these computers is described, including the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.
Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.
Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao
2018-02-01
Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-disciplinary field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained attention from scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, protocols for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
An approach to simultaneous interpretation of objects in complex structures so as to maximize a combined utility function is presented. Results are demonstrated for a computer software system that assigns meaning to regions in a segmented image, based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced.
Borisov, N; Franck, D; de Carlan, L; Laval, L
2002-08-01
The paper reports on a new utility for the development of computational phantoms for Monte Carlo calculations and data analysis for in vivo measurements of radionuclides deposited in tissues. The individual properties of each worker can be acquired for a rather precise geometric representation of his or her anatomy, which is particularly important for low energy gamma ray emitting sources such as thorium, uranium, plutonium and other actinides. The software discussed here enables automatic creation of an MCNP input data file based on scanning data. The utility includes segmentation of images obtained with either computed tomography or magnetic resonance imaging by distinguishing tissues according to their signal (brightness), and specification of the source and detector. In addition, a coupling of individual voxels within the tissue is used to reduce the memory demand and to increase the computational speed. The utility was tested for low energy emitters in plastic and biological tissues as well as for computed tomography and magnetic resonance imaging scanning information.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques utilized at Lewis Research Center to determine optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. The computer codes are NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Terahertz Computed Tomography of NASA Thermal Protection System Materials
NASA Technical Reports Server (NTRS)
Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.
2011-01-01
A terahertz axial computed tomography system has been developed that uses time domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot) without the safety concerns of x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks for the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and to better define embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.
Computational Science at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Romero, Nichols
2014-03-01
The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
NASA Astrophysics Data System (ADS)
Sinha, Vaibhav; Srivastava, Anjali; Koo Lee, Hyoung
2014-06-01
A novel method for non-destructive analysis has been developed using a neutron/X-ray combined computed tomography (NXCT) system at the Missouri University of Science and Technology Reactor (MSTR). This imaging system takes advantage of the fact that neutrons and X-rays have characteristically different interactions with the same materials. NXCT fuses the imaging capabilities of both systems at one location and allows instant evaluation for nondestructive testing (NDT) applications. This technique promises viable advances in the field of NDT. In this paper, the complete design criteria and procedures are provided. The described design criteria and procedures can be utilized effectively to design and develop an advanced combined computed tomography system. The successful operation of the high resolution X-ray and neutron computed tomography is demonstrated in this paper. The utility and importance of the NXCT system have been shown by nondestructive evaluation of various phantoms comprising different material, geometrical, structural, and compositional information. The concept of NXCT can be useful for concealed material detection, material characterization, investigation of complex geometries involving different atomic number materials, and real time imaging for in-situ studies.
GINSU: Guaranteed Internet Stack Utilization
2005-11-01
AFRL-IF-RS-TR-2005-383, Final Technical Report, November 2005. Trusted Information Systems, Inc. Sponsored by the Defense Advanced Research Projects Agency (DARPA). Approved for public release. Subject terms: computer architecture, data links, Internet, protocol stacks.
The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.
ERIC Educational Resources Information Center
Lach, Ivan J.
The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…
Historical Development of Simulation Models of Recreation Use
Jan W. van Wagtendonk; David N. Cole
2005-01-01
The potential utility of modeling as a park and wilderness management tool has been recognized for decades. Romesburg (1974) explored how mathematical decision modeling could be used to improve decisions about regulation of wilderness use. Cesario (1975) described a computer simulation modeling approach that utilized GPSS (General Purpose Systems Simulator), a...
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. A vast amount of information now flows into the digital domain worldwide. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems - Information retrieval; Computing methodologies - Massively parallel and high-performance simulations. PMID:29861701
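For orientation, the serial baseline of the technique is easy to state: an inverted index maps each term to the list of documents containing it. The sketch below is a minimal sequential Python rendition with a toy corpus; the paper's contribution is a linear-time, hash-based SPMD version of this loop on CUDA, not this code.

```python
from collections import defaultdict

# Minimal sequential inverted-index builder (illustrative corpus).
def invert(docs):
    index = defaultdict(list)          # term -> postings list of doc ids
    for doc_id, text in enumerate(docs):
        for term in sorted(set(text.lower().split())):
            index[term].append(doc_id)
    return index

corpus = ["GPU computing at scale", "parallel GPU document indexing"]
print(invert(corpus)["gpu"])           # -> [0, 1]
```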
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple server threshold-based infinite capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
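To make the hysteresis idea concrete, the toy discrete-time simulation below scales servers up past an upper queue threshold and down below a lower one, with a fixed setup delay standing in for non-instantaneous activation. All parameter values are illustrative; the paper derives steady-state probabilities analytically rather than by simulation.

```python
import random

random.seed(0)
ARRIVAL_P, SERVICE_P = 0.7, 0.2   # per-tick arrival / per-server service odds
UP, DOWN = 15, 5                  # hysteresis thresholds on queue length
SETUP_TICKS = 20                  # non-instantaneous server activation

queue, active, pending = 0, 1, [] # pending holds activation countdowns

for _ in range(10_000):
    queue += random.random() < ARRIVAL_P                     # Bernoulli arrival
    queue -= min(queue, sum(random.random() < SERVICE_P for _ in range(active)))

    pending = [t - 1 for t in pending]                       # setup progresses
    active += sum(t == 0 for t in pending)                   # setup finished
    pending = [t for t in pending if t > 0]

    if queue > UP and not pending:                           # scale up
        pending.append(SETUP_TICKS)
    elif queue < DOWN and active > 1:                        # scale down
        active -= 1

print(f"queue={queue}, active servers={active}")
```

The gap between UP and DOWN is what keeps the system from reacting to instantaneous load swings.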
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
A Proposal for a Computer Network for the Indonesian Air Force’s Remote Site Radar System
1989-03-01
This thesis proposes two alternatives for a preliminary design of a computer network to support this need. It suggests how existing communication resources such as telephone, radio-link, microwave-link, and satellite systems can be utilized or implemented to support the network.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high speed face recognition systems with speed-ups of over nine and three times over PCA and Parallel PCA, respectively.
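The EM route to PCA that this architecture parallelizes can be stated in two alternating matrix equations, avoiding the covariance matrix and eigendecomposition stages entirely. The NumPy sketch below is a serial illustration of that underlying idea (the well-known EM-PCA iteration), not the paper's parallel architecture.

```python
import numpy as np

def em_pca(Y, k, iters=100):
    """EM iteration for PCA: Y is (d, n) centered data; returns a (d, k)
    orthonormal basis spanning the top-k principal subspace without ever
    forming the d-by-d covariance matrix."""
    d, n = Y.shape
    W = np.random.randn(d, k)
    for _ in range(iters):
        X = np.linalg.solve(W.T @ W, W.T @ Y)   # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)    # M-step: updated basis
    return np.linalg.qr(W)[0]                   # orthonormalize

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 1000))
data -= data.mean(axis=1, keepdims=True)        # center the features
basis = em_pca(data, k=5)
features = basis.T @ data                       # reduced face features
```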
2011-01-01
Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom built virtual machines to distribute pre-packaged, pre-configured software. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105
Flow Control Research at NASA Langley in Support of High-Lift Augmentation
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.
2002-01-01
The paper describes efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control focused on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques, are discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits of, and issues with, implementation of the technology.
Internet SCADA Utilizing API's as Data Source
NASA Astrophysics Data System (ADS)
Robles, Rosslin John; Kim, Haeng-Kon; Kim, Tai-Hoon
An application programming interface, or API, is an interface implemented by a software program that enables it to interact with other software. Many companies provide free API services which can be utilized in control systems. SCADA is an example of a control system: it collects data from various sensors at a factory, plant, or other remote locations and then sends this data to a central computer which manages and controls the data. In this paper, we designed a scheme for monitoring weather conditions in an Internet SCADA environment utilizing data from external API services. The scheme was designed to double-check the weather information in SCADA.
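A minimal sketch of the double-check idea follows: fetch a reading from an external weather API and flag SCADA sensor values that disagree beyond a tolerance. The endpoint URL, JSON field name, and tolerance are hypothetical placeholders, as the paper does not name a specific service.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint and field name, for illustration only.
WEATHER_API = "https://api.example.com/weather?station=42"

def cross_check(scada_temp_c: float, tolerance_c: float = 3.0) -> bool:
    """Return True if the SCADA reading agrees with the external API."""
    with urlopen(WEATHER_API) as resp:
        api_temp_c = json.load(resp)["temperature_c"]
    return abs(scada_temp_c - api_temp_c) <= tolerance_c

# if not cross_check(sensor_reading): flag the reading for operator review
```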
Automated Instructional Management Systems (AIMS) Version III, Users Manual.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
This document sets forth the procedures necessary to utilize and understand the operating characteristics of the Automated Instructional Management System - Version III, a computer-based system for management of educational processes. Directions for initialization, including internal and user files; system and operational input requirements;…
Automation of the aircraft design process
NASA Technical Reports Server (NTRS)
Heldenfels, R. R.
1974-01-01
The increasing use of the computer to automate the aerospace product development and engineering process is examined with emphasis on structural analysis and design. Examples of systems of computer programs in aerospace and other industries are reviewed and related to the characteristics of aircraft design in its conceptual, preliminary, and detailed phases. Problems with current procedures are identified, and potential improvements from optimum utilization of integrated disciplinary computer programs by a man/computer team are indicated.
NASA Technical Reports Server (NTRS)
Fournelle, John; Carpenter, Paul
2006-01-01
Modern electron microprobe systems have become increasingly sophisticated. These systems utilize either UNIX or PC computer systems for measurement, automation, and data reduction. These systems have undergone major improvements in processing, storage, display, and communications, due to increased capabilities of hardware and software. Instrument specifications are typically utilized at the time of purchase and concentrate on hardware performance. The microanalysis community includes analysts, researchers, software developers, and manufacturers, who could benefit from an exchange of ideas and the ultimate development of core community specifications (CCS) for hardware and software components of microprobe instrumentation and operating systems.
Computer programs: Operational and mathematical, a compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.
Are Technology Interruptions Impacting Your Bottom Line? An Innovative Proposal for Change.
Ledbetter, Tamera; Shultz, Sarah; Beckham, Roxanne
2017-10-01
Nursing interruptions are a costly and dangerous variable in acute care hospitals. Malfunctioning technology equipment interrupts nursing care and prevents full utilization of computer safety systems intended to prevent patient care errors. This paper identifies an innovative approach to nursing interruptions related to computer and computer cart malfunctions. The impact on human resources is defined and outcome measures are proposed. A multifaceted proposal, based on a literature review, aimed at reducing nursing interruptions is presented. This proposal is expected to increase patient safety, as well as patient and nurse satisfaction. Setting: acute care hospitals utilizing electronic medical records and bar-coded medication administration technology. Participants: nurses, information technology staff, nursing informatics staff, and all leadership teams affected by technology problems and their proposed solutions. Literature from multiple fields was reviewed to evaluate research related to computer/computer cart failures and the approaches used to resolve these issues. Outcomes measured strategic goals related to patient safety and to nurse and patient satisfaction. Specific help desk metrics will demonstrate the effect of interventions. This paper addresses a gap in the literature and proposes practical and innovative solutions. A comprehensive computer and computer cart repair program is essential for patient safety, financial stewardship, and utilization of resources. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to operate a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighed based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan on implementing additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
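The inter-agent biasing can be pictured as one agent scaling another's feature-channel weights before the channels are combined into a saliency map. The sketch below uses random stand-in feature maps and illustrative weights; it is a schematic of the weighting step, not the Itti-Koch implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 48, 64
channels = ("color", "orientation", "intensity", "motion")
maps = {c: rng.random((h, w)) for c in channels}   # stand-in feature maps
weights = dict.fromkeys(channels, 1.0)

weights["motion"] *= 1.5    # excitation: a peer agent boosts motion
weights["color"] *= 0.7     # suppression: a peer agent damps color

S = sum(weights[c] * maps[c] for c in channels)    # combined saliency map
y, x = np.unravel_index(np.argmax(S), S.shape)     # most conspicuous spot
print(f"attend to pixel ({y}, {x})")
```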
Remote sensing of land-based voids using computer enhanced infrared thermography
NASA Astrophysics Data System (ADS)
Weil, Gary J.
1989-10-01
Experiments are described in which computer-enhanced infrared thermography techniques are used to detect and describe subsurface land-based voids, such as voids surrounding buried utility pipes, voids in concrete structures such as airport taxiways, abandoned buried utility storage tanks, and caves and underground shelters. Infrared thermography also helps to evaluate bridge deck systems, highway pavements, and garage concrete. The IR thermography techniques make it possible to survey large areas quickly and efficiently. The paper also surveys the advantages and limitations of thermographic testing in comparison with other forms of NDT.
Designing Interactive Learning Systems.
ERIC Educational Resources Information Center
Barker, Philip
1990-01-01
Describes multimedia, computer-based interactive learning systems that support various forms of individualized study. Highlights include design models; user interfaces; design guidelines; media utilization paradigms, including hypermedia and learner-controlled models; metaphors and myths; authoring tools; optical media; workstations; four case…
Utilization of Educationally Oriented Microcomputer Based Laboratories
ERIC Educational Resources Information Center
Fitzpatrick, Michael J.; Howard, James A.
1977-01-01
Describes one approach to supplying engineering and computer science educators with an economical portable digital systems laboratory centered around microprocessors. Expansion of the microcomputer based laboratory concept to include Learning Resource Aided Instruction (LRAI) systems is explored. (Author)
L-O-S-T: Logging Optimization Selection Technique
Jerry L. Koger; Dennis B. Webster
1984-01-01
L-O-S-T is a FORTRAN computer program developed to systematically quantify, analyze, and improve user selected harvesting methods. Harvesting times and costs are computed for road construction, landing construction, system move between landings, skidding, and trucking. A linear programming formulation utilizing the relationships among marginal analysis, isoquants, and...
Simulations of Probabilities for Quantum Computing
NASA Technical Reports Server (NTRS)
Zak, M.
1996-01-01
It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities, and their link to quantum computations, are discussed.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of the neuraminidase of H5N1 isolates (micro level) and of influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase at different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems in providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host range based clustering.
NASA Technical Reports Server (NTRS)
Swanson, T. D.; Ollendorf, S.
1979-01-01
This paper addresses the potential for enhanced solar system performance through sophisticated control of the collector loop flow rate. Computer simulations utilizing the TRNSYS solar energy program were performed to study the relative effect on system performance of eight specific control algorithms. Six of these control algorithms are of the proportional type: two are concave exponentials, two are simple linear functions, and two are convex exponentials. These six functions are typical of what might be expected from future, more advanced, controllers. The other two algorithms are of the on/off type and are thus typical of existing control devices. Results of extensive computer simulations utilizing actual weather data indicate that proportional control does not significantly improve system performance. However, it is shown that thermal stratification in the liquid storage tank may significantly improve performance.
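The two controller families compared in the simulations can be sketched as below; the deadband limits, the ΔT scale, and the exponents are illustrative stand-ins for the paper's eight specific algorithms, with exponents below, at, and above one giving the concave, linear, and convex proportional shapes.

```python
def on_off(dT, prev, on_at=8.0, off_at=2.0):
    """On/off pump control with a deadband (temperatures in deg C)."""
    if dT > on_at:
        return 1.0        # pump at full flow
    if dT < off_at:
        return 0.0        # pump off
    return prev           # hold previous state inside the deadband

def proportional(dT, dT_max=15.0, exponent=1.0):
    """Proportional flow control: exponent < 1 concave, 1 linear, > 1 convex."""
    x = min(max(dT / dT_max, 0.0), 1.0)   # normalized collector-storage dT
    return x ** exponent

for dT in (1.0, 5.0, 10.0, 20.0):
    print(dT, round(proportional(dT, exponent=0.5), 2),
              round(proportional(dT, exponent=2.0), 2))
```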
Designing learning management system interoperability in semantic web
NASA Astrophysics Data System (ADS)
Anistyasari, Y.; Sarno, R.; Rochmawati, N.
2018-01-01
The extensive adoption of learning management systems (LMS) has put the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in an LMS. The purpose of this study is to design learning management system interoperability in the semantic web, which currently has not been investigated deeply. Moodle is utilized to design the interoperability. Several database tables of Moodle are enhanced and some features are added. The semantic web interoperability is provided by exploiting ontology in content materials. The ontology is further utilized as a searching tool to match users' queries with available courses. It is concluded that LMS interoperability in the semantic web is feasible.
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself, we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.
A synchronized computational architecture for generalized bilateral control of robot arms
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
This paper describes a computational architecture for an interconnected high speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.
Pilot Test and Evaluation of a System of Computer-Managed Instruction.
ERIC Educational Resources Information Center
Spuck, Dennis W.; Bozeman, William C.
1978-01-01
The Wisconsin System for Instructional Management (WIS SIM) was evaluated on three dimensions (functioning, utilization, and effects) and the information gathered was classified into three types--actual, perceptual, and judgmental. The test demonstrates that a system supportive of an individualized system of education can be designed, developed,…
This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system under a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, based on dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
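A toy in-memory rendition of the dual-threshold policy with a two-stage pool is sketched below. The thresholds, quota, and pool mechanics are illustrative; the production system drives HTCondor queues and OpenStack APIs rather than this local state.

```python
from dataclasses import dataclass

LOW, HIGH = 0.2, 0.8    # dual utilization thresholds (illustrative)

@dataclass
class Pool:
    busy: int = 2
    idle: int = 0
    standby: int = 4    # stage two: pre-built VMs awaiting fast activation
    quota: int = 10

    @property
    def total(self):
        return self.busy + self.idle

def rebalance(pool: Pool, queued_jobs: int) -> None:
    util = pool.busy / max(pool.total, 1)
    if (util > HIGH or queued_jobs) and pool.total < pool.quota and pool.standby:
        pool.standby -= 1
        pool.idle += 1   # activating a standby VM skips slow provisioning
    elif util < LOW and pool.idle > 0:
        pool.idle -= 1   # shrink the pool when demand drops

pool = Pool()
rebalance(pool, queued_jobs=5)
print(pool)
```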
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
Visual Computing Environment Workshop
NASA Technical Reports Server (NTRS)
Lawrence, Charles (Compiler)
1998-01-01
The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.
NASA Technical Reports Server (NTRS)
Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dress, W.B.
Rosen's modeling relation is embedded in Popper's three worlds to provide a heuristic tool for model building and a guide for thinking about complex systems. The utility of this construct is demonstrated by suggesting a solution to the problem of pseudoscience and a resolution of the famous Bohr-Einstein debates. A theory of bizarre systems is presented by an analogy with entangled particles of quantum mechanics. This theory underscores the poverty of present-day computational systems (e.g., computers) for creating complex and bizarre entities by distinguishing between mechanism and organism.
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on an equidistribution law and utilizing the Non-Uniform Rational B-Spline (NURBS) surface representation for redistribution is presented. A weight function, utilizing a properly weighted boolean sum of various flow field characteristics, is developed. Computational examples are presented to demonstrate the success of this technique.
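For reference, the one-dimensional form of the equidistribution law underlying such methods, together with a representative weight, can be written as below; the particular blend of gradient and curvature terms is illustrative, since the paper's weight is a boolean sum over several flow-field characteristics.

```latex
% 1-D equidistribution: grid points x(\xi) carry equal weighted arc length.
\[
  w\bigl(x(\xi)\bigr)\,\frac{\partial x}{\partial \xi} = \text{const},
  \qquad
  w = 1 + \alpha\,\lvert q_x \rvert + \beta\,\lvert q_{xx} \rvert ,
\]
% where q is a flow variable (e.g., density) and \alpha, \beta control how
% strongly gradients and curvature attract grid points.
```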
The ICCB Computer Based Faculty and Staff Utilization Subsystem.
ERIC Educational Resources Information Center
Lach, Ivan J.
The Illinois Community College Board (ICCB) Faculty and Staff Utilization subsystem, a component of the ICCB management information system, was designed to produce meaningful and useful information reports for the analysis of faculty and staff, as a resource, in Illinois community colleges. Accommodating the complex nature of staffing at the 49…
A Fortran-90 Based Multiprecision System
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lasinski, T. A. (Technical Monitor)
1994-01-01
The author has developed a new version of his Fortran multiprecision computation system that is based on the Fortran-90 language. With this new approach, a translator program is not required: translation of Fortran code for multiprecision is accomplished by merely utilizing advanced features of Fortran-90, such as derived data types and operator extensions. This approach results in more reliable translation and also permits programmers of multiprecision applications to utilize the full power of the Fortran-90 language. Three multiprecision datatypes are supported in this system: multiprecision integer, real, and complex. All the usual Fortran conventions for mixed mode operations are supported, and many of the Fortran intrinsics, such as SIN, EXP and MOD, are supported with multiprecision arguments. This paper also briefly describes an interesting application of this software, wherein new number-theoretic identities have been discovered by means of multiprecision computations.
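The flavor of the derived-type-plus-operator-extension approach can be mimicked in Python by overloading operators on a wrapper over decimal arithmetic. This is only an analog of the idea for illustration; the actual system defines Fortran-90 derived types for multiprecision integer, real, and complex.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # working precision, in decimal digits

class MPReal:
    """Toy multiprecision real: operator extensions let existing formulas
    run unchanged at high precision, echoing the Fortran-90 technique."""
    def __init__(self, value):
        self.v = Decimal(str(value))
    @staticmethod
    def _val(x):
        return x.v if isinstance(x, MPReal) else Decimal(str(x))
    def __add__(self, other):     return MPReal(self.v + MPReal._val(other))
    def __mul__(self, other):     return MPReal(self.v * MPReal._val(other))
    def __truediv__(self, other): return MPReal(self.v / MPReal._val(other))
    def __repr__(self):           return f"MPReal({self.v})"

# Mixed-mode arithmetic, echoing the usual Fortran conventions:
print(MPReal(1) / 3 + MPReal("0.5"))   # 50-digit 0.8333...
```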
On the utility of threads for data parallel programming
NASA Technical Reports Server (NTRS)
Fahringer, Thomas; Haines, Matthew; Mehrotra, Piyush
1995-01-01
Threads provide a useful programming model for asynchronous behavior because of their ability to encapsulate units of work that can then be scheduled for execution at runtime, based on the dynamic state of a system. Recently, the threaded model has been applied to the domain of data parallel scientific codes, and initial reports indicate that the threaded model can produce performance gains over non-threaded approaches, primarily by overlapping useful computation with communication latency. However, overlapping computation with communication is possible without the benefit of threads if the communication system supports asynchronous primitives, and this comparison has not been made in previous papers. This paper provides a critical look at the utility of lightweight threads as applied to data parallel scientific programming.
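The alternative the paper weighs, overlap via asynchronous primitives alone, is easy to sketch: start a non-blocking communication, do local work, then wait. Below, the halo exchange is faked with a sleep on a single worker thread; in a real data parallel code this would be a non-blocking receive such as MPI_Irecv, with no application-level threading.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def exchange_halo():
    time.sleep(0.05)                  # stand-in for network latency
    return "neighbor boundary data"

def compute_interior():
    return sum(i * i for i in range(200_000))   # work needing no halo

with ThreadPoolExecutor(max_workers=1) as pool:
    halo_future = pool.submit(exchange_halo)    # post the "communication"
    interior = compute_interior()               # overlap with useful work
    halo = halo_future.result()                 # wait only for the remainder

print(interior, halo)
```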
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAWD (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Practical applications of interactive voice technologies: Some accomplishments and prospects
NASA Technical Reports Server (NTRS)
Grady, Michael W.; Hicklin, M. B.; Porter, J. E.
1977-01-01
A technology assessment of the application of computers and electronics to complex systems is presented. Three existing systems which utilize voice technology (speech recognition and speech generation) are described. Future directions in voice technology are also described.
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2014 CFR
2014-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2012 CFR
2012-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2013 CFR
2013-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2011 CFR
2011-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor; the active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo, and variable fields-of-view. Optical flow computations involve extraction, derotation, and matching of interest points from sequential frames of imagery for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
Development of a support software system for real-time HAL/S applications
NASA Technical Reports Server (NTRS)
Smith, R. S.
1984-01-01
Methodologies employed in defining and implementing a software support system for the HAL/S computer language for real-time operations on the Shuttle are detailed. Attention is also given to the management and validation techniques used during software development and software maintenance. Utilities developed to support the real-time operating conditions are described. With the support system produced on Cyber computers and executable code then processed through Cyber or PDP machines, the support system has production-level status and can serve as a model for other software development projects.
Analog system for computing sparse codes
Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell
2010-08-24
A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
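A minimal NumPy rendition of an LCA with a soft threshold (the l1 sparsity metric case) is given below; the step size, threshold, dictionary size, and iteration count are illustrative.

```python
import numpy as np

def soft(u, lam):
    """Soft threshold: the activation rule for the l1 sparsity metric."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, tau=0.01, steps=500):
    b = Phi.T @ x                              # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    for _ in range(steps):
        u += tau * (b - u - G @ soft(u, lam))  # leak plus local competition
    return soft(u, lam)                        # sparse code

rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary elements
a_true = rng.random(256) * (rng.random(256) < 0.05)   # sparse ground truth
code = lca(Phi @ a_true, Phi)
print("nonzero coefficients:", np.count_nonzero(code))
```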
[Research of controlling of smart home system based on P300 brain-computer interface].
Wang, Jinjia; Yang, Chengjie
2014-08-01
Using electroencephalogram (EEG) signals to control external devices has always been a research focus in the field of brain-computer interfaces (BCI). This is especially significant for people with disabilities who have lost the capacity for movement. In this paper, a P300-based BCI and microcontroller-based wireless radio frequency (RF) technology are utilized to design a smart home control system, which can be used to control household appliances, lighting systems, and security devices directly. Experimental results showed that the system was simple, reliable, and easy to popularize.
Physician Utilization of a Hospital Information System: A Computer Simulation Model
Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.
1988-01-01
The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.
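The paper's simulation model is not reproduced here; the following hypothetical Monte Carlo sketch merely illustrates the kind of order-entry cost-benefit comparison described, with all rates and costs assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_cost(mean_entry_min, error_rate, orders_per_year=500_000,
                wage_per_min=0.50, cost_per_error=25.0):
    """Expected annual cost of one order-entry method (all inputs assumed)."""
    entry_min = rng.normal(mean_entry_min, 0.2, orders_per_year).clip(min=0.1)
    errors = rng.random(orders_per_year) < error_rate
    return entry_min.sum() * wage_per_min + errors.sum() * cost_per_error

item_by_item = annual_cost(mean_entry_min=1.5, error_rate=0.02)
personal_sets = annual_cost(mean_entry_min=0.6, error_rate=0.008)
print(f"estimated annual saving: ${item_by_item - personal_sets:,.0f}")
```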
Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems
Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.
2014-01-01
The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
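As a rough illustration of performance-aware co-scheduling (not the authors' actual algorithm), a greedy policy can assign each task to whichever CPU core or GPU would finish it earliest given current loads and a per-device speedup; all numbers below are assumptions:

```python
def co_schedule(tasks, n_cpus=8, n_gpus=2, gpu_speedup=10.0):
    """Greedy performance-aware assignment: each task (given in CPU-seconds)
    runs on whichever worker would finish it earliest; a GPU runs it
    gpu_speedup times faster than one CPU core."""
    workers = [{"kind": "cpu", "free": 0.0} for _ in range(n_cpus)] + \
              [{"kind": "gpu", "free": 0.0} for _ in range(n_gpus)]
    for cpu_time in sorted(tasks, reverse=True):      # longest task first
        def finish(w):
            run = cpu_time / gpu_speedup if w["kind"] == "gpu" else cpu_time
            return w["free"] + run
        best = min(workers, key=finish)
        best["free"] = finish(best)
    return max(w["free"] for w in workers)            # makespan

print(co_schedule([30, 5, 5, 5, 12, 40, 3, 3, 8, 8]))
```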
Digital avionics design and reliability analyzer
NASA Technical Reports Server (NTRS)
1981-01-01
The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate-level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. There is a trade study which leads to the decision to specify a two-machine system, including an emulation computer connected to a general-purpose computer. There is also an evaluation of potential computers to serve as the emulation computer.
Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems
NASA Astrophysics Data System (ADS)
Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin
Computer-aided functional materials design accelerates the discovery of novel materials. This presentation covers our recent research advances in predicting the properties of the Ca-S-H system and in simulating and experimentally validating the properties of oxide-doped high entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results with the corresponding experimental data is presented. This research is partially supported by the NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and the LONI high performance computing time allocation loni mat bio7.
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
NASA Technical Reports Server (NTRS)
Coles, W. A.
1975-01-01
The CAD/CAM interactive computer graphics system was described; uses to which it has been put were shown, and current developments of the system were outlined. The system supports batch, time sharing, and fully interactive graphic processing. Engineers using the system may switch between these methods of data processing and problem solving to make the best use of the available resources. It is concluded that the introduction of on-line computing in the form of teletypes, storage tubes, and fully interactive graphics has resulted in large increases in productivity and reduced timescales in the geometric computing, numerical lofting and part programming areas, together with a greater utilization of the system in the technical departments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...
Code of Federal Regulations, 2013 CFR
2013-07-01
... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...
Code of Federal Regulations, 2012 CFR
2012-07-01
... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...
Code of Federal Regulations, 2011 CFR
2011-07-01
... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...
Code of Federal Regulations, 2014 CFR
2014-07-01
... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...
Compiling probabilistic, bio-inspired circuits on a field programmable analog array
Marr, Bo; Hasler, Jennifer
2014-01-01
A field programmable analog array (FPAA) is presented as an energy- and computational-efficiency engine: a mixed-mode processor on which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. It is shown that exponentially distributed random variables, and random variables of arbitrary distribution, can then be computed from Bernoulli variables. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach extends to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
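A software analogue of the idea, under the assumption that a Bernoulli(p) bit stream sampled every dt yields geometric waiting times approximating an Exponential(p/dt) variable, which can then drive a Gillespie step:

```python
import numpy as np

rng = np.random.default_rng(2)

def exponential_from_bernoulli(rate, dt=1e-3):
    """Count Bernoulli(p) trials until the first success; with p = rate*dt,
    the waiting time n*dt is approximately Exponential(rate)."""
    p = rate * dt
    n = rng.geometric(p)                             # trials to first success
    return n * dt

def gillespie_step(propensities):
    """One Gillespie SSA step driven by the Bernoulli-derived exponential."""
    total = sum(propensities)
    tau = exponential_from_bernoulli(total)          # time to next reaction
    probs = np.array(propensities) / total
    i = rng.choice(len(propensities), p=probs)       # which reaction fires
    return tau, i

print(gillespie_step([0.5, 1.5, 2.0]))
```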
Machine Learning in Ultrasound Computer-Aided Diagnostic Systems: A Survey
Huang, Qinghua; Zhang, Fan; Li, Xuelong
2018-01-01
Ultrasound imaging is one of the most common schemes for detecting disease in clinical practice. It offers many advantages, such as safety, convenience, and low cost. However, reading ultrasound images is not easy. To support clinicians' diagnoses and reduce doctors' workload, many ultrasound computer-aided diagnosis (CAD) systems have been proposed. In recent years, the success of deep learning in image classification and segmentation has led more and more scholars to realize the potential performance improvement that deep learning can bring to ultrasound CAD systems. This paper summarizes recent research on ultrasound CAD systems that utilize machine learning technology. The study divides ultrasound CAD systems into two categories: traditional systems, which employ handcrafted features, and deep learning systems. The major features and classifiers employed by traditional ultrasound CAD systems are introduced, and the newest deep learning applications are summarized. This paper will be useful for researchers who focus on ultrasound CAD systems. PMID:29687000
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora
2014-03-01
Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
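A hedged sketch of the modeling approach with scikit-learn; the features and labels below are synthetic placeholders, not the study's reader data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Placeholder data: rows are cases read by one trainee; columns are
# computer-extracted image features; y = 1 if the trainee erred on the case.
X = rng.standard_normal((200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(200) > 0).astype(int)

model = LogisticRegression().fit(X, y)        # per-trainee error model
p_err = model.predict_proba(X)[:, 1]          # predicted difficulty per case
print("AUC:", roc_auc_score(y, p_err))
# Cases with the highest predicted error probability would be queued
# as the most educationally useful for this trainee.
```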
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
CSNS computing environment Based on OpenStack
NASA Astrophysics Data System (ADS)
Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu
2017-10-01
Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization, and can provide computing services according to real need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage, and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.
Portable Map-Reduce Utility for MIT SuperCloud Environment
2015-09-17
The big data architecture, which is designed to address these challenges, is made up of computing resources, a scheduler, a central storage file system, databases, analytics software, and web interfaces [1]. These components are common to many big data and supercomputing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computationally expensive probabilistic simulation iterations to modify this solution toward the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian dual decomposition and a generalized Benders decomposition, are developed. The probabilistic second phase uses a generalized Benders decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
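The decomposition algorithms themselves are beyond a short example, but a toy deterministic capacity-expansion linear program (illustrative costs and loads, two technologies, two load blocks) conveys the kind of first-phase problem being solved:

```python
from scipy.optimize import linprog

# Two technologies (baseload, peaker) serving two load-duration blocks.
# Variables: [cap_b, cap_p, g_b1, g_p1, g_b2, g_p2]  (capacities, generation)
fixed = [180.0, 60.0]            # fixed cost per MW-yr (illustrative)
var = [0.02, 0.08]               # variable cost per MWh (illustrative)
hours = [6000.0, 2000.0]         # duration of each load block (h)
demand = [600.0, 1000.0]         # demand in each block (MW)

c = [fixed[0], fixed[1],
     var[0] * hours[0], var[1] * hours[0],
     var[0] * hours[1], var[1] * hours[1]]

# Generation limited by capacity; total generation must cover each block.
A_ub = [
    [-1, 0, 1, 0, 0, 0], [0, -1, 0, 1, 0, 0],   # g_x1 <= cap_x
    [-1, 0, 0, 0, 1, 0], [0, -1, 0, 0, 0, 1],   # g_x2 <= cap_x
    [0, 0, -1, -1, 0, 0], [0, 0, 0, 0, -1, -1], # g_b + g_p >= demand
]
b_ub = [0, 0, 0, 0, -demand[0], -demand[1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
print("capacities (MW):", res.x[:2].round(1), "annual cost:", round(res.fun))
```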
Design & Delivery of Training for a State-Wide Data Communication Network.
ERIC Educational Resources Information Center
Zacher, Candace M.
This report describes the process of development of training for agricultural research, teaching, and extension professionals in how to use the Fast Agricultural Communications Terminal (FACTS) computer network at Purdue University (Indiana), which is currently being upgraded in order to utilize the latest computer technology. The FACTS system is…
1990-09-01
Current versions of the ADATS have CATE systems installed, but the software is still under development by the radar manufacturer, Contraves Italiana, a subcontractor to Martin Marietta (USA). Contraves Italiana will deliver the final version of the software to Martin Marietta in 1991. Until then
Impact of the Shodan Computer Search Engine on Internet-facing Industrial Control System Devices
2014-03-27
The transparent bridge is designed using a Raspberry Pi configured with Linux iptables and bridge-utils to bridge the on-board Ethernet card and a second USB Ethernet adapter. A Raspberry Pi is a credit-card-sized single-board computer running a version of Debian Linux.
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
Information necessary to use the LOVES computer program in its existing state or to modify the program to include studies not properly handled by the basic model is provided. A users guide, a programmers manual, and several supporting appendices are included.
Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments
1985-12-01
transmission rate of the communication medium, transmission over a 56K bps line costs approximately 54r, and similarly, communication over a 9.6K... memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms
Enhancing data utilization through adoption of cloud-based data architectures (Invited Paper 211869)
NASA Astrophysics Data System (ADS)
Kearns, E. J.
2017-12-01
A traditional approach to data distribution and utilization of open government data involves continuously moving those data from a central government location to each potential user, who would then utilize them on their local computer systems. An alternate approach would be to bring those users to the open government data, where users would also have access to computing and analytics capabilities that would support data utilization. NOAA's Big Data Project is exploring such an alternate approach through an experimental collaboration with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium. As part of this ongoing experiment, NOAA is providing open data of interest which are freely hosted by the Big Data Project Collaborators, who provide a variety of cloud-based services and capabilities to enable utilization by data users. By the terms of the agreement, the Collaborators may charge for those value-added services and processing capacities to recover their costs to freely host the data and to generate profits if so desired. Initial results have shown sustained increases in data utilization from 2 to over 100 times previously-observed access patterns from traditional approaches. Significantly increased utilization speed as compared to the traditional approach has also been observed by NOAA data users who have volunteered their experiences on these cloud-based systems. The potential for implementing and sustaining the alternate cloud-based approach as part of a change in operational data utilization strategies will be discussed.
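A minimal sketch of the alternate approach, assuming anonymous access to a NOAA dataset hosted on AWS (the bucket and prefix layout shown are assumptions that may change):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) access to a publicly hosted NOAA dataset on AWS.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(Bucket="noaa-goes16",
                          Prefix="ABI-L2-CMIPF/2022/001/00/",  # assumed layout
                          MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```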
ERIC Educational Resources Information Center
Fussler, Herman H.; Payne, Charles T.
The project's second year (1967/68) was devoted to upgrading the computer operating software and programs to increase versatility and reliability. General conclusions about the program after 24 months of operation are that the project's objectives are sound and that effective utilization of computer-aided bibliographic data processing is essential…
Using CASE Software to Teach Undergraduates Systems Analysis and Design.
ERIC Educational Resources Information Center
Wilcox, Russell E.
1988-01-01
Describes the design and delivery of a college course for information system students utilizing a Computer-Aided Software Engineering program. Discusses class assignments, cooperative learning, student attitudes, and the advantages of using this software in the course. (CW)
The Mathematics of the Global Positioning System.
ERIC Educational Resources Information Center
Nord, Gail D.; Jabon, David; Nord, John
1997-01-01
Presents an activity that illustrates the application of mathematics to modern navigation and utilizes the Global Positioning System (GPS). GPS is a constellation of 24 satellites that enables receivers to compute their position anywhere on the earth with great accuracy. (DDR)
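The underlying mathematics: a receiver solves for its position and clock bias from satellite pseudoranges, e.g., by Gauss-Newton iteration on ||x - s_i|| + b = rho_i. A sketch with synthetic satellite positions (not part of the original activity):

```python
import numpy as np

def gps_solve(sats, pseudoranges, n_iter=10):
    """Gauss-Newton solution of ||x - s_i|| + b = rho_i for position x
    and receiver clock bias b (in distance units)."""
    est = np.zeros(4)                       # [x, y, z, b]
    for _ in range(n_iter):
        diff = est[:3] - sats               # (n_sats, 3)
        dist = np.linalg.norm(diff, axis=1)
        resid = pseudoranges - (dist + est[3])
        J = np.hstack([diff / dist[:, None], np.ones((len(sats), 1))])
        est += np.linalg.lstsq(J, resid, rcond=None)[0]
    return est

# Synthetic example: 4 satellites ~20,000 km up, true receiver at the
# origin, true clock bias of 300 m.
sats = np.array([[20e6, 0, 0], [0, 20e6, 0], [0, 0, 20e6], [12e6, 12e6, 12e6]])
rho = np.linalg.norm(sats, axis=1) + 300.0
print(gps_solve(sats, rho))                 # approx [0, 0, 0, 300]
```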
NASA Technical Reports Server (NTRS)
Deckman, G.; Rousseau, J. (Editor)
1973-01-01
The Wash Water Recovery System (WWRS) is intended for use in processing shower bath water onboard a spacecraft. The WWRS utilizes flash evaporation, vapor compression, and pyrolytic reaction to process the wash water and allow recovery of potable water. Wash water flashing and foaming characteristics are evaluated, physical properties of concentrated wash water are determined, and a long-term feasibility study on the system is performed. In addition, a computer analysis of the system and a detailed design of a 10 lb/hr vortex-type water vapor compressor were completed. The computer analysis also sized the remaining system components on the basis of the new vortex compressor design.
An integrated decision support system for TRAC: A proposal
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Optimal allocation and usage of resources is a key to effective management. Resources of concern to TRAC are: manpower (PSY), money (travel, contracts), computing, data, models, etc. Management activities of TRAC include planning, programming, tasking, monitoring, updating, and coordinating. Existing systems are insufficient: they are not completely automated, they are manpower intensive, and the potential for data inconsistency exists. A system is proposed that integrates all project management activities of TRAC through the development of sophisticated software and by utilizing the existing computing systems and network resources. The systems integration proposal is examined in detail.
Interactive Cable Television. Final Report.
ERIC Educational Resources Information Center
Active Learning Systems, Inc., Minneapolis, MN.
This report describes an interactive video system developed by Active Learning Systems which utilizes a cable television (TV) network as its delivery system to transmit computer literacy lessons to high school and college students. The system consists of an IBM PC, Pioneer LDV 4000 videodisc player, and Whitney Supercircuit set up at the head end…
NASA Technical Reports Server (NTRS)
Sainsbury-Carter, J. B.; Conaway, J. H.
1973-01-01
The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.
Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K
2015-05-22
Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing, a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
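The physical device is not reproducible in software, but the reservoir-computing scheme it targets can be sketched: a fixed random recurrent network stands in for the network's nonlinear dynamics, and only a linear readout is trained (all sizes and constants below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fixed random reservoir (software stand-in for the atomic switch network).
n_res, n_in = 100, 1
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1
W_in = rng.standard_normal((n_res, n_in))

def run_reservoir(u):
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

# Task: learn to reproduce a delayed copy of the input signal.
u = rng.uniform(-1, 1, 1000)
target = np.roll(u, 5)
S = run_reservoir(u)
ridge = 1e-3                                   # ridge-regression readout
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
pred = S @ W_out
print("train MSE:", np.mean((pred[50:] - target[50:]) ** 2))
```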
NASA Technical Reports Server (NTRS)
Fernandez, J. P.; Mills, D.
1991-01-01
A Vibroacoustic Payload Environment Prediction System (VAPEPS) Management Center was established at the JPL. The center utilizes the VAPEPS software package to manage a data base of Space Shuttle and expendable launch vehicle payload flight and ground test data. Remote terminal access over telephone lines to the computer system, where the program resides, was established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the VAPEPS Management Center and contains instructions for utilizing the resources of the center.
Interfacing laboratory instruments to multiuser, virtual memory computers
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Stang, David B.; Roth, Don J.
1989-01-01
Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.
The Cost of CAI: A Matter of Assumptions.
ERIC Educational Resources Information Center
Kearsley, Greg P.
Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…
Intelligent Systems For Aerospace Engineering: An Overview
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.
2003-01-01
Intelligent systems are nature-inspired, mathematically sound, computationally intensive problem solving tools and methodologies that have become extremely important for advancing the current trends in information technology. Artificially intelligent systems currently utilize computers to emulate various faculties of human intelligence and biological metaphors. They use a combination of symbolic and sub-symbolic systems capable of evolving human cognitive skills and intelligence, not just systems capable of doing things humans do not do well. Intelligent systems are ideally suited for tasks such as search and optimization, pattern recognition and matching, planning, uncertainty management, control, and adaptation. In this paper, the intelligent system technologies and their application potential are highlighted via several examples.
ERIC Educational Resources Information Center
Beamish, Eric; And Others
This resource guide contains over 300 entries which are available through the Optimum Utilization of Resources (OUR's) exchange system. The entries describe learning materials, such as slides, video tapes, audio tapes, films, print material, and computer assisted instructional programs, which have been developed primarily by faculty of the…
Design Tools for Evaluating Multiprocessor Programs
1976-07-01
than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the... What measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components. [Browne 73, Lehman 66] 6. How can the system be scheduled...
On-board computer progress in development of A 310 flight testing program
NASA Technical Reports Server (NTRS)
Reau, P.
1981-01-01
Onboard computer progress in development of an Airbus A 310 flight testing program is described. Minicomputers were installed onboard three A 310 airplanes in 1979 in order to: (1) assure the flight safety by exercising a limit check of a given set of parameters; (2) improve the efficiency of flight tests and allow cost reduction; and (3) perform test analysis on an external basis by utilizing onboard flight types. The following program considerations are discussed: (1) conclusions based on simulation of an onboard computer system; (2) brief descriptions of A 310 airborne computer equipment, specifically the onboard universal calculator (CUB) consisting of a ROLM 1666 system and visualization system using an AFIGRAF CRT; (3) the ground system and flight information inputs; and (4) specifications and execution priorities for temporary and permanent programs.
NASA Technical Reports Server (NTRS)
Drake, Jeffrey T.; Prasad, Nadipuram R.
1999-01-01
This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act in producing the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely, fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
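A hedged mpi4py sketch of one coarse-grained parallelization scheme: distributing independent contingency runs across MPI ranks (a simpler scheme than the paper's kernel-level parallelization; the swing-equation integrator is a crude placeholder):

```python
# Run with: mpiexec -n 4 python contingencies.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def simulate(contingency_id, t_end=5.0, dt=1e-3):
    """Placeholder dynamic simulation: one-machine swing equation,
    forward-Euler integrated (illustrative constants only)."""
    delta = 0.5 + 0.01 * contingency_id   # initial rotor angle (rad)
    omega = 0.0                           # speed deviation
    H, D, Pm, Pmax = 3.0, 1.0, 0.8, 1.8
    for _ in range(int(t_end / dt)):
        domega = (Pm - Pmax * np.sin(delta) - D * omega) / (2 * H)
        delta += dt * omega
        omega += dt * domega
    return contingency_id, delta

# Static round-robin split of the contingency list over MPI ranks.
my_results = [simulate(c) for c in range(100) if c % size == rank]
all_results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for part in all_results for r in part]
    print(len(flat), "contingencies simulated")
```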
Importance of balanced architectures in the design of high-performance imaging systems
NASA Astrophysics Data System (ADS)
Sgro, Joseph A.; Stanton, Paul C.
1999-03-01
Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from systems based on replicated redundant hardware, and the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
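An example of reliability modeling in the Markov sense: a duplex system with failure rate lambda and repair rate mu, with reliability obtained from the matrix exponential of the generator (the rates are assumed, and real avionics models have far larger state spaces):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-4, 1e-2      # per-hour failure and repair rates (assumed)

# States: 0 = both units up, 1 = one up / one in repair, 2 = system failed.
# State 2 is absorbing, so P(t in {0, 1}) is the system reliability R(t).
Q = np.array([
    [-2 * lam,        2 * lam,  0.0],
    [      mu, -(mu + lam),     lam],
    [     0.0,         0.0,     0.0],
])

p0 = np.array([1.0, 0.0, 0.0])
for t in (1e3, 1e4, 1e5):
    p = p0 @ expm(Q * t)                  # state distribution at time t
    print(f"R({t:g} h) = {1 - p[2]:.6f}")
```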
Remote information service access system based on a client-server-service model
Konrad, Allan M.
1996-01-01
A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
Remote information service access system based on a client-server-service model
Konrad, A.M.
1997-12-09
A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.
Remote information service access system based on a client-server-service model
Konrad, Allan M.
1999-01-01
A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
Remote information service access system based on a client-server-service model
Konrad, A.M.
1996-08-06
A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.
Remote information service access system based on a client-server-service model
Konrad, Allan M.
1997-01-01
A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
Variable Generation Power Forecasting as a Big Data Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
ERIC Educational Resources Information Center
Kurtz, Peter; And Others
This report is concerned with the implementation of two interrelated computer systems: an automatic document analysis and classification package, and an on-line interactive information retrieval system which utilizes the information gathered during the automatic classification phase. Well-known techniques developed by Salton and Dennis have been…
[Soft- and hardware support for the setup for computer tracking of radiation teletherapy].
Tarutin, I G; Piliavets, V I; Strakh, A G; Minenko, V F; Golubovskiĭ, A I
1983-06-01
A hardware and software computer-assisted complex has been developed for gamma-beam therapy. The complex includes all radiotherapeutic units, including a program-controlled Siemens betatron with an energy of 42 MeV, an ES-1022 computer, a Medigraf system for the processing of graphic information, a Mars-256 system for monitoring the homogeneity of the dose-rate distribution over the irradiation field, and a package of mathematical programs to select a plan of irradiation for various tumor sites. The prospects of utilizing such complexes in the dosimetric support of radiation therapy are discussed.
Operating a Geiger Müller tube using a PC sound card
NASA Astrophysics Data System (ADS)
Azooz, A. A.
2009-01-01
In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Müller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the computer sound card via the line-in port. All standard GM experiments, pulse shape, and statistical analysis experiments can be carried out using this system. A new visual demonstration of dead-time effects is also presented.
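A Python analogue of the idea using the sounddevice package in place of MATLAB's sound-card access; the threshold and recording length are assumptions:

```python
import numpy as np
import sounddevice as sd   # records from the sound card's line-in

fs, seconds = 44100, 30
rec = sd.rec(int(seconds * fs), samplerate=fs, channels=1, dtype="float32")
sd.wait()                  # block until the recording finishes
sig = np.abs(rec[:, 0])

# Count upward threshold crossings; each crossing is one GM pulse.
thr = 0.2                  # assumed; set above the noise floor
pulses = np.count_nonzero((sig[1:] > thr) & (sig[:-1] <= thr))
print(f"{pulses} counts in {seconds}s -> {pulses / seconds:.2f} cps")
```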
Computer-assisted instruction and diagnosis of radiographic findings.
Harper, D; Butler, C; Hodder, R; Allman, R; Woods, J; Riordan, D
1984-04-01
Recent advances in computer technology, including high bit-density storage, digital imaging, and the ability to interface microprocessors with videodisk, create enormous opportunities in the field of medical education. This program, utilizing a personal computer, videodisk, BASIC language, a linked textfile system, and a triangulation approach to the interpretation of radiographs developed by Dr. W. L. Thompson, can enable the user to engage in a user-friendly, dynamic teaching program in radiology, applicable to various levels of expertise. Advantages include a relatively more compact and inexpensive system with rapid access and ease of revision which requires little instruction to the user.
Costs of cloud computing for a biometry department. A case study.
Knaus, J; Hieke, S; Binder, H; Schwarzer, G
2013-01-01
"Cloud" computing providers, such as the Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud dependent on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into ac count, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.
Energy conservation and analysis and evaluation. [specifically at Slidell Computer Complex
NASA Technical Reports Server (NTRS)
1976-01-01
The survey assembled and made recommendations directed at conserving utilities and reducing the use of energy at the Slidell Computer Complex. Specific items included were: (1) scheduling and controlling the use of gas and electricity, (2) building modifications to reduce energy, (3) replacement of old, inefficient equipment, (4) modifications to control systems, (5) evaluations of economizer cycles in HVAC systems, and (6) corrective settings for thermostats, ductstats, and other temperature and pressure control devices.
Integrating computer programs for engineering analysis and design
NASA Technical Reports Server (NTRS)
Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.
1983-01-01
The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
Characterization of real-time computers
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.
1984-01-01
A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have been traditionally designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent to the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications - the number-power tradeoff and fault-masking and synchronization.
Workload Characterization of CFD Applications Using Partial Differential Equation Solvers
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Workload characterization is used for modeling and evaluation of computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high-performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which form the basis of the characterization. Our approach yields a coarse-grain resource utilization behavior that is then applied to performance modeling and evaluation of distributed high-performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high-performance computing platforms and is useful for tuning these applications.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
New space sensor and mesoscale data analysis
NASA Technical Reports Server (NTRS)
Hickey, John S.
1987-01-01
The developed Earth Science and Application Division (ESAD) system/software provides the research scientist with the following capabilities: an extensive data base management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing the Apple III and IBM PC workstations integrated into the ESAD computer system; and a local and remote smart-terminal capability which provides color video, graphics, and Laserjet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.
2016-09-01
Naval Postgraduate School, Monterey, CA 93943-5000. …institutions, and banking systems. The array of responsibilities and the cybersecurity threat landscape make state- and local-level computer networks fertile ground for the cyber adversary. This research focuses on the threat to SLTT (state, local, tribal, and territorial) computer networks and how…
A Systems Biology Approach to Heat Stress, Heat Injury and Heat Stroke
2015-01-01
Winkler et al., “Computational lipidology: predicting lipoprotein density profiles in human blood plasma,” PLoS Comput Biol, 4(5), e1000079 (2008). …other organs at high risk for injury, such as liver and kidney [24, 25]. 2.1 Utility of the computational model: molecular indicators of heat-induced heart injury had a large shift in relative abundance of proteins with high supersaturation scores, suggesting increased abundance of…
Collective Properties of Neural Systems and Their Relation to Other Physical Models
1988-08-05
…been computed explicitly. This has been achieved algorithmically by utilizing methods introduced earlier. It should be emphasized that in addition to… Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606, Japan, and E. Barouch, Department of Mathematics and Computer Science, Clarkson University, where this work was collaborated. References: 1. Babu, S. V. and Barouch, E., An exact solution for the…
Terahertz computed tomography of NASA thermal protection system materials
NASA Astrophysics Data System (ADS)
Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.
2012-05-01
A terahertz (THz) axial computed tomography system has been developed that uses time-domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 m³ (1 ft³) without the safety concerns associated with x-ray computed tomography. In this study, the THz-CT system was evaluated for its ability to detect and characterize (1) an embedded void in Space Shuttle external fuel tank thermal protection system (TPS) foam material and (2) impact damage in a TPS configuration under consideration for use in NASA's multi-purpose Orion crew module (CM). Micro-focus x-ray CT is utilized to characterize the flaws and provide a baseline against which to compare the THz CT results.
The Radio Frequency Health Node Wireless Sensor System
NASA Technical Reports Server (NTRS)
Valencia, J. Emilio; Stanley, Priscilla C.; Mackey, Paul J.
2009-01-01
The Radio Frequency Health Node (RFHN) wireless sensor system differs from other wireless sensor systems in ways originally intended to enhance utility as an instrumentation system for a spacecraft. The RFHN can also be adapted to use in terrestrial applications in which there are requirements for operational flexibility and integrability into higher-level instrumentation and data acquisition systems. The heart of the system is the RFHN, which is a unit that passes commands and data between (1) one or more commercially available wireless sensor units (optionally, also including wired sensor units) and (2) command and data interfaces with a local control computer that may be part of the spacecraft or other engineering system in which the wireless sensor system is installed. In turn, the local control computer can be in radio or wire communication with a remote control computer that may be part of a higher-level system. The remote control computer, acting via the local control computer and the RFHN, can not only monitor readout data from the sensor units but can also remotely configure (program or reprogram) the RFHN and the sensor units during operation. In a spacecraft application, the RFHN and the sensor units can also be configured more directly, prior to launch, via a serial interface that includes an umbilical cable between the spacecraft and ground support equipment. In either case, the RFHN wireless sensor system has the flexibility to be configured, as required, with different numbers and types of sensors for different applications. The RFHN can be used to effect real-time transfer of data from, and commands to, the wireless sensor units. It can also store data for later retrieval by an external computer. The RFHN communicates with the wireless sensor units via a radio transceiver module. The modular design of the RFHN makes it possible to add radio transceiver modules as needed to accommodate additional sets of wireless sensor units. The RFHN includes a core module that performs generic computer functions, including management of power and input, output, processing, and storage of data. In a typical application, the processing capabilities in the RFHN are utilized to perform preprocessing, trending, and fusion of sensor data. The core module also serves as the unit through which the remote control computer configures the sensor units and the rest of the RFHN.
High data rate modem simulation for the space station multiple-access communications system
NASA Technical Reports Server (NTRS)
Horan, Stephen
1987-01-01
The communications system for the space station will require a space-based multiple-access component to provide communications between the space-based program elements and the station. A study was undertaken to investigate two of the concerns of this multiple-access system: issues related to frequency spectrum utilization, and the possibilities for higher-order (than QPSK) modulation schemes for use in possible modulators and demodulators (modems). As a result of the investigation, many key questions about frequency spectrum utilization were raised; at this point, frequency spectrum utilization is seen as an area requiring further work. Simulations were conducted using a computer-aided communications system design package to provide a straw-man modem structure for both QPSK and 8-PSK channels.
QF/PQM-102 Target System, Project PAVE DEUCE
1975-05-01
Actual scores are computed within the dead zone by mathematical computation utilizing missile velocity and time within the zone. Evaluation of… Instability was traced to a possible malfunction of the autopilot rate-sensing gyro, and it was replaced for the re-fly of Record Flight No. 14.
Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues
NASA Astrophysics Data System (ADS)
Chakravarthy, Srinivas R.; Rumyantsev, Alexander
2018-03-01
Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids make it possible to utilize the idle computer resources of an enterprise or community by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time decrease the expected latency for users. The crucial parameter for optimization in both cloud computing and desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
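A minimal Monte Carlo sketch of the replication trade-off in a Fork-Join-style system: each request is dispatched as r replicas and completes when the fastest replica finishes. Exponential service here is a stand-in for the paper's phase-type/Weibull models, queueing delay is ignored, and all rates are assumptions:

```python
import random

def mean_latency(replicas, service_rate=1.0, trials=100_000):
    """Empirical mean completion time of the fastest of r replicas."""
    total = 0.0
    for _ in range(trials):
        # fastest of r i.i.d. exponential service times
        total += min(random.expovariate(service_rate) for _ in range(replicas))
    return total / trials

for r in (1, 2, 3, 4):
    # the minimum of r Exp(mu) variables is Exp(r*mu), so the empirical
    # mean should track 1/(r*mu); more replicas cut latency but burn capacity
    print(f"r={r}: simulated {mean_latency(r):.3f}, analytic {1.0 / r:.3f}")
```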
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
2014-08-20
The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
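A schematic sketch of the hashing idea (not the authors' code): a coarse, reduced mixture is scored first, and full component likelihoods are computed only for components hashed under the top coarse nodes, trading accuracy for speed. The layer sizes, the crude mean-averaging "reduction", and the spherical unit-variance Gaussians are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, FULL, COARSE, FANOUT = 20, 64, 8, 8   # toy sizes; FULL = COARSE * FANOUT

full_means = rng.normal(size=(FULL, D))
coarse_means = full_means.reshape(COARSE, FANOUT, D).mean(axis=1)  # crude stand-in for mixture reduction

def log_gauss(x, means):
    # spherical unit-variance Gaussians keep the sketch short
    return -0.5 * ((x - means) ** 2).sum(axis=1)

def shortlist_scores(x, top=2):
    # step 1: score the coarse layer and keep the 'top' best nodes
    best = np.argsort(log_gauss(x, coarse_means))[-top:]
    # step 2: evaluate only the full components hashed under those nodes
    idx = np.concatenate([np.arange(b * FANOUT, (b + 1) * FANOUT) for b in best])
    return idx, log_gauss(x, full_means[idx])

x = rng.normal(size=D)
idx, scores = shortlist_scores(x)
print(f"evaluated {len(idx)}/{FULL} components; best = {idx[np.argmax(scores)]}")
```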
Capacity utilization study for aviation security cargo inspection queuing system
NASA Astrophysics Data System (ADS)
Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl
2010-04-01
In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and the number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as the number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
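A back-of-envelope sketch of the kind of capacity metrics the study reports; all station counts, arrival rates, and service rates below are invented for illustration (the paper's actual model is a validated discrete-event simulation, not this closed-form shortcut):

```python
def capacity_metrics(arrival_rate, stations, service_rate):
    """arrival_rate: pallets/h offered; stations: machines; service_rate: pallets/h each."""
    capacity = stations * service_rate             # pallets/hour the system can absorb
    utilization = min(arrival_rate / capacity, 1.0)
    throughput = min(arrival_rate, capacity)       # saturates at capacity
    residual = max(capacity - arrival_rate, 0.0)
    return capacity, utilization, throughput, residual

for etd_machines in (2, 3, 4):
    cap, util, thr, res = capacity_metrics(arrival_rate=45, stations=etd_machines,
                                           service_rate=15)
    print(f"{etd_machines} ETD machines: capacity {cap:.0f}/h, utilization {util:.0%}, "
          f"throughput {thr:.0f}/h, residual {res:.0f}/h")
```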
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
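A sketch of the Hankel matrices at the heart of subspace identification, built here from the Markov parameters of a toy first-order system (the system, sizes, and tolerances are arbitrary; the paper works from measured input-output data):

```python
import numpy as np

def block_hankel(signal, rows):
    """Stack 'rows' shifted copies of a 1-D signal as Hankel-matrix rows."""
    cols = len(signal) - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

# Markov parameters of the toy system x[k+1] = 0.9 x[k] + u[k], y = x:
# h[k] = C A^k B = 0.9^k, a geometric sequence
h = np.array([0.9 ** k for k in range(40)])
H = block_hankel(h, rows=10)

# Low rank of the Hankel matrix corresponds to low model order; this rank is
# the quantity the nuclear-norm surrogate (or here, a Tabu Search over the
# minimization problem) tries to drive down.
print(H.shape, np.linalg.matrix_rank(H))   # (10, 31) 1 -> a first-order model suffices
```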
Earth resources sensor data handling system: NASA JSC version
NASA Technical Reports Server (NTRS)
1974-01-01
The design of the NASA JSC data handling system is presented. Data acquisition parameters, computer display formats, and the flow of image data through the system are discussed, with recommendations for improving system efficiency, along with modifications to existing data handling procedures that will allow the utilization of data duplication techniques and the accurate identification of imagery.
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.
1977-01-01
New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that daily runs hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements, and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers, and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resources topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which efficiently allows multi-terabyte tasks to run without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics groups' data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface, and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Estimating and validating harvesting system production through computer simulation
John E. Baumgras; Curt C. Hassler; Chris B. LeDoux
1993-01-01
A Ground Based Harvesting System Simulation model (GB-SIM) has been developed to estimate stump-to-truck production rates and multiproduct yields for conventional ground-based timber harvesting systems in Appalachian hardwood stands. Simulation results reflect inputs that define harvest site and timber stand attributes, wood utilization options, and key attributes of...
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1976-01-01
The functions and operating rules of the SPAR system, which is a group of computer programs used primarily to perform stress, buckling, and vibrational analyses of linear finite element systems, were given. The following subject areas were discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.
What Is Anonymous?: A Case Study of an Information Systems Hacker Activist Collective Movement
ERIC Educational Resources Information Center
Pendergrass, William Stanley
2013-01-01
Interconnected computer information systems have become indispensable aspects of modern life. All forms of communication, education, finance, commerce and identity utilize these systems creating a permanent personal presence for all of us within this digital world. Individuals who reveal or threaten to reveal these personal identities for various…
CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.
Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan
2017-06-24
The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a great deal of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the information about users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and is generally unknown before submission, which is unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given a heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn^2) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms state-of-the-art software. CMSA focuses on the multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerating multiple sequence alignment. Besides, adopting the co-run computation model can maximize entire-system utilization significantly. The source code is available at https://github.com/wangvsa/CMSA .
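An illustrative sketch of an O(m·n) center-sequence heuristic in the spirit of CMSA's improvement (this is not CMSA's actual bitmap algorithm): instead of computing all pairwise distances, each of the m sequences of length ~n is scored against a pooled k-mer profile built in a single pass:

```python
from collections import Counter

def kmers(seq, k=4):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def pick_center(sequences, k=4):
    pooled = Counter()                       # one pass over all sequences: O(m*n)
    for s in sequences:
        pooled.update(set(kmers(s, k)))      # count how many sequences contain each k-mer
    # the center is the sequence sharing the most k-mers with the collection
    return max(sequences, key=lambda s: sum(pooled[m] for m in set(kmers(s, k))))

seqs = ["ACGTACGTGA", "ACGTACGTGT", "TTGCACGTGA", "ACGTACGAGA"]
print(pick_center(seqs))
```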
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
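For orientation, bounds of this general shape appear throughout the clock-synchronization literature (the form below is a generic illustration, not the SIFT theorem itself):

$$\delta_{\max} \;\lesssim\; \epsilon + \rho R,$$

where $\epsilon$ is the upper bound on the clock read error, $\rho$ the maximum drift rate between nonfaulty clocks, and $R$ the resynchronization interval. The stochastic element validated experimentally in the abstract is precisely the probability that the bound on $\epsilon$ is exceeded.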
LCP- LIFETIME COST AND PERFORMANCE MODEL FOR DISTRIBUTED PHOTOVOLTAIC SYSTEMS
NASA Technical Reports Server (NTRS)
Borden, C. S.
1994-01-01
The Lifetime Cost and Performance (LCP) Model was developed to assist in the assessment of Photovoltaic (PV) system design options. LCP is a simulation of the performance, cost, and revenue streams associated with distributed PV power systems. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the PV application. User-specified input parameters are available to describe PV system characteristics, site climatic conditions, utility purchase and sellback rate structures, discount and escalation rates, construction timing, and lifetime of the system. Such details as PV array orientation and tilt angle, PV module and balance-of-system performance attributes, and the mode of utility interconnection are user-specified. LCP assumes that the distributed PV system is utility grid interactive without dedicated electrical storage. In combination with a suitable economic model, LCP can provide an estimate of the expected net present worth of a PV system to the owner, as compared to electricity purchased from a utility grid. Similarly, LCP might be used to perform sensitivity analyses to identify those PV system parameters having significant impact on net worth. The user describes the PV system configuration to LCP via the basic electrical components. The module is the smallest entity in the PV system which is modeled. A PV module is defined in the simulation by its short circuit current, which varies over the system lifetime due to degradation and failure. Modules are wired in series to form a branch circuit. Bypass diodes are allowed between modules in the branch circuits. Branch circuits are then connected in parallel to form a bus. A collection of buses is connected in parallel to form an increment to capacity of the system. By choosing the appropriate series-parallel wiring design, the user can specify the current, voltage, and reliability characteristics of the system. LCP simulation of system performance is site-specific and follows a three-step procedure. First the hourly power produced by the PV system is computed using a selected year's insolation and temperature profile. For this step it is assumed that there are no module failures or degradation. Next, the monthly simulation is performed involving a month to month progression through the lifetime of the system. In this step, the effects of degradation, failure, dirt accumulation and operations/maintenance efforts on PV system performance over time are used to compute the monthly power capability fraction. The resulting monthly power capability fractions are applied to the hourly power matrix from the first step, giving the anticipated hourly energy output over the lifetime of the system. PV system energy output is compared with the PV system owner's electricity demand for each hour. The amount of energy to be purchased from or sold to the utility grid is then determined. Monthly expenditures on the PV system and the purchase of electricity from the utility grid are also calculated. LCP generates output reports pertaining to the performance of the PV system, and system costs and revenues. The LCP model, written in SIMSCRIPT 2.5 for batch execution on an IBM 370 series computer, was developed in 1981.
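A toy numeric sketch of LCP's three-step structure: an hourly power profile for a reference year is computed once, then scaled by month-by-month capability fractions that stand in for degradation and failures, and the products are accumulated over the lifetime. All numbers and the degradation rate are invented:

```python
import numpy as np

hours_per_month = 730
months = 12 * 20                                   # 20-year lifetime (assumed)

# step 1: hourly power for a reference period, no failures or degradation
hourly_power = np.clip(np.sin(np.linspace(0, 2 * np.pi, hours_per_month)), 0, None)

# step 2: monthly power capability fractions (~0.5% loss per month, assumed)
degradation = 0.995 ** np.arange(months)

# step 3: apply the monthly fractions to the hourly matrix and accumulate
lifetime_energy = sum(frac * hourly_power.sum() for frac in degradation)
print(f"lifetime energy: {lifetime_energy:,.0f} (arbitrary units)")
```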
Cardiology office computer use: primer, pointers, pitfalls.
Shepard, R B; Blum, R I
1986-10-01
An office computer is a utility, like an automobile, with direct and hidden benefits and costs, and with potential for disaster. For the cardiologist or cardiovascular surgeon, the increasing power and decreasing costs of computer hardware and the availability of software make use of an office computer system an increasingly attractive possibility. Management of office business functions is common; handling and scientific analysis of practice medical information are less common. The cardiologist can also access national medical information systems for literature searches and for interactive further education. Selection and testing of programs and the entire computer system before purchase of computer hardware will reduce the chances of disappointment or serious problems. Personnel pretraining and planning for office information flow and medical information security are necessary. Some cardiologists design their own office systems, buy hardware and software as needed, write programs for themselves, and carry out the implementation themselves. For most cardiologists, the better course will be to take advantage of the professional experience of expert advisors. This article provides a starting point from which the practicing cardiologist can approach considering, specifying, or implementing an office computer system for business functions and for scientific analysis of practice results.
Internal fluid mechanics research on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.
1988-01-01
The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.
Combating adverse selection in secondary PC markets.
Hickey, Stewart W; Fitzpatrick, Colin
2008-04-15
Adverse selection is a significant contributor to market failure in secondary personal computer (PC) markets. Signaling can act as a potential solution to adverse selection and facilitate superior remarketing of second-hand PCs. Signaling is a means whereby usage information can be utilized to enhance consumer perception of both the value and utility of used PCs and, therefore, promote lifetime extension for these systems. This can help mitigate a large portion of the environmental impact associated with PC system manufacture. In this paper, the computer buying and selling behavior of consumers is characterized via a survey of 270 Irish residential users. Results confirm the existence of adverse selection in the Irish market, with 76% of potential buyers being unwilling to purchase and 45% of potential vendors being unwilling to sell a used PC. The so-called "closet effect" is also apparent, with 78% of users storing their PC after use has ceased. Results also indicate that consumers place a higher emphasis on specifications when considering a second-hand purchase. This contradicts their application needs, which are predominantly Internet and word-processing/spreadsheet/presentation applications (88% and 60%, respectively). Finally, a market solution utilizing self monitoring and reporting technology (SMART) sensors for the purpose of real-time usage monitoring is proposed, which can change consumer attitudes with regard to second-hand computer equipment.
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
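A minimal Mamdani sketch (not the paper's controller; the input/output variables, triangular sets, and two-rule base below are invented for illustration): max-min inference with centroid defuzzification on a discretized output universe.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_speed(distance_m):
    # rule firing strengths: IF distance is near THEN speed is slow; IF far THEN fast
    near = tri(distance_m, -0.5, 0.0, 1.0)
    far = tri(distance_m, 0.5, 2.0, 3.5)
    speeds = [i * 0.01 for i in range(101)]                       # 0..1 m/s universe
    slow = [min(near, tri(v, -0.2, 0.0, 0.4)) for v in speeds]    # clipped consequents
    fast = [min(far, tri(v, 0.4, 1.0, 1.2)) for v in speeds]
    agg = [max(s, f) for s, f in zip(slow, fast)]                 # Mamdani aggregation
    num = sum(v * m for v, m in zip(speeds, agg))                 # centroid defuzzification
    den = sum(agg)
    return num / den if den else 0.0

for d in (0.2, 1.0, 2.5):
    print(f"distance {d} m -> speed {infer_speed(d):.2f} m/s")
```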
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
Computer Aided Grid Interface: An Interactive CFD Pre-Processor
NASA Technical Reports Server (NTRS)
Soni, Bharat K.
1997-01-01
NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC, where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists, and CFD practitioners analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing CAGI, the Computer Aided Grid Interface system. The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.
Computer Aided Grid Interface: An Interactive CFD Pre-Processor
NASA Technical Reports Server (NTRS)
Soni, Bharat K.
1996-01-01
NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC, where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists, and CFD practitioners analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing the Computer Aided Grid Interface system (CAGI). The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.
Atomic switch networks—nanoarchitectonic design of a complex system for natural computing
NASA Astrophysics Data System (ADS)
Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.
2015-05-01
Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
NASA Astrophysics Data System (ADS)
Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.
2017-04-01
Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to be imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms will depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to different tissue structures, both optical and acoustic breast phantoms will be established for use in PACT and USCT studies.
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS library must be established as a cataloged mass storage file as the first step in MIRADS implementation, and the procedure for establishing it is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
Knowledge-based environment for optical system design
NASA Astrophysics Data System (ADS)
Johnson, R. Barry
1991-01-01
Optical systems are extensively utilized by industry, government, and military organizations. The conceptual design, engineering design, fabrication, and testing of these systems presently requires significant time, typically on the order of 3-5 years. The Knowledge-Based Environment for Optical System Design (KB-OSD) Program has as its principal objectives the development of a methodology and tool(s) that will make a notable reduction in the development time of optical system projects and reduce technical risk and overall cost. KB-OSD can be considered a computer-based optical design associate for system engineers and design engineers. By utilizing artificial intelligence technology coupled with extensive design/evaluation computer application programs and knowledge bases, the KB-OSD will provide the user with assistance and guidance to accomplish such activities as (i) developing system-level and hardware-level requirements from mission requirements, (ii) formulating conceptual designs, (iii) constructing a statement of work for an RFP, (iv) developing engineering-level designs, (v) evaluating an existing design, and (vi) exploring the sensitivity of a system to changing scenarios. The KB-OSD comprises a variety of computer platforms, including a Stardent Titan supercomputer, numerous design programs (lens design, coating design, thermal, materials, structural, atmospherics, etc.), data bases, and heuristic knowledge bases. An important element of the KB-OSD Program is the inclusion of the knowledge of individual experts in various areas of optics and optical system engineering. This knowledge is obtained by KB-OSD knowledge engineers performing…
A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System
NASA Astrophysics Data System (ADS)
Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.
2017-10-01
A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering, as well as in the social sciences like economics, sociology, and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high-performance processors and advanced mathematical computation, it is possible to develop high-performing simulators for complicated Multi-Input Multi-Output (MIMO) systems like quadruple-tank systems, aircraft, and boilers. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification, and lower-order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
Microcomputer-Based Intelligent Tutoring Systems: An Assessment.
ERIC Educational Resources Information Center
Schaffer, John William
Computer-assisted instruction, while familiar to most teachers, has failed to become an effective self-motivating instructional tool. Developments in artificial intelligence, however, have provided new and better tools for exploring human knowledge acquisition and utilization. Expert system technology represents one of the most promising of these…
From Workstation to Teacher Support System: A Tool to Increase Productivity.
ERIC Educational Resources Information Center
Chen, J. Wey
1989-01-01
Describes a teacher support system which is a computer-based workstation that provides support for teachers and administrators by integrating teacher utility programs, instructional management software, administrative packages, and office automation tools. Hardware is described and software components are explained, including database managers,…
NASA Technical Reports Server (NTRS)
Warren, A. W.; Esinger, A. W.
1979-01-01
Procedures are given for using the SIMWEST program on CDC 6000 series computers. This expanded software package includes wind and/or photovoltaic systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic).
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
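One standard way to write this averaging (a generic form consistent with the abstract, not necessarily the report's exact expression) weights the conditional bit error rate by the carrier phase-error density, often modeled as Tikhonov for a phase-locked loop:

$$\bar{P}_b = \int_{-\pi}^{\pi} P_b\!\left(\frac{E_b}{N_0}\cos^2\phi\right) p(\phi)\,d\phi, \qquad p(\phi) = \frac{e^{\rho\cos\phi}}{2\pi I_0(\rho)},$$

where $\phi$ is the carrier phase error, $\rho$ the loop SNR, and $I_0$ the zeroth-order modified Bessel function; a high-rate interpolation scheme, as in the abstract, keeps the numerical integration cheap.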
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
Simulation studies of the application of SEASAT data in weather and state of sea forecasting models
NASA Technical Reports Server (NTRS)
Cardone, V. J.; Greenwood, J. A.
1979-01-01
The design and analysis of SEASAT simulation studies in which the error structure of conventional analyses and forecasts is modeled realistically are presented. The development and computer implementation of a global spectral ocean wave model is described. The design of algorithms for the assimilation of theoretical wind data into computers and for the utilization of real wind data and wave height data in a coupled computer system are presented.
NASA Technical Reports Server (NTRS)
Thomas, V. C.
1986-01-01
A Vibroacoustic Data Base Management Center has been established at the Jet Propulsion Laboratory (JPL). The center utilizes the Vibroacoustic Payload Environment Prediction System (VAPEPS) software package to manage a data base of shuttle and expendable launch vehicle flight and ground test data. Remote terminal access over telephone lines to a dedicated VAPEPS computer system has been established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the JPL Data Base Management Center and contains instructions for utilizing the resources of the center.
Enterprise Cloud Architecture for Chinese Ministry of Railway
NASA Astrophysics Data System (ADS)
Shan, Xumei; Liu, Hefeng
Enterprises like the PRC Ministry of Railways (MOR) are facing various challenges, ranging from a highly distributed computing environment to low legacy system utilization; Cloud Computing is increasingly regarded as one workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. As a result, on-demand computing task application and deployment are addressed for MOR's real working scenarios at the end of the article.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables. The SIMWEST program, as described, runs on the UNIVAC 1100 series computers.
Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho
2015-07-01
The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes. Copyright © 2015 Elsevier Inc. All rights reserved.
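A sketch of the word-frequency step behind the genome signature: count all overlapping 5-nucleotide words and keep the top 50 by frequency. The CMYK image encoding and GSI distance metric are not reproduced here, and the toy sequence stands in for a real genome:

```python
from collections import Counter
from itertools import islice

def top_words(genome, k=5, top=50):
    """Return the 'top' most frequent k-nucleotide words in a genome string."""
    genome = genome.upper()
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    # drop windows containing ambiguity codes such as N
    counts = Counter({w: c for w, c in counts.items() if set(w) <= set("ACGT")})
    return counts.most_common(top)

toy = "ACGTACGTACGTTTGCA" * 100     # stand-in for a real genome sequence
for word, freq in islice(top_words(toy), 5):
    print(word, freq)
```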
John G. Michopoulos; John Hermanson; Athanasios Iliopoulos
2014-01-01
The research areas of multiaxial robotic testing and design optimization have recently been utilized for the purpose of data-driven constitutive characterization of anisotropic material systems. This effort has been enabled by both the progress in the areas of computers and information in engineering as well as the progress in computational automation. Although our...
Computer-Assisted Instruction: Decision Handbook.
1985-04-01
to feelings of " depersonalization " or "dehumanization." The approach is to document investigations of attitudes toward CBI held by students and...utilized within a computer-based training system that includes management of student progress, training resources, testing, and instructional materials...training time. As compared to programmed texts and workbookl, students were more attentive and stayed on task. The attentiveness to PLATO materials
ERIC Educational Resources Information Center
Li, Yi
2012-01-01
This study focuses on the issue of learning equity in colleges and universities where teaching and learning have come to depend heavily on computer technologies. The study uses the Multiple Indicators Multiple Causes (MIMIC) latent variable model to quantitatively investigate whether there is a gender /ethnicity difference in using computer based…
Operating a Geiger-Muller Tube Using a PC Sound Card
ERIC Educational Resources Information Center
Azooz, A. A.
2009-01-01
In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Muller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the…
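The paper's implementation is MATLAB; a rough Python equivalent of the idea, counting GM pulses as threshold crossings in sound-card samples, might look like the following. The `sounddevice` package, sample rate, and threshold value are assumptions to be tuned to the actual tube and cabling.

```python
import numpy as np
import sounddevice as sd  # third-party; pip install sounddevice

FS = 44100          # sound-card sample rate (Hz)
DURATION = 10.0     # counting interval (s)
THRESHOLD = 0.5     # pulse detection level, tune to the actual signal

recording = sd.rec(int(DURATION * FS), samplerate=FS, channels=1)
sd.wait()                       # block until the recording finishes
x = np.abs(recording[:, 0])

# Count rising edges: samples above threshold whose predecessor was below.
edges = (x[1:] >= THRESHOLD) & (x[:-1] < THRESHOLD)
counts = int(edges.sum())
print(f"{counts} pulses in {DURATION:.0f} s -> {counts / DURATION:.1f} counts/s")
```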
ERIC Educational Resources Information Center
Kunzler, Jayson S.
2012-01-01
This dissertation describes a research study designed to explore whether customization of online instruction results in improved learning in a college business statistics course. The study involved utilizing computer spreadsheet technology to develop an intelligent tutoring system (ITS) designed to: a) collect and monitor individual real-time…
O'Reilly, Robert; Fedorko, Steve; Nicholson, Nigel
1983-01-01
This paper describes a structured interview process for medical school admissions supported by an Apple II computer system which provides feedback to interviewers and the College admissions committee. Presented are the rationale for the system, the preliminary results of analysis of some of the interview data, and a brief description of the computer program and output. The present data show that the structured interview yields very high interrater reliability coefficients, is acceptable to the medical school faculty, and results in quantitative data useful in the admission process. The system continues in development at this time, a second year of data will be shortly available, and further refinements are being made to the computer program to enhance its utilization and exportability.
A Boundary Delineation System for the Bureau of Ocean Energy Management
NASA Astrophysics Data System (ADS)
Vandegraft, Douglas L.
2018-05-01
Federal government mapping of the offshore areas of the United States in support of the development of oil and gas resources began in 1954. The first mapping system utilized a network of rectangular blocks defined by State Plane coordinates; it was later revised to utilize the Universal Transverse Mercator grid. Offshore boundaries directed by the Submerged Lands Act and the Outer Continental Shelf Lands Act were mathematically determined using early computer programs that performed the required computations but required many steps. The Bureau of Ocean Energy Management has revised these antiquated methods using GIS technology, which provides the required accuracy and produces the mapping products needed for leasing of energy resources, including renewable energy projects, on the outer continental shelf. (Note: this is an updated version of a paper of the same title written and published in 2015).
Method of mobile robot indoor navigation by artificial landmarks with use of computer vision
NASA Astrophysics Data System (ADS)
Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.
2018-05-01
The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of an experiment on mobile robot navigation using this control system are presented.
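A minimal sketch of the landmark-correction step described above: when the vision system sights an artificial landmark with known world coordinates, the drifting odometry estimate is replaced by a pose computed from the sighting. The landmark IDs, coordinates, and the assumption of a known heading are illustrative, not from the article.

```python
import numpy as np

# Known world coordinates of artificial landmarks (IDs are illustrative).
LANDMARKS = {7: np.array([2.0, 5.0]), 12: np.array([8.0, 1.5])}

def correct_pose(odom_xy, landmark_id, offset_robot, heading):
    """Replace drifting odometry with a pose derived from a sighted landmark.

    offset_robot: landmark position in the robot frame (from the vision system).
    heading: robot yaw in radians (assumed known, e.g. from an IMU/compass).
    """
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])            # robot frame -> world frame
    corrected = LANDMARKS[landmark_id] - rot @ offset_robot
    drift = np.linalg.norm(corrected - odom_xy)  # how far odometry had wandered
    return corrected, drift

pose, drift = correct_pose(np.array([1.7, 4.6]), 7, np.array([0.5, 0.0]), 0.0)
print(pose, f"drift was {drift:.2f} m")
```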
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Harvey, P.; Swift, R.
1975-01-01
An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 colinear tunable acousto-optic filter, a 61 inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data, and final equipment performance limits are given.
On-line Management System for the Periodicals in JAERI
NASA Astrophysics Data System (ADS)
Itabashi, Keizo; Mineo, Yukinobu
The article describes the outline of an on-line serials control system utilizing a minicomputer. The system deals with subscription, check-in, claiming, inquiry of serials information, and binding of journals. In this system, journal acquisition with serial arrival prediction in an on-line mode is carried out on a priority principle to record the actual receipt of incoming issues.
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, like the flexible utilization of opportunistic Cloud and HPC computing resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified storage protocols declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
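As a hedged illustration of the kind of parameter estimation described (not the authors' algorithm), the sketch below fits a secondary-circuit resistance and reactance to synthetic AMI voltage and power time series using a standard linearized voltage-drop model and least squares. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AMI time series (per-unit): source voltage, load P and Q.
n = 500
v_src = 1.0 + 0.01 * rng.standard_normal(n)
p = 0.5 + 0.2 * rng.random(n)
q = 0.1 + 0.05 * rng.random(n)

R_TRUE, X_TRUE = 0.04, 0.02
drop = (R_TRUE * p + X_TRUE * q) / v_src
v_load = v_src - drop + 0.0005 * rng.standard_normal(n)   # metering noise

# Linearized drop model: v_src - v_load ≈ (R*p + X*q)/v_src, linear in (R, X).
A = np.column_stack([p / v_src, q / v_src])
(R_est, X_est), *_ = np.linalg.lstsq(A, v_src - v_load, rcond=None)
print(f"R ≈ {R_est:.4f} pu, X ≈ {X_est:.4f} pu")
```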
Real-Time Multiprocessor Programming Language (RTMPL) user's manual
NASA Technical Reports Server (NTRS)
Arpasi, D. J.
1985-01-01
A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.
An exploration of neuromorphic systems and related design issues/challenges in dark silicon era
NASA Astrophysics Data System (ADS)
Chandaliya, Mudit; Chaturvedi, Nitin; Gurunarayanan, S.
2018-03-01
Current microprocessors have shown remarkable performance and memory capacity improvement since their inception. However, due to power and thermal limitations, only a fraction of cores can operate at full frequency at any instant of time, irrespective of the advantages of each new technology generation. This phenomenon of under-utilization of the microprocessor is called dark silicon, and it hinders innovative computing. To overcome the limitation of the utilization wall, IBM explored and invented neurosynaptic system chips. This has opened a wide scope of research in the fields of innovative computing, technology, material sciences, machine learning, etc. In this paper, we first review the diverse stages of research that have been influential in the innovation of neurosynaptic architectures. These architectures focus on the development of a brain-like framework efficient enough to execute a broad set of computations in real time while keeping ultra-low power consumption as well as area considerations in mind. We also reveal the inadvertent challenges and the opportunities of designing neuromorphic systems as presented by the existing technologies in the dark silicon era, which constitute a major area of future research.
Determining noise temperatures in beam waveguide systems
NASA Technical Reports Server (NTRS)
Imbriale, W.; Veruttipong, W.; Otoshi, T.; Franco, M.
1994-01-01
A new 34-m research and development antenna was fabricated and tested as a precursor to introducing beam waveguide (BWG) antennas and Ka-band (32 GHz) frequencies into the NASA/JPL Deep Space Network. For deep space use, system noise temperature is a critical parameter. There are thought to be two major contributors to noise temperature in a BWG system: the spillover past the mirrors and the conductivity loss in the walls. However, to date, there are no generally accepted methods for computing noise temperatures in a beam waveguide system. An extensive measurement program was undertaken to determine noise temperatures in such a system, along with a corresponding effort in analytic prediction. Utilizing a very sensitive radiometer, noise temperature measurements were made at the Cassegrain focus, an intermediate focal point, and the focal point in the basement pedestal room. Several different horn diameters were used to simulate different amounts of spillover past the mirrors. Two analytic procedures were developed for computing noise temperature, one utilizing circular waveguide modes and the other a semiempirical approach. The results of both prediction methods are compared to the experimental data.
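A first-order sanity check of the two noise contributors named above fits in a few lines: each small spillover or ohmic loss fraction couples that fraction of an ambient blackbody temperature into the beam, and small contributions simply add. The loss fractions and temperatures below are invented for illustration; the paper's actual methods (waveguide modes, semiempirical model) are far more detailed.

```python
def bwg_noise_contrib(stages):
    """Sum small noise contributions along a beam waveguide.

    stages: list of (loss_fraction, ambient_temp_K); each stage couples that
    fraction of an ambient blackbody into the beam. First-order model only:
    contributions add when losses are small.
    """
    return sum(f * t for f, t in stages)

# Illustrative numbers only: four mirrors with 0.5% spillover onto 240 K
# surroundings plus 0.2% ohmic wall loss at 290 K per reflection.
stages = [(0.005, 240.0)] * 4 + [(0.002, 290.0)] * 4
print(f"added noise ≈ {bwg_noise_contrib(stages):.1f} K")
```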
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.
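For readers unfamiliar with the time-marching scheme mentioned, a minimal sketch of a four-stage Runge-Kutta march of du/dt = R(u) follows. The stage coefficients are the textbook low-storage choice, not necessarily ADPAC's exact set, and numerical dissipation and residual smoothing are omitted.

```python
import numpy as np

ALPHAS = (0.25, 1.0 / 3.0, 0.5, 1.0)  # classic Jameson-style stage coefficients

def rk4_stage_march(u, residual, dt, steps):
    """Four-stage Runge-Kutta time marching of du/dt = residual(u).

    Low-storage form commonly used in finite-volume flow solvers: each stage
    restarts from the solution at the beginning of the step.
    """
    for _ in range(steps):
        u0 = u.copy()
        for a in ALPHAS:
            u = u0 + a * dt * residual(u)
    return u

# Toy demo: linear decay du/dt = -u; exact solution at t=1 is exp(-1).
u = np.array([1.0])
u = rk4_stage_march(u, lambda v: -v, dt=0.01, steps=100)
print(u, np.exp(-1.0))
```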
System Analysis for the Huntsville Operation Support Center, Distributed Computer System
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Massey, D.
1985-01-01
HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, and possible overload conditions are predicted.
System support software for the Space Ultrareliable Modular Computer (SUMC)
NASA Technical Reports Server (NTRS)
Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.
1974-01-01
The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with a minimum effort and provides the user quasi host independence on the ground based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications in the initial processors.
A multitasking finite state architecture for computer control of an electric powertrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burba, J.C.
1984-01-01
Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
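The core idea, a finite state machine selecting which tasks the time-sliced scheduler runs, can be sketched in a few lines. The states, events, and task names below are invented for illustration and are not taken from the paper.

```python
# Transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    ("IDLE", "key_on"): "READY",
    ("READY", "accel_pedal"): "DRIVE",
    ("DRIVE", "brake_pedal"): "REGEN_BRAKE",
    ("REGEN_BRAKE", "stopped"): "READY",
    ("READY", "key_off"): "IDLE",
}

# Tasks enabled in each state, to be run by a time-sliced scheduler.
TASKS = {
    "IDLE": [],
    "READY": ["monitor_battery"],
    "DRIVE": ["monitor_battery", "torque_control"],
    "REGEN_BRAKE": ["monitor_battery", "regen_control"],
}

state = "IDLE"
for event in ["key_on", "accel_pedal", "brake_pedal", "stopped", "key_off"]:
    state = TRANSITIONS.get((state, event), state)  # ignore undefined events
    print(f"{event:12s} -> {state:12s} tasks: {TASKS[state]}")
```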
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or different times from the same scene. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture for the SIFT feature detection to reduce FPGA resource utilization. Moreover, BRIEF description and matching are also implemented on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demand of most real-life computer vision applications.
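To make the BRIEF stage concrete, here is a hedged software sketch of descriptor extraction and Hamming matching (fixed random test pairs, pure NumPy). The paper's contribution is the FPGA architecture, which this deliberately does not model.

```python
import numpy as np

rng = np.random.default_rng(1)
PAIRS = rng.integers(-8, 9, size=(256, 2, 2))  # fixed random test pairs (dy, dx)

def brief(image, keypoint):
    """256-bit BRIEF descriptor: compare intensities at fixed offset pairs."""
    y, x = keypoint
    a = image[y + PAIRS[:, 0, 0], x + PAIRS[:, 0, 1]]
    b = image[y + PAIRS[:, 1, 0], x + PAIRS[:, 1, 1]]
    return a < b  # boolean vector, one bit per test

def hamming(d1, d2):
    """Number of differing bits between two descriptors."""
    return int(np.count_nonzero(d1 != d2))

img = rng.integers(0, 256, size=(64, 64))
d1 = brief(img, (32, 32))
d2 = brief(img, (32, 33))   # neighboring keypoint: small distance expected
print(hamming(d1, d2), "of", len(d1), "bits differ")
```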
MOICC and GIS: An Impact Study. Final Evaluation Report.
ERIC Educational Resources Information Center
Ryan, Charles W.; Drummond, Robert J.
The Guidance Information System (GIS) is a statewide computer-based career information system developed by the Maine Occupational Information Coordinating Committee (MOICC). A time-series design was utilized to investigate the impact of GIS on selected users in public schools and agencies. Participants completed questionnaires immediately after…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Wie, N.H.
An overview of the UCC-ND system for computer-aided cost estimating is provided. The program is generally utilized in the preparation of construction cost estimates for projects costing $25,000,000 or more. The advantages of the system to the manager and the estimator are discussed, and examples of the product are provided. 19 figures, 1 table.
ERIC Educational Resources Information Center
Pan, Wen Fu
2017-01-01
The objective of this study was to test whether the Kinect motion-sensing interactive system (KMIS) enhanced students' English vocabulary learning, while also comparing the system's effectiveness against a traditional computer-mouse interface. Both interfaces utilized an interactive game with a questioning strategy. One-hundred and twenty…
Heterodyne laser instantaneous frequency measurement system
Wyeth, Richard W.; Johnson, Michael A.; Globig, Michael A.
1989-01-01
A heterodyne laser instantaneous frequency measurement system is disclosed. The system utilizes heterodyning of a pulsed laser beam with a continuous wave laser beam to form a beat signal. The beat signal is processed by a controller or computer which determines both the average frequency of the laser pulse and any changes or chirp of the frequency during the pulse.
The LiveWire Project final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C.D.; Nelson, T.T.; Kelly, J.C.
Utilities across the US have begun pilot testing a variety of hardware and software products to develop a two-way communications system between themselves and their customers. Their purpose is to reduce utility operating costs and to provide new and improved services for customers in light of pending changes in the electric industry being brought about by deregulation. A consortium including utilities, national labs, consultants, and contractors, with the support of the Department of Energy (DOE) and the Electric Power Research Institute (EPRI), initiated a project that utilized a hybrid fiber-coax (HFC) wide-area network integrated with a CEBus based local area network within the customer's home. The system combined energy consumption data taken within the home and home automation features to provide a suite of energy management services for residential customers. The information was transferred via the Internet through the HFC network and presented to the customer on their personal computer. This final project report discusses the design, prototype testing, and system deployment planning of the energy management system.
Monitoring system including an electronic sensor platform and an interrogation transceiver
Kinzel, Robert L.; Sheets, Larry R.
2003-09-23
A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT), and a general purpose host computer. The ESP functions as a remote data collector from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT transmits signals from one or more ESPs to the host computer and from the host computer to the ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.
NASA Astrophysics Data System (ADS)
Kun, Luis G.
1995-10-01
During the first Health Care Technology Policy conference last year, during health care reform, four major issues were brought up regarding the efforts underway to develop a computer based patient record (CBPR), the National Information Infrastructure (NII) as part of the High Performance Computing and Communications (HPCC) program, and the so-called 'patient card.' More specifically, it was explained how a national information system will greatly affect the way health care delivery is provided to the United States public and reduce its costs. These four issues were: (1) constructing a national information infrastructure (NII); (2) building a computer based patient record system; (3) bringing the collective resources of our national laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; (4) utilizing government (e.g., DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs, and accelerate technology transfer to address health care issues. This year a section of this conference entitled 'Health Care Technology Assets of the Federal Government' addresses the benefits of the technology transfer which should occur to maximize already developed resources. This section, entitled 'Transfer and Utilization of Government Technology Assets to the Private Sector,' will look at both health care and non-health care related technologies, since many areas such as information technologies (i.e., imaging, communications, archival/retrieval, systems integration, information display, multimedia, heterogeneous data bases, etc.) already exist and are part of our national labs and/or other federal agencies, e.g., ARPA. Although these technologies are not labeled under health care programs, they could provide enormous value to address technical needs. An additional issue deals with both the technical (hardware, software) and human expertise that resides within these labs and their possible role in creating cost effective solutions.
Tools for Embedded Computing Systems Software
NASA Technical Reports Server (NTRS)
1978-01-01
A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talks and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.
Publication search and retrieval system
Winget, Elizabeth A.
1981-01-01
The publication search and retrieval system of the Branch of Atlantic-Gulf of Mexico Geology, U.S. Geological Survey, Woods Hole, Mass., is a procedure for listing and describing branch-sponsored publications. It is designed for maintenance and retrieval by those having limited knowledge of computer languages and programs. Because this branch currently utilizes the Hewlett-Packard HP-1000 computer with the RTE-IVB operating system, database entry and maintenance are performed in accordance with the RTE-IVB Terminal User's Reference Manual (Hewlett-Packard Company, 1980) and within the constraints of GRASP (Bowen and Botbol, 1975) and WOLF (Evenden, 1978).
Autonomous control systems: applications to remote sensing and image processing
NASA Astrophysics Data System (ADS)
Jamshidi, Mohammad
2001-11-01
One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to provide powerful tools for adding autonomy and semi-autonomy to many complex systems. For such systems the size of a soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Computational singular perturbation analysis of stochastic chemical systems with stiffness
NASA Astrophysics Data System (ADS)
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.
2017-04-01
Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used to not only construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
Development of a thermal storage module using modified anhydrous sodium hydroxide
NASA Technical Reports Server (NTRS)
Rice, R. E.; Rowny, P. E.
1980-01-01
The laboratory scale testing of a modified anhydrous NaOH latent heat storage concept for small solar thermal power systems such as total energy systems utilizing organic Rankine systems is discussed. A diagnostic test on the thermal energy storage module and an investigation of alternative heat transfer fluids and heat exchange concepts are specifically addressed. A previously developed computer simulation model is modified to predict the performance of the module in a solar total energy system environment. In addition, the computer model is expanded to investigate parametrically the incorporation of a second heat exchange inside the module which will vaporize and superheat the Rankine cycle power fluid.
NASA Technical Reports Server (NTRS)
Paul, C. K.; Landini, A. J.; Diegert, C.
1975-01-01
The Santa Monica Mountains of Los Angeles consist primarily of complexly folded sedimentary marine strata with igneous and metamorphic rocks at the eastern end of the mountains. With the increased development of the Santa Monicas, a study was conducted to determine the critical land use data items in the mountains. Two information systems, developed in parallel, are described. One capitalizes on the City's present computer line printer system, and the second utilizes map overlay techniques on an interactive computer terminal. Results concerning population, housing, and land improvement illustrate the successful linking of ordinal and nominal data files in the interactive system.
VAXCMS - VAX CONTINUOUS MONITORING SYSTEM, VERSION 2.2
NASA Technical Reports Server (NTRS)
Farkas, L.
1994-01-01
The VAX Continuous Monitoring System (VAXCMS) was developed at NASA Headquarters to aid system managers in monitoring the performance of VAX systems through the generation of graphic images which summarize trends in performance metrics over time. Since its initial development, VAXCMS has been extensively modified at the NASA Lewis Research Center. Data is produced by utilizing the VMS MONITOR utility to collect the performance data, and then feeding the data through custom-developed linkages to the Computer Associates' TELL-A-GRAF computer graphics software to generate the chart images for analysis by the system manager. The VMS ACCOUNTING utility is also utilized to gather interactive process information. The charts generated by VAXCMS are: 1) CPU modes for each node over the most recent four month period; 2) CPU modes for the cluster as a whole, using a weighted average of all the nodes in the cluster based on processing power; 3) percent of primary memory in use for each node over the most recent four month period; 4) interactive processes for all nodes over the most recent four month period; 5) daily, weekly, and monthly performance summaries for CPU modes, percent of primary memory in use, and page fault rates for each node; 6) daily disk I/O performance data plotting average disk I/O response time based on I/O operation rate and queue length. VAXCMS is written in DCL and VAX FORTRAN for use with DEC VAX series computers running VMS 5.1 or later. This program requires the TELL-A-GRAF graphics package in order to generate plots of system data. A FORTRAN compiler is required. The standard distribution medium for VAXCMS is a 9-track 1600 BPI magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. An electronic copy of the documentation in ASCII format is included on the distribution medium. Portions of this code are copyrighted by Mr. David Lavery and are distributed with his permission. These portions of the code may not be redistributed commercially.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. The DET spacecraft power system feeds the output of the solar photovoltaic array and nickel cadmium batteries directly to the spacecraft bus. In the MPS system, the Standard Power Regulator Unit (SPRU) is utilized to operate the array at the array's peak power point. DET and MPS perform a minute-by-minute simulation of the performance of the power system. Results of the simulation focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and performance of arrays for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
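The minute-by-minute energy-balance loop at the heart of such programs reduces to a simple accumulation, sketched below with invented array, load, battery, and orbit numbers; real DET/MPS runs model the array and battery characteristics in far more detail.

```python
# Minute-by-minute energy balance in the spirit of DET/MPS; all numbers and
# the eclipse pattern are invented for illustration.
ARRAY_W = 400.0        # solar array output in sunlight (W)
LOAD_W = 250.0         # constant spacecraft bus load (W)
BATT_WH = 500.0        # battery capacity (Wh)
ORBIT_MIN, ECLIPSE_MIN = 95, 35

soc = BATT_WH          # state of charge, start full
for minute in range(ORBIT_MIN * 3):                  # simulate three orbits
    in_eclipse = (minute % ORBIT_MIN) < ECLIPSE_MIN
    net_w = (0.0 if in_eclipse else ARRAY_W) - LOAD_W
    soc = min(BATT_WH, max(0.0, soc + net_w / 60.0))  # Wh gained per minute
print(f"state of charge after 3 orbits: {soc:.0f} Wh of {BATT_WH:.0f} Wh")
```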
RighTime: A real time clock correcting program for MS-DOS-based computer systems
NASA Technical Reports Server (NTRS)
Becker, G. Thomas
1993-01-01
A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, and display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
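The learning idea, inferring a drift rate from two accurate time sets and scaling later readings, can be illustrated as follows. The numbers and function names are hypothetical, not RighTime's internals.

```python
# Sketch of drift learning: measure how fast the system clock gains or loses
# time between two accurate time sets, then rescale subsequent readings.
def drift_rate(sys_t0, true_t0, sys_t1, true_t1):
    """Seconds of clock error accumulated per second of true time."""
    return ((sys_t1 - sys_t0) - (true_t1 - true_t0)) / (true_t1 - true_t0)

def corrected(sys_now, sys_ref, true_ref, rate):
    """Map a raw system-clock reading onto corrected time."""
    return true_ref + (sys_now - sys_ref) / (1.0 + rate)

rate = drift_rate(0.0, 0.0, 86402.3, 86400.0)   # clock gained 2.3 s in a day
print(f"drift: {rate * 1e6:.1f} ppm")
print(f"corrected: {corrected(90002.0, 86402.3, 86400.0, rate):.2f}")
```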
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
1982-06-01
Distribution of this document is unlimited. TABLE OF CONTENTS -- Appendix A: Scope of Work; B: Merge and Cost Program Documentation; C: FATSCO... Program to Compute Time Series Frequency Relationships; D: HEC-DSS, Time Series Data File Management System; E: Plan 1, Time Series Data Plots and Annual... University of Minnesota, utilized an early version of the Hydrologic Engineering Center's (HEC) HEC-5c Computer Program. HEC is a Corps of Engineers
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Zeise, Frederick F.
1993-07-01
A second generation digital optical computer (DOC II) has been developed which utilizes a RISC based operating system as its host. This 32 bit, high performance (12.8 GByte/sec) computing platform demonstrates a number of basic principles that are inherent to parallel free space optical interconnects, such as speed (up to 10^12 bit operations per second) and low power (1.2 fJ per bit). Although DOC II is a general purpose machine, special purpose applications have been developed and are currently being evaluated on the optical platform.
Integrating an Intelligent Tutoring System for TAOs with Second Life
2010-12-01
SL) and interacts with a number of computer-controlled objects that take on the roles of the TAO's teammates. TAOs rely on the same mechanism to...projects that utilize both game and simulation technology for training. He joined Stottler Henke in the fall of 2000 and holds a Ph.D. in computer science...including implementing tutors in multiuser worlds. He has been at Stottler Henke since 2005 and has an MS in computer science from Stanford University
NASA Technical Reports Server (NTRS)
Kalagher, R. J.
1973-01-01
Ten tipping bucket rain gauges have been installed at the NASA WSTF for the purpose of determining rainfall characteristics in this area which may affect the performance of the NASA Tracking and Data Relay Satellite System. A plan is presented for analyzing and utilizing the data which will be obtained during the course of this experiment. Also included is a description of a computer program which has been written to aid in the analysis.
An electron beam linear scanning mode for industrial limited-angle nano-computed tomography.
Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng
2018-01-01
Nano-computed tomography (nano-CT), which utilizes X-rays to examine the inner structure of small objects and has been widely utilized in biomedical research, electronic technology, geology, material sciences, etc., is a high spatial resolution, non-destructive research technique. A traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator for high resolution imaging, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode of the nano-CT system that can avoid the mechanical vibration and object movement caused by continuously rotating the object. Furthermore, to further save scanning time and study how small the scanning range can be while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.
Meet EPA Environmental Engineer Terra Haxton, Ph.D.
EPA Environmental Engineer Terra Haxton, Ph.D., uses computer simulation models to protect drinking water. She investigates approaches to help water utilities be better prepared to respond to contamination incidents in their distribution systems.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Baran, Michael; Lehrer, Nicole; Duff, Margaret; Venkataraman, Vinay; Turaga, Pavan; Ingalls, Todd; Rymer, W Zev; Wolf, Steven L; Rikakis, Thanassis
2015-03-01
Interactive neurorehabilitation (INR) systems provide therapy that can evaluate and deliver feedback on a patient's movement computationally. There are currently many approaches to INR design and implementation, without a clear indication of which methods to utilize best. This article presents key interactive computing, motor learning, and media arts concepts utilized by an interdisciplinary group to develop adaptive, mixed reality INR systems for upper extremity therapy of patients with stroke. Two INR systems are used as examples to show how the concepts can be applied within: (1) a small-scale INR clinical study that achieved integrated improvement of movement quality and functionality through continuously supervised therapy and (2) a pilot study that achieved improvement of clinical scores with minimal supervision. The notion is proposed that some of the successful approaches developed and tested within these systems can form the basis of a scalable design methodology for other INR systems. A coherent approach to INR design is needed to facilitate the use of the systems by physical therapists, increase the number of successful INR studies, and generate rich clinical data that can inform the development of best practices for use of INR in physical therapy. © 2015 American Physical Therapy Association.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing the Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams, each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node, all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals, providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware, which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
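A hedged sketch of the per-chunk post-processing steps named above (windowing, FFT, phase correction) is given below in NumPy; the chunk size, window choice, and phase model are illustrative and are not the CDP's actual pipeline code.

```python
import numpy as np

def process_chunk(lags, delay_phase):
    """Per-chunk correlator post-processing sketch: window the raw lags,
    FFT to the frequency domain, and apply a per-channel phase correction.

    'lags' is one chunk of raw cross-correlation lags; 'delay_phase' is a
    per-channel phase (radians). Names and steps are illustrative only.
    """
    windowed = lags * np.hanning(lags.size)        # data windowing
    spectrum = np.fft.rfft(windowed)               # lag -> frequency domain
    return spectrum * np.exp(1j * delay_phase)     # residual delay/phase fix

chunk = np.random.default_rng(2).standard_normal(1024)
phase = np.linspace(0.0, 0.3, 513)                 # rfft of 1024 -> 513 bins
print(process_chunk(chunk, phase).shape)
```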
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused (backfill) resources on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency.
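The "lightweight MPI wrapper" idea, one MPI rank per allocated slot, each launching an ordinary single-node payload, can be sketched with mpi4py as follows; the payload command is a placeholder, not the actual ATLAS job.

```python
# Hedged sketch of an MPI wrapper: every rank runs one single-node payload so
# that a single batch allocation executes many independent jobs in parallel.
import subprocess
from mpi4py import MPI  # third-party; commonly available on HPC systems

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Placeholder payload; a real wrapper would launch the actual production job.
cmd = ["echo", f"simulated payload for work unit {rank} of {size}"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(f"rank {rank}: {result.stdout.strip()}")

comm.Barrier()  # wait for all payloads before the batch job exits
```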
CANFAR + Skytree: Mining Massive Datasets as an Essential Part of the Future of Astronomy
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.
2013-01-01
The future study of large astronomical datasets, consisting of hundreds of millions to billions of objects, will be dominated by large computing resources, and by analysis tools of the necessary scalability and sophistication to extract useful information. Significant effort will be required to fulfil their potential as a provider of the next generation of science results. To date, computing systems have allowed either sophisticated analysis of small datasets, e.g., most astronomy software, or simple analysis of large datasets, e.g., database queries. At the Canadian Astronomy Data Centre, we have combined our cloud computing system, the Canadian Advanced Network for Astronomical Research (CANFAR), with the world's most advanced machine learning software, Skytree, to create the world's first cloud computing system for data mining in astronomy. This allows the full sophistication of the huge fields of data mining and machine learning to be applied to the hundreds of millions of objects that make up current large datasets. CANFAR works by utilizing virtual machines, which appear to the user as equivalent to a desktop. Each machine is replicated as desired to perform large-scale parallel processing. Such an arrangement carries far more flexibility than other cloud systems, because it enables the user to immediately install and run the same code that they already utilize for science on their desktop. We demonstrate the utility of the CANFAR + Skytree system by showing science results obtained, including assigning photometric redshifts with full probability density functions (PDFs) to a catalog of approximately 133 million galaxies from the MegaPipe reductions of the Canada-France-Hawaii Telescope Legacy Wide and Deep surveys. Each PDF is produced nonparametrically from 100 instances of the photometric parameters for each galaxy, generated by perturbing within the errors on the measurements. Hence, we produce, store, and assign redshifts to a catalog of over 13 billion object instances. This catalog is comparable in size to those expected from next-generation surveys, such as the Large Synoptic Survey Telescope. The CANFAR + Skytree system is open for use by any interested member of the astronomical community.
Use of the internet to study the utility values of the public.
Lenert, Leslie A.; Sturley, Ann E.
2002-01-01
One of the most difficult tasks in cost-effectiveness analysis is the measurement of quality weights (utilities) for health states. The task is difficult because subjects often lack familiarity with the health states they are asked to rate, and because utility measures, such as the standard gamble, ask subjects to perform tasks that are complex and far from everyday experience. A large body of research suggests that computer methods can play an important role in explaining health states and measuring utilities. However, administering computer surveys to a "general public" sample, the most relevant sample for cost-effectiveness analysis, is logistically difficult. In this paper, we describe a software system designed to allow the study of general population preferences in a volunteer Internet survey panel. The approach, which relied on oversampling of ethnic groups and older members of the panel, produced a data set with an ethnically, chronologically and geographically diverse group of respondents, but was not successful in replicating the joint distribution of demographic patterns in the population. PMID:12463862
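For readers unfamiliar with the standard gamble mentioned above, the arithmetic is simple: with death anchored at utility 0 and full health at 1, the utility of a health state equals the gamble probability at which the respondent is indifferent. A trivial sketch:

```python
def standard_gamble_utility(p_indifference: float) -> float:
    """In a standard gamble, the respondent is indifferent between living in
    the rated health state for certain and a gamble with probability p of full
    health (utility 1) and 1-p of death (utility 0); the state's utility is p."""
    if not 0.0 <= p_indifference <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return p_indifference

# A respondent indifferent at p = 0.85 assigns the state utility 0.85.
print(standard_gamble_utility(0.85))
```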
Ogawa, K
1992-01-01
This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined, the device independent information transmission rate (RDI = DI/T) and the computer independent information transmission rate (RCI = CI/T), where T is the task completion time. The method utilizes the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments using three different systems, in this case a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
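The two rates reduce to simple ratios; the sketch below uses invented bit counts purely to illustrate the arithmetic of RDI = DI/T and RCI = CI/T.

```python
def transmission_rates(di_bits: float, ci_bits: float, t_seconds: float):
    """RDI = DI/T and RCI = CI/T, per the paper's definitions; the bit values
    passed below are invented purely to show the arithmetic."""
    return di_bits / t_seconds, ci_bits / t_seconds

rdi, rci = transmission_rates(di_bits=120.0, ci_bits=80.0, t_seconds=40.0)
print(f"RDI = {rdi:.2f} bits/s, RCI = {rci:.2f} bits/s")
```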
Space station operating system study
NASA Technical Reports Server (NTRS)
Horn, Albert E.; Harwell, Morris C.
1988-01-01
The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.
Advanced information processing system for advanced launch system: Avionics architecture synthesis
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.
1991-01-01
The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS) being developed jointly by NASA and the Department of Defense to launch heavy payloads into low earth orbit at one tenth the cost (per pound of payload) of the current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS for ALS architecture synthesis process starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture is described.
Computer systems for automatic earthquake detection
Stewart, S.W.
1974-01-01
U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all recorded earthquakes had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are monitored continuously and efficiently.
ERIC Educational Resources Information Center
Library of Congress, Washington, DC. Congressional Research Service.
This summary of the combined Hearing and Workshop on Applications of Computer-Based Information Systems and Services in Agriculture (May 19-20, 1982) offers an overview of the ways in which information technology--computers, telecommunications, microforms, word processing, video and audio devices--may be utilized by American farmers and ranchers.…
The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook
NASA Technical Reports Server (NTRS)
Edge, T. M.
1978-01-01
Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system is described.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cluster node cores, there are allocation strategies which provide more efficient calculations.
Exploring Effective Decision Making through Human-Centered and Computational Intelligence Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Kyungsik; Cook, Kristin A.; Shih, Patrick C.
Decision-making has long been studied to understand the psychological, cognitive, and social process of selecting an effective choice from alternative options. Such studies have been extended from the personal level to the group and collaborative level, and many computer-aided decision-making systems have been developed to help people make sound decisions. There has been significant research growth in computational aspects of decision-making systems, yet comparatively little effort has gone into identifying and articulating user needs and requirements for assessing system outputs and the extent to which human judgments could be utilized for making accurate and reliable decisions. Our research focus is decision-making through human-centered and computational intelligence methods in a collaborative environment, and the objectives of this position paper are to bring our research ideas to the workshop and to share and discuss them.
Baun, Christian
2016-01-01
Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that they are mobile and can be easily transported by the users. This paper describes the construction of such a cluster, useful applications, and the performance of the single nodes. Furthermore, the cluster's performance and energy efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the system's total main memory utilized.
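For readers reproducing this kind of benchmark, a common rule of thumb (our illustration, not the paper's procedure) is to size the HPL problem so that the N x N double-precision matrix fills a chosen fraction of the cluster's total main memory:

```python
import math

def hpl_problem_size(nodes: int, mem_per_node_gib: float, fill: float = 0.8) -> int:
    """Rough HPL problem size N: the N x N matrix of 8-byte doubles
    (8 * N**2 bytes) should fill about `fill` of total cluster RAM."""
    total_bytes = nodes * mem_per_node_gib * 2**30
    return int(math.sqrt(fill * total_bytes / 8))

# e.g. eight single-board computers with 1 GiB each (illustrative values)
print(hpl_problem_size(nodes=8, mem_per_node_gib=1.0))  # ~29000
```

Varying `fill` reproduces the "different proportion of main memory" dimension of the experiment described above.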
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Vinod
2017-05-05
High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten salt thermocline TES system under various concentrations and sizes of suspended particles. Our objective was to utilize sensible-heat storage that operates with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency, and estimating cost effectiveness for TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.
MOOSE: A parallel computational framework for coupled systems of nonlinear equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Gaston; Chris Newman; Glen Hansen
Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into 'Kernels,' allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
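As a reminder of the core JFNK idea referenced above, here is a minimal matrix-free Newton-Krylov sketch in Python with SciPy (illustrative only; MOOSE itself is a C++ framework and its Kernel modules are not shown). The Jacobian is never assembled: its action on a vector is approximated by a finite difference of the residual.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-8, max_newton=50, eps=1e-7):
    """Solve F(u) = 0 without forming the Jacobian matrix."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian action: J(u) @ v ~ (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r)  # inner Krylov solve
        u = u + du
    return u

# Example: a small fully coupled nonlinear system (solution: u = [1, 2])
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
print(jfnk(F, np.array([1.0, 1.0])))
```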
The Design and Development of a Web-Interface for the Software Engineering Automation System
2001-09-01
Subject terms: computer-aided prototyping, real-time systems, Java. Developing the entire system only to find it does not meet the customer's needs is a tremendous waste of time. Software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems [2].
NASA Astrophysics Data System (ADS)
Kun, Luis G.
1994-12-01
On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.
Paskevich, Valerie F.
1992-01-01
The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the increased power and reduced cost of workstations, and the need to process side-scan data in the field, the Branch identified a need for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.
The Mesa Arizona Pupil Tracking System
NASA Technical Reports Server (NTRS)
Wright, D. L.
1973-01-01
A computer-based Pupil Tracking/Teacher Monitoring System was designed for Mesa Public Schools, Mesa, Arizona. The established objectives of the system were to: (1) facilitate the economical collection and storage of student performance data necessary to objectively evaluate the relative effectiveness of teachers, instructional methods, materials, and applied concepts; and (2) identify, on a daily basis, those students requiring special attention in specific subject areas. The system encompasses computer hardware/software and integrated curricula progression/administration devices. It provides daily evaluation and monitoring of performance as students progress at class or individualized rates. In the process, it notifies the student and collects information necessary to validate or invalidate subject presentation devices, methods, materials, and measurement devices in terms of direct benefit to the students. The system utilizes a small-scale computer (e.g., IBM 1130) to assure low-cost replicability, and may be used for many subjects of instruction.
Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures
NASA Technical Reports Server (NTRS)
Chang, C. S.
1975-01-01
The feasibility of utilizing the random decrement method, in conjunction with a signature analysis procedure, to determine the dynamic characteristics of an aeroelastic system for on-line prediction of the potential onset of flutter was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method. A special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical 'experiments' were conducted on the combined processes. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.
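The random decrement idea itself is simple to sketch (our illustration under basic assumptions, not the study's programs): ensemble-averaging response segments that start at up-crossings of a trigger level cancels the random forced component, so the signature approximates the free-decay response from which frequency and damping are then fit.

```python
import numpy as np

def randomdec_signature(y: np.ndarray, trigger: float, seg_len: int) -> np.ndarray:
    """Average all segments of length seg_len starting where y up-crosses trigger."""
    starts = np.where((y[:-1] < trigger) & (y[1:] >= trigger))[0] + 1
    starts = starts[starts + seg_len <= y.size]
    if starts.size == 0:
        raise ValueError("no trigger crossings found")
    return np.mean([y[s:s + seg_len] for s in starts], axis=0)

# Example: a noisy, lightly damped 5 Hz response sampled at 200 Hz
fs = 200.0
t = np.arange(0, 60, 1 / fs)
y = np.sin(2 * np.pi * 5 * t) * np.exp(-0.05 * t) + 0.3 * np.random.randn(t.size)
sig = randomdec_signature(y, trigger=y.std(), seg_len=int(2 * fs))
```

A curve fit of `sig` to a decaying sinusoid then yields the frequency and damping estimates, analogous to the special curve-fit approach mentioned above.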
Computer programs for calculating potential flow in propulsion system inlets
NASA Technical Reports Server (NTRS)
Stockman, N. O.; Button, S. L.
1973-01-01
In the course of designing inlets, particularly for VTOL and STOL propulsion systems, a calculational procedure utilizing three computer programs evolved. The chief program is the Douglas axisymmetric potential flow program, called EOD, which calculates the incompressible potential flow about arbitrary axisymmetric bodies. The other two programs, original with Lewis, are called SCIRCL and COMBYN. Program SCIRCL generates input for EOD from various specified analytic shapes for the inlet components. Program COMBYN takes basic solutions output by EOD, combines them into solutions of interest, and applies a compressibility correction.
Bria, W F
1993-11-01
We have discussed several important transitions now occurring in PCIS that promise to improve the utility and availability of these systems for the average physician. Charles Babbage developed the first computers as "thinking machines" so that we may extend our ability to grapple with more and more complex problems. If current trends continue, we will finally witness the evolution of patient care computing from information icons of the few to clinical instruments improving the quality of medical decision making and care for all patients.
Spectrum orbit utilization program technical manual SOUP5 Version 3.8
NASA Technical Reports Server (NTRS)
Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.
1984-01-01
The underlying engineering and mathematical models, as well as the computational methods used by the SOUP5 analysis programs, which are part of the R2BCSAT-83 Broadcast Satellite Computational System, are described. Included are the algorithms used to calculate the technical parameters and references to the relevant technical literature. The system provides the following capabilities: requirements file maintenance, data base maintenance, elliptical satellite beam fitting to service areas, plan synthesis from specified requirements, plan analysis, and report generation/query. Each of these functions is briefly described.
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) a realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package, which includes one- and two-dimensional discrete ordinate transport, point kernel, and single-scatter techniques, as well as cross-section preparation and data processing codes; (2) techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system; and (3) the MSFC master data libraries were updated.
National remote computational flight research facility
NASA Technical Reports Server (NTRS)
Rediess, Herman A.
1989-01-01
The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.
Magnet measurement interfacing to the G-64 Euro standard bus and testing G-64 modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogrefe, R.L.
1995-07-01
The Magnet Measurement system utilizes various modules with a G-64 Euro (Gespac) Standard Interface. All modules are designed to be software controlled, normally under the constraints of the OS-9 operating system with all data transfers to a host computer accomplished by a serial link.
System Architectural Concepts: Army Battlefield Command and Control Information Utility (CCIU).
1982-07-25
produce (device-type), the computers they may interface with (required-host), and the identification numbers of the devices (device-number). [Glossary fragments recovered from the extraction:] Kernel: a layer of the PEOS; implements the basic system primitives. LUS: Local Name Space.
Above the cloud computing: applying cloud computing principles to create an orbital services model
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.
2013-05-01
Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision-making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results, based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed. These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft). Two prospective uses for the proposed system are presented: Earth-orbiting applications and planetary science applications. A mission scenario is presented for both uses to illustrate system functionality and operation. The performance of the proposed system is compared to traditional self-contained spacecraft performance, both in terms of task performance (e.g., how well or quickly a given task was performed) and task performance as a function of cost. The integration of the proposed service provider model is compared to other control architectures for satellites, including traditional scripted control, top-down multi-tier autonomy and bottom-up multi-tier autonomy.
Usefulness of hemocytometer as a counting chamber in a computer assisted sperm analyzer (CASA)
Eljarah, A.; Chandler, J.; Jenkins, J.A.; Chenevert, J.; Alcanal, A.
2013-01-01
Several methods are used to determine sperm cell concentration, such as the haemocytometer, spectrophotometer, electronic cell counter and computer-assisted semen analysers (CASA). The utility of CASA systems has been limited due to the lack of characterization of individual systems and the absence of standardization among laboratories. The aims of this study were to: 1) validate and establish setup conditions for the CASA system utilizing the haemocytometer as a counting chamber, and 2) compare the different methods used for the determination of sperm cell concentration in bull semen. Two ejaculates were collected and the sperm cell concentration was determined using the spectrophotometer and the haemocytometer. For the Hamilton-Thorn method, the haemocytometer was used as a counting chamber. Sperm concentration was determined three times per ejaculate sample. No difference (P > 0.05) was detected between the methods, or between the haemocytometer count and the spectrophotometer. Based on the results of this study, we concluded that the haemocytometer can be used in computerized semen analysis systems as a substitute for the commercially available disposable counting chambers, thereby avoiding disadvantageously high costs and slower procedures.
A scalable quantum computer with ions in an array of microtraps
Cirac; Zoller
2000-04-06
Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).
Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André
2010-01-01
Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.
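A schematic of the distributed Jacobian idea described above, sketched with mpi4py (our illustrative decomposition; `sensitivity_row` is a hypothetical stand-in for the forward-model derivative, not the authors' code):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_data, n_model = 1000, 400          # illustrative problem dimensions
my_rows = range(rank, n_data, size)  # cyclic distribution of Jacobian rows

def sensitivity_row(i: int) -> np.ndarray:
    """Placeholder for the derivative of datum i w.r.t. the model."""
    return np.zeros(n_model)

# Each rank computes and stores only its own block of the Jacobian
J_local = np.array([sensitivity_row(i) for i in my_rows])

def JT_times(v_local: np.ndarray) -> np.ndarray:
    """J^T v for the inverse solver: local product plus one global reduction."""
    return comm.allreduce(J_local.T @ v_local)
```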
Utilities bullish on meter-reading technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garner, W.L.
1995-01-15
By the end of 1996, the 400,000 customers of Kansas City Power & Light Company (KCPL) will have their electric meters read by a real-time wireless network that will relay electrical consumption readings back to computers at the utility's customer service office. KCPL's executives believe the new radio and cellular network will greatly improve the company's ability to control its power distribution, manage its load requirements, monitor outages, and in the near future, allow time-of-use and off-peak pricing. The KCPL system represents the first systemwide, commercial application of wireless automated meter reading (AMR) by a U.S. utility. The article also describes other AMR systems for reading water and gas meters, and notes that $18 billion in future power plant investments can be avoided by using time-of-use pricing for residential customers.
Rosenfeld, Alan L; Mandelaris, George A; Tardieu, Philippe B
2006-08-01
The purpose of this paper is to expand on part 1 of this series (published in the previous issue) regarding the emerging future of computer-guided implant dentistry. This article will introduce the concept of rapid-prototype medical modeling as well as describe the utilization and fabrication of computer-generated surgical drilling guides used during implant surgery. The placement of dental implants has traditionally been an intuitive process, whereby the surgeon relies on mental navigation to achieve optimal implant positioning. Through rapid-prototype medical modeling and the stereolithographic process, surgical drilling guides (eg, SurgiGuide) can be created. These guides are generated from a surgical implant plan created with a computer software system that incorporates all relevant prosthetic information from which the surgical plan is developed. The utilization of computer-generated planning and stereolithographically generated surgical drilling guides embraces the concept of collaborative accountability and supersedes traditional mental navigation on all levels of implant therapy.
Protocols Utilizing Constant pH Molecular Dynamics to Compute pH-Dependent Binding Free Energies
2015-01-01
In protein–ligand binding, the electrostatic environments of the two binding partners may vary significantly in bound and unbound states, which may lead to protonation changes upon binding. In cases where ligand binding results in a net uptake or release of protons, the free energy of binding is pH-dependent. Nevertheless, conventional free energy calculations and molecular docking protocols typically do not rigorously account for changes in protonation that may occur upon ligand binding. To address these shortcomings, we present a simple methodology based on Wyman’s binding polynomial formalism to account for the pH dependence of binding free energies and demonstrate its use on cucurbit[7]uril (CB[7]) host–guest systems. Using constant pH molecular dynamics and a reference binding free energy that is taken either from experiment or from thermodynamic integration computations, the pH-dependent binding free energy is determined. This computational protocol accurately captures the large pKa shifts observed experimentally upon CB[7]:guest association and reproduces experimental binding free energies at different levels of pH. We show that incorrect assignment of fixed protonation states in free energy computations can give errors of >2 kcal/mol in these host–guest systems. Use of the methods presented here avoids such errors, thus suggesting their utility in computing proton-linked binding free energies for protein–ligand complexes. PMID:25134690
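For a single titratable site, the Wyman binding-polynomial correction has a simple closed form; the sketch below (our single-site illustration, not the paper's constant pH MD protocol) shows how a pKa shift between free and bound states makes the binding free energy pH-dependent.

```python
import math

R_KCAL = 1.987204259e-3  # gas constant in kcal/(mol*K)

def dg_bind_ph(dg_ref, pka_free, pka_bound, ph, temp_k=298.15):
    """pH-dependent binding free energy (kcal/mol) for one titratable site.
    dg_ref is the binding free energy of the deprotonated reference states."""
    p_free = 1.0 + 10.0 ** (pka_free - ph)    # binding polynomial, unbound guest
    p_bound = 1.0 + 10.0 ** (pka_bound - ph)  # binding polynomial, complex
    return dg_ref - R_KCAL * temp_k * math.log(p_bound / p_free)

# A pKa upshift on binding (net proton uptake) makes binding increasingly
# favorable at low pH, saturating at -RT*ln(10)*(pka_bound - pka_free).
for ph in (3.0, 5.0, 7.0):
    print(ph, round(dg_bind_ph(-10.0, pka_free=4.0, pka_bound=7.0, ph=ph), 2))
```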
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of ethical computing, starting from the assumption that in the near future energy costs will be related to environmental pollution).
Application of Soft Computing in Coherent Communications Phase Synchronization
NASA Technical Reports Server (NTRS)
Drake, Jeffrey T.; Prasad, Nadipuram R.
2000-01-01
The use of soft computing techniques in coherent communications phase synchronization provides an alternative to analytical or hard computing methods. This paper discusses a novel use of Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for phase synchronization in coherent communications systems utilizing Multiple Phase Shift Keying (M-PSK) modulation. A brief overview of the M-PSK digital communications bandpass modulation technique is presented, and its requisite need for phase synchronization is discussed. We briefly describe the hybrid platform developed by Jang that incorporates fuzzy/neural structures, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS). We then discuss the application of ANFIS to phase estimation for M-PSK. The modeling of both explicit and implicit phase estimation schemes for M-PSK symbols with unknown structure is discussed. Performance results from simulation of the above scheme are presented.
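For context, a conventional hard-computing baseline for this problem is the non-data-aided Mth-power estimator, which strips M-PSK modulation by raising received symbols to the Mth power (a standard textbook method, not the paper's ANFIS scheme; the estimate is ambiguous modulo 2*pi/M):

```python
import numpy as np

def mth_power_phase_estimate(r: np.ndarray, M: int) -> float:
    """Carrier phase estimate for M-PSK; valid modulo 2*pi/M."""
    return np.angle(np.sum(r ** M)) / M

# Example: QPSK (M=4) with an unknown 0.3 rad carrier phase offset
rng = np.random.default_rng(0)
symbols = np.exp(1j * 2 * np.pi * rng.integers(0, 4, 500) / 4)
noise = 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
r = symbols * np.exp(1j * 0.3) + noise
print(mth_power_phase_estimate(r, M=4))  # close to 0.3
```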
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.
Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing
NASA Astrophysics Data System (ADS)
Shi, X.
2017-10-01
Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation is similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.
NASA Astrophysics Data System (ADS)
Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv
2018-02-01
New concepts and techniques are replacing traditional methods of water quality parameter measurement. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework utilizes Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand, and convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making for water engineers through a soft computing framework. The target of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.
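A minimal sketch of the kind of fuzzy decision rule such a framework might apply (membership shapes, parameter names and thresholds are our illustrative assumptions, not the authors' calibrated system):

```python
def ramp_up(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return min(max((x - a) / (b - a), 0.0), 1.0)

def ramp_down(x, a, b):
    """Membership falling linearly from 1 at a to 0 at b."""
    return 1.0 - ramp_up(x, a, b)

def contamination_risk(ph, turbidity_ntu, chlorine_mgl):
    """Aggregate simple rules into a 0..1 contamination risk score."""
    ph_bad = max(ramp_down(ph, 6.0, 6.5), ramp_up(ph, 8.5, 9.0))
    turbidity_high = ramp_up(turbidity_ntu, 1.0, 5.0)
    chlorine_low = ramp_down(chlorine_mgl, 0.1, 0.2)
    return max(ph_bad, turbidity_high, chlorine_low)  # max-aggregation of rules

print(contamination_risk(ph=7.2, turbidity_ntu=0.5, chlorine_mgl=0.3))   # ~0.0
print(contamination_risk(ph=6.2, turbidity_ntu=4.0, chlorine_mgl=0.05))  # 1.0
```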
Two-spectral Yang-Baxter operators in topological quantum computation
NASA Astrophysics Data System (ADS)
Sanchez, William F.
2011-05-01
One of the current trends in quantum computing is the application of algebraic topological methods in the design of new algorithms and quantum computers, giving rise to topological quantum computing. One of the tools used is the Yang-Baxter equation, whose solutions are interpreted as universal quantum gates. Lately, more general Yang-Baxter equations have been investigated, with progress on two-spectral equations and Yang-Baxter systems. This paper applies these new findings to the field of topological quantum computation: specifically, it proposes two-spectral Yang-Baxter operators as universal quantum gates for two-qubit and two-qutrit systems, obtaining 4x4 and 9x9 matrices respectively, and elaborates the corresponding Hamiltonian using the computer algebra software Mathematica® and its Qucalc package. In addition, possible physical systems to which the obtained Yang-Baxter operators can be applied are considered. The present work demonstrates the utility of the Yang-Baxter equation for generating universal quantum gates and the power of computer algebra for designing them; it is expected that these mathematical studies will contribute to the further development of quantum computers.
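For reference, the constant (braid-form) Yang-Baxter equation whose invertible solutions are read as universal gates, together with the spectral-parameter form that the two-spectral approach generalizes (a reference sketch; for qubits V = C^2, so R is a 4x4 matrix, and for qutrits a 9x9 matrix, matching the dimensions above):

```latex
% Braid-form constant Yang-Baxter equation for R : V \otimes V \to V \otimes V
(R \otimes I)(I \otimes R)(R \otimes I) = (I \otimes R)(R \otimes I)(I \otimes R)
% Spectral-parameter form acting on V \otimes V \otimes V
R_{12}(u,v)\,R_{13}(u,w)\,R_{23}(v,w) = R_{23}(v,w)\,R_{13}(u,w)\,R_{12}(u,v)
```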
NASA Astrophysics Data System (ADS)
Thomas, W. A.; McAnally, W. H., Jr.
1985-07-01
TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.
CAGI: Computer Aided Grid Interface. A work in progress
NASA Technical Reports Server (NTRS)
Soni, Bharat K.; Yu, Tzu-Yi; Vaughn, David
1992-01-01
Progress in the development of the Computer Aided Grid Interface (CAGI) software system is presented, covering the integration of CAD/CAM geometric system output and/or Initial Graphics Exchange Specification (IGES) files, geometry manipulations associated with grid generation, and robust grid generation methodologies. CAGI is being developed in a modular fashion and will offer a fast, efficient and economical response to geometry/grid preparation, allowing basic geometry to be upgraded step by step, interactively and under permanent visual control, while minimizing the differences between the actual hardware surface descriptions and the corresponding numerical analog. The computer code GENIE is used as a basis. The Non-Uniform Rational B-Splines (NURBS) representation of sculptured surfaces is utilized for surface grid redistribution. The computer-aided analysis system PATRAN is adapted as a CAD/CAM system. The progress realized in NURBS surface grid generation, the development of the IGES transformer, and geometry adaptation using PATRAN is presented, along with applicability to grid generation for rocket propulsion applications.
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
A survey of CPU-GPU heterogeneous computing techniques
Mittal, Sparsh; Vetter, Jeffrey S.
2015-07-04
As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths, and hence CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, which enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application levels. Further, we review both discrete and fused CPU-GPU systems, and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.
Specialty functions singularity mechanics problems
NASA Technical Reports Server (NTRS)
Sarigul, Nesrin
1989-01-01
The focus is on the development of more accurate and efficient advanced methods for the solution of singular problems encountered in mechanics. At present, finite element methods in conjunction with special functions, Boolean sum, and blending interpolations are being considered. In dealing with systems which contain a singularity, special finite elements are being formulated for use in singular regions. Further, special transition elements are being formulated to couple the special element to the mesh that models the rest of the system, and to be used in conjunction with 1-D, 2-D and 3-D elements within the same mesh. Computational simulation with a least squares fit is being utilized to construct special elements if there is an unknown singularity in the system. A novel approach is taken in the formulation of the elements in that: (1) the material properties are modified to include time-, temperature-, coordinate- and stress-dependent behavior within the element; (2) material properties vary at the nodal points of the elements; (3) a hidden-symbolic computation scheme is developed and utilized in formulating the elements; and (4) special functions and the Boolean sum are utilized to interpolate the field variables and their derivatives along the boundary of the elements. It may be noted that the proposed methods are also applicable to fluids and coupled problems.
Application of symbolic computations to the constitutive modeling of structural materials
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Tan, H. Q.; Dong, X.
1990-01-01
In applications involving elevated temperatures, the derivation of mathematical expressions (constitutive equations) describing the material behavior can be quite time consuming, involved, and error-prone. Intelligent application of symbolic systems to facilitate this tedious process can therefore be of significant benefit. Presented here is a problem-oriented, self-contained symbolic expert system, named SDICE, which is capable of efficiently deriving potential-based constitutive models in analytical form. This package, running under DOE MACSYMA, has the following features: (1) potential differentiation (chain rule); (2) tensor computations (utilizing index notation), including both algebra and calculus; (3) efficient solution of sparse systems of equations; (4) automatic expression substitution and simplification; (5) back substitution of invariant and tensorial relations; (6) the ability to form the Jacobian and Hessian matrices; and (7) a relational data base. Limited aspects of invariant theory were also incorporated into SDICE due to the utilization of potentials as a starting point and the desire for these potentials to be frame invariant (objective). The uniqueness of SDICE resides in its ability to manipulate expressions in a general yet pre-defined order and to simplify expressions so as to limit expression growth. Results are displayed, when applicable, utilizing index notation. SDICE was designed to aid and complement the human constitutive model developer. A number of examples illustrate the various features contained within SDICE. It is expected that this symbolic package can and will provide a significant incentive to the development of new constitutive theories.
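A small modern analogue of the potential-differentiation step (using SymPy purely as an illustration; SDICE itself ran under DOE MACSYMA and handled full tensorial index notation):

```python
import sympy as sp

eps, E, s0 = sp.symbols('varepsilon E sigma_0', positive=True)

# A toy scalar strain-energy potential W(eps); the constitutive law and
# its material Jacobian follow by symbolic differentiation (chain rule).
W = sp.Rational(1, 2) * E * eps**2 + s0 * eps**3 / 3
stress = sp.diff(W, eps)         # sigma = dW/d(eps)
tangent = sp.diff(stress, eps)   # d(sigma)/d(eps), a 'Jacobian' entry

print(sp.simplify(stress))   # E*varepsilon + sigma_0*varepsilon**2
print(sp.simplify(tangent))  # E + 2*sigma_0*varepsilon
```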
Military clouds: utilization of cloud computing systems at the battlefield
NASA Astrophysics Data System (ADS)
Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai
2012-05-01
Cloud computing is a novel information technology (IT) concept involving facilitated, rapid access to networks, servers, data storage media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies at the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means for decision makers and users to gain information superiority. These developments in information technologies lead to a new term, known as network centric capability. Like network centric capable systems, cloud computing systems are operational today. In the near future, extensive use of military clouds at the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It was concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness at the battlefield and facilitating the establishment of information superiority.
ERIC Educational Resources Information Center
Lubans, John, Jr.; And Others
Computer-based circulation systems, it is widely believed, can be utilized to provide data for library use studies. The study described in this report involves using such a data base to analyze aspects of library use and non-use and types of users. Another major objective of this research was the testing of machine-readable circulation data…
NASA Technical Reports Server (NTRS)
1980-01-01
Burns & McDonnell Engineering's environmental control study is assisted by programs in environmental analysis from NASA's Computer Software Management and Information Center (COSMIC). The company is engaged primarily in the design of such facilities as electrical utilities, industrial plants, wastewater treatment systems, dams and reservoirs, and aviation installations. The company also conducts environmental engineering analyses and advises clients on the environmental considerations of a particular construction project. The company makes use of many COSMIC computer programs, which have allowed substantial savings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, acquiring: very-large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL and connected to scalable storage via large-scale storage networking, assuring correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR HOUSED ADMINISTRATIVE OFFICES, THE CENTRAL COMPUTING, UTILITY SYSTEMS, ANALYTICAL LABORATORIES, AND MAINTENANCE SHOPS. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced, based on the so-called alternating direction method of multipliers, by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
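For reference, the generic scaled-form ADMM iteration the abstract builds on, for minimizing f(x) + g(z) subject to Ax + Bz = c with penalty parameter rho (the standard template, not the paper's specific utility/customer decomposition):

```latex
x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2
z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2
u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c
```

Splitting the objective so that each sub-problem involves only the utility's variables or one customer's PV setpoints is what permits the decentralized solution with limited information exchange.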
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
Transportation Analysis and Simulation System Requirements
DOT National Transportation Integrated Search
1973-04-01
This document provides: (a) a brief summary of overall project (PPA OS223) accomplishments during FY 72; and (b) a detailed summary of the following two major FY 72 activities: (1) analysis of TSC's computation resources and their utilization; (2) Pr...
In Silico Dynamics: computer simulation in a Virtual Embryo (SOT)
Abstract: Utilizing cell biological information to predict higher order biological processes is a significant challenge in predictive toxicology. This is especially true for highly dynamical systems such as the embryo where morphogenesis, growth and differentiation require preci...
Asset Management of Roadway Signs Through Advanced Technology
DOT National Transportation Integrated Search
2003-06-01
This research project aims to ease the process of Roadway Sign asset management. The project utilized handheld computer and global positioning system (GPS) technology to capture sign location data along with a timestamp. This data collection effort w...
A Recursive Method for Calculating Certain Partition Functions.
ERIC Educational Resources Information Center
Woodrum, Luther; And Others
1978-01-01
Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
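One standard recursion of this type (our reconstruction of the general technique; the paper's APL program is not reproduced here) treats N electrons over L levels with at most one electron per level: Z(N, L) = Z(N, L-1) + exp(-beta * e_L) * Z(N-1, L-1), with the average energy obtained from -d ln Z / d beta.

```python
import math
from functools import lru_cache

def partition_function(levels, n, beta):
    """Canonical Z for n electrons on single-occupancy levels (energies in `levels`)."""
    @lru_cache(maxsize=None)
    def Z(k, l):
        if k == 0:
            return 1.0  # no electrons left to place
        if k > l:
            return 0.0  # not enough levels remain
        return Z(k, l - 1) + math.exp(-beta * levels[l - 1]) * Z(k - 1, l - 1)
    return Z(n, len(levels))

def average_energy(levels, n, beta, h=1e-6):
    """<E> = -d ln Z / d beta, via a centered finite difference."""
    lnZ = lambda b: math.log(partition_function(levels, n, b))
    return -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

print(average_energy(levels=[0.0, 1.0, 2.0, 3.0], n=2, beta=1.0))
```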
Device and method for measuring multi-phase fluid flow in a conduit using an elbow flow meter
Ortiz, Marcos G.; Boucher, Timothy J.
1997-01-01
A system for measuring fluid flow in a conduit. The system utilizes pressure transducers disposed generally in line upstream and downstream of the flow of fluid in a bend in the conduit. Data from the pressure transducers are transmitted to a microprocessor or computer. The pressure differential measured by the pressure transducers is then used to calculate the fluid flow rate in the conduit. Control signals may then be generated by the microprocessor or computer to control flow, total fluid dispersed (in, for example, an irrigation system), area of dispersal, or other desired effect based on the fluid flow in the conduit.
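A minimal single-phase version of the underlying elbow-meter calculation (our sketch with an assumed calibrated loss coefficient K; the patented multi-phase device involves considerably more than this): the pressure drop across the bend scales with dynamic pressure, dP = K * rho * v**2 / 2, so velocity and flow rate follow from the measured differential.

```python
import math

def elbow_flow_rate(dp_pa: float, rho: float, k: float, diameter_m: float) -> float:
    """Volumetric flow rate (m^3/s) from an elbow pressure differential."""
    v = math.sqrt(2.0 * dp_pa / (k * rho))  # mean velocity, m/s
    area = math.pi * diameter_m**2 / 4.0    # pipe cross-section, m^2
    return area * v

# Water in a 50 mm pipe with a 2 kPa bend differential (illustrative values)
print(elbow_flow_rate(dp_pa=2000.0, rho=998.0, k=1.2, diameter_m=0.05))
```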
System and method for bidirectional flow and controlling fluid flow in a conduit
Ortiz, Marcos German
1999-01-01
A system for measuring bidirectional flow, including backflow, of fluid in a conduit. The system utilizes a structural mechanism to create a pressure differential in the conduit. Pressure sensors are positioned upstream from the mechanism, at the mechanism, and downstream from the mechanism. Data from the pressure sensors are transmitted to a microprocessor or computer, and pressure differential detected between the pressure sensors is then used to calculate the backflow. Control signals may then be generated by the microprocessor or computer to shut off valves located in the conduit, upon the occurrence of backflow, or to control flow, total material dispersed, etc. in the conduit.
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
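A hedged sketch of the trig-free geometric idea: if the attitude is held as a matrix of direction cosines, perspective projection needs only multiplications and a divide, and each runway edge can be handed to a line generator as a segment from its near-corner image to the vanishing point of the runway direction (values invented for illustration):

# Hedged sketch; camera frame has z pointing forward, names and numbers are
# illustrative, not from the patent.
import numpy as np

def project(point_cam, focal):
    """Pinhole projection of a camera-frame point; no trigonometric calls."""
    x, y, z = point_cam
    return np.array([focal * x / z, focal * y / z])

def vanishing_point(direction_cam, focal):
    """Image of the point at infinity along a camera-frame direction."""
    dx, dy, dz = direction_cam
    return np.array([focal * dx / dz, focal * dy / dz])

R = np.eye(3)                                  # world->camera direction cosines
focal = 1.0
near_left_w = np.array([-20.0, -3.0, 50.0])    # runway corners, world frame
near_right_w = np.array([20.0, -3.0, 50.0])
runway_dir_w = np.array([0.0, 0.0, 1.0])       # longitudinal direction

vp = vanishing_point(R @ runway_dir_w, focal)
for corner in (near_left_w, near_right_w):
    start = project(R @ corner, focal)
    # A hardware line generator would draw from `start` to `vp`; both
    # longitudinal edges converge at the same designated vanishing point.
    print(f"edge from {start} to vanishing point {vp}")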
Biophysical constraints on the computational capacity of biochemical signaling networks
NASA Astrophysics Data System (ADS)
Wang, Ching-Hao; Mehta, Pankaj
Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
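For reference, the perceptron bound cited is Cover's function-counting theorem: the number of dichotomies of n points in general position in R^d realizable by a linear threshold unit is

\[
  C(n, d) \;=\; 2 \sum_{k=0}^{d-1} \binom{n-1}{k},
\]

so such a unit realizes essentially all dichotomies only up to about n = 2d points, its storage capacity.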
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RP array is a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. Together, the PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
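As a software analogue of the pixel-parallel SIMD style described (not the sensor's actual instruction set), whole-array operations can stand in for the PE array; here a 3x3 box filter is built purely from shifted adds:

# Illustrative only: every pixel "executes" the same operation at once,
# mimicking the PE array's SIMD fashion with NumPy whole-array arithmetic.
import numpy as np

def box_filter_3x3(img):
    """Sum each pixel's 3x3 neighborhood via shifted adds, then scale."""
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    acc = np.zeros_like(img, dtype=np.float32)
    for dy in (-1, 0, 1):          # nine shifted copies, one 'instruction' each
        for dx in (-1, 0, 1):
            acc += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return acc / 9.0

frame = np.random.randint(0, 256, size=(8, 8))
print(box_filter_3x3(frame))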
Utilizing Internet Technologies in Observatory Control Systems
NASA Astrophysics Data System (ADS)
Cording, Dean
2002-12-01
The 'Internet boom' of the past few years has spurred the development of a number of technologies to provide services such as secure communications, reliable messaging, information publishing and application distribution for commercial applications. Over the same period, a new generation of computer languages has also been developed to provide object oriented design and development, improved reliability, and cross platform compatibility. Whilst the business models of the 'dot.com' era proved to be largely unviable, the technologies that they were based upon have survived and have matured to the point where they can now be utilized to build secure, robust and complete observatory control systems. This paper will describe how Electro Optic Systems has utilized these technologies in the development of its third generation Robotic Observatory Control System (ROCS). ROCS provides an extremely flexible configuration capability within a control system structure to provide truly autonomous robotic observatory operation, including observation scheduling. ROCS was built using Internet technologies such as Java, Java Message Service (JMS), Lightweight Directory Access Protocol (LDAP), Secure Sockets Layer (SSL), eXtensible Markup Language (XML), Hypertext Transport Protocol (HTTP) and Java WebStart. ROCS was designed to be capable of controlling all aspects of an observatory and to be reconfigurable to handle changing equipment configurations or user requirements without the need for an expert computer programmer. ROCS consists of many small components, each designed to perform a specific task, with the configuration of the system specified using a simple meta language. The use of small components facilitates testing and makes it possible to prove that the system is correct.
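A loose sketch of the configuration idea, in Python rather than the Java the system actually uses; the component names and config syntax are invented, since the abstract does not show the ROCS meta language:

# Hedged sketch: small single-task components are registered by name and
# wired together from a declarative description instead of code changes.
class Component:
    registry = {}
    def __init_subclass__(cls):
        Component.registry[cls.__name__] = cls

class DomeController(Component):
    def __init__(self, **params): self.params = params

class MountController(Component):
    def __init__(self, **params): self.params = params

config = [                      # invented stand-in for a meta-language file
    {"type": "DomeController", "params": {"port": "/dev/ttyS0"}},
    {"type": "MountController", "params": {"host": "10.0.0.5"}},
]

system = [Component.registry[c["type"]](**c["params"]) for c in config]
print([type(c).__name__ for c in system])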
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
The advances in digital signal processing have been progressing towards efficient use of memory and processing. Both of these factors can be addressed by feasible techniques of image storage that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
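A minimal sketch of SIFT-based storage and matching as described, assuming OpenCV 4.4+ (where SIFT sits in the main module) and invented file names:

# Hedged sketch: extract keypoint descriptors once, store them compactly,
# then recognize an image by comparing fresh features against the saved set.
import cv2

sift = cv2.SIFT_create()

def describe(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

_, saved = describe("reference.png")      # compact stored representation
_, query = describe("query.png")

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(query, saved, k=2)
# Lowe's ratio test keeps only distinctive matches
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} confident keypoint matches")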
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balanced load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy-five percent.
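A hedged sketch of the iterative merge: many small process sets are reduced to the number of physical processors while a partition threshold caps memory per merged unit. The greedy lightest-pair rule is illustrative; the patent's exact criterion is not reproduced in the abstract:

# Hedged sketch of up-front load balancing by merging process sets.
import heapq

def merge_process_sets(sets, n_processors, memory_threshold):
    """sets: list of (load, memory) tuples, one per initial process set."""
    heap = list(sets)
    heapq.heapify(heap)                     # smallest load first
    while len(heap) > n_processors:
        load_a, mem_a = heapq.heappop(heap) # merge the two lightest sets
        load_b, mem_b = heapq.heappop(heap)
        if mem_a + mem_b > memory_threshold:
            raise RuntimeError("partition threshold exceeded; re-partition")
        heapq.heappush(heap, (load_a + load_b, mem_a + mem_b))
    return sorted(heap, reverse=True)

units = merge_process_sets([(3, 10), (1, 4), (2, 6), (2, 5), (4, 12), (1, 3)],
                           n_processors=3, memory_threshold=25)
print(units)   # one (load, memory) pair per physical processing unit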
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the utilization of computer assisted decision making for the task of sequencing and scheduling aircraft in a high density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented where six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served with respect to arrival at the runway. The implementation of computer assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
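A brute-force sketch of Constrained Position Shifting for a handful of aircraft; the separation values are invented stand-ins for the weight-class, route and velocity-profile dependent gaps the simulation models:

# Hedged sketch: from the FCFS order (index order of eta), consider only
# sequences where no aircraft shifts more than max_shift positions, and
# pick the one that finishes earliest. Separation values are illustrative.
from itertools import permutations

def cps(eta, sep, max_shift=4):
    """eta: FCFS arrival estimates (sorted); sep[i][j]: gap needed for j behind i."""
    n = len(eta)
    best_seq, best_time = None, float("inf")
    for seq in permutations(range(n)):
        if any(abs(pos - plane) > max_shift for pos, plane in enumerate(seq)):
            continue                      # violates the position-shift limit
        t = eta[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            t = max(eta[cur], t + sep[prev][cur])
        if t < best_time:
            best_seq, best_time = seq, t
    return best_seq, best_time

# Two weight classes: 90 s behind a heavy (indices 0-2), 60 s behind a light
sep = [[90 if leader < 3 else 60 for _ in range(6)] for leader in range(6)]
print(cps(eta=[0, 30, 45, 50, 80, 90], sep=sep))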
NASA Astrophysics Data System (ADS)
Ahn, Sul-Ah; Jung, Youngim
2016-10-01
The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
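A minimal sketch of the co-authorship network construction, using the networkx library and invented author lists:

# Hedged sketch: each paper contributes an edge between every pair of its
# authors, weighted by how often the pair co-publishes.
from itertools import combinations
import networkx as nx

papers = [                      # illustrative author lists, not real data
    ["Kim J", "Lee S", "Park H"],
    ["Kim J", "Park H"],
    ["Lee S", "Chen W"],
]

G = nx.Graph()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1      # repeated collaboration
        else:
            G.add_edge(a, b, weight=1)

# Rank authors by paper count, then inspect a simple network feature
papers_per_author = {}
for authors in papers:
    for a in authors:
        papers_per_author[a] = papers_per_author.get(a, 0) + 1
print(sorted(papers_per_author.items(), key=lambda kv: -kv[1]))
print(nx.degree_centrality(G))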
Shinbane, Jerold S; Saxon, Leslie A
Advances in imaging technology have led to a paradigm shift from planning of cardiovascular procedures and surgeries requiring the actual patient in a "brick and mortar" hospital to utilization of the digitalized patient in the virtual hospital. The cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) digitalized 3-D representation of individual patient anatomy and physiology serves as an avatar, allowing for virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology previously only accessible during the actual procedure could potentially limit the intrinsic risks related to time in the operating room, cardiac procedural laboratory and overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of these technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of the predicted therapeutic result, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for the creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are "archivable", transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overheads, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
Study of Fluid Experiment System (FES)/CAST/Holographic Ground System (HGS)
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Cummings, Rick; Jones, Brian
1992-01-01
Holographic and schlieren optical techniques for studying concentration gradients in solidification processes have been used by several investigators over the years. The HGS facility at MSFC has been a primary resource in researching this capability. Consequently, scientific personnel have been able to utilize these techniques in both ground-based research and in space experiments. An important event in the scientific utilization of the HGS facilities was the TGS crystal growth and the casting and solidification technology (CAST) experiments that were flown on the International Microgravity Laboratory (IML) mission in March of this year. The preparation and processing of these space observations are the primary experiments reported in this work. This project provides some ground-based studies to optimize the holographic techniques used to acquire information about the crystal growth processes flown on IML. Since the ground-based studies will be compared with the space-based experimental results, it is necessary to conduct sufficient ground-based studies to best determine how the experiment worked in space. The current capabilities of computer-based systems for image processing and numerical computation have certainly assisted in those efforts. As anticipated, this study has shown that these advanced computing capabilities are helpful in the data analysis of such experiments.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
Progress made in reducing MAGSAT data and displaying magnetic field perturbations caused primarily by external currents is reported. A periodic and repeatable perturbation pattern is described that arises from external current effects but appears as unique signatures associated with upper middle latitudes on the Earth's surface. Initial testing of the modeling procedure that was developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is also discussed. The modeling technique utilizes a linear current element representation of the large scale space current system.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
Efforts in support of the development of a model of the magnetic fields due to ionospheric and magnetospheric electrical currents are discussed. Specifically, progress made in reading MAGSAT tapes and plotting the deviation of the measured magnetic field components with respect to a spherical harmonic model of the main geomagnetic field is reported. Initial tests of the modeling procedure developed to compute the ionosphere/magnetosphere-induced fields at satellite orbit are also described. The modeling technique utilizes a linear current element representation of the large scale current system.
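Both reports rest on the same building block: summing Biot-Savart fields of finite straight ("linear") current elements. A sketch with invented geometry (the vector form below reduces to the familiar μ0·I/2πd infinite-wire limit):

# Hedged sketch of a linear-current-element field evaluation; segment
# geometry and current are illustrative, not from the reports.
import numpy as np

MU0_OVER_4PI = 1e-7   # T*m/A

def segment_field(p, a, b, current):
    """Biot-Savart field at point p from a straight segment a->b carrying current."""
    r1, r2, L = p - a, p - b, b - a
    cross = np.cross(L, r1)
    d2 = float(np.dot(cross, cross))
    if d2 == 0.0:
        return np.zeros(3)        # p lies on the segment's axis
    Lhat = L / np.linalg.norm(L)
    geom = (np.dot(Lhat, r1) / np.linalg.norm(r1)
            - np.dot(Lhat, r2) / np.linalg.norm(r2))
    return MU0_OVER_4PI * current * geom * np.linalg.norm(L) * cross / d2

# Field 200 km from a 1000-km, 100-kA line element (units: metres, amperes)
B = segment_field(p=np.array([0.0, 2.0e5, 0.0]),
                  a=np.array([-5.0e5, 0.0, 0.0]),
                  b=np.array([5.0e5, 0.0, 0.0]),
                  current=1.0e5)
print(B)   # field vector in tesla; a full model sums many such elements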
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
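A toy sketch of the modular pattern described: a general-purpose optimizer drives a separate analysis routine through a thin interface. Here scipy stands in for the optimization program and a two-bar stress model for the production structural analysis; all numbers are invented:

# Hedged sketch of optimizer / analysis / interface modularity.
import numpy as np
from scipy.optimize import minimize

LOAD_N = 1.0e5
ALLOWABLE_STRESS = 2.0e8          # Pa
LENGTHS = np.array([1.0, 1.4])    # m
DENSITY = 7800.0                  # kg/m^3

def analysis(areas):
    """Stand-in 'structural analysis program': member stresses under load."""
    forces = np.array([0.6 * LOAD_N, 0.8 * LOAD_N])   # fixed load paths
    return forces / areas

def weight(areas):                # objective supplied by the interface program
    return DENSITY * np.dot(LENGTHS, areas)

cons = {"type": "ineq",           # allowable minus actual stress must be >= 0
        "fun": lambda a: ALLOWABLE_STRESS - analysis(a)}
res = minimize(weight, x0=np.array([1e-3, 1e-3]), method="SLSQP",
               bounds=[(1e-6, None)] * 2, constraints=cons)
print(res.x, weight(res.x))       # minimum-weight areas with stress binding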
Shared-resource computing for small research labs.
Ackerman, M J
1982-04-01
A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.
Manual of phosphoric acid fuel cell power plant optimization model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code COMPUTE was used to solve this model; the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
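A minimal textbook version of the Hooke and Jeeves pattern search named in the abstract (in practice the mixed penalty function would fold the constraints into the objective before this search is applied):

# Hedged sketch: exploratory moves around a base point, then a pattern move
# along the improving direction; the mesh shrinks when no move improves f.
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    def explore(base, s):
        x = base.copy()
        for i in range(len(x)):
            for delta in (s, -s):           # try +/- along each coordinate
                trial = x.copy(); trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = np.asarray(x0, dtype=float)
    while step > tol and max_iter > 0:
        max_iter -= 1
        new = explore(base, step)
        if f(new) < f(base):
            # pattern move: leap along the successful direction, then re-explore
            pattern = explore(new + (new - base), step)
            base = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink                  # no improvement: refine the mesh
    return base

# Example: minimize a smooth 2-D bowl
print(hooke_jeeves(lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2,
                   x0=[0.0, 0.0]))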
Integrated command, control, communications and computation system functional architecture
NASA Technical Reports Server (NTRS)
Cooley, C. G.; Gilbert, L. E.
1981-01-01
The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequalities (LMIs) is constructed for the computation of improved subsystem parameter matrices. Several computationally effective approaches are derived that efficiently utilize the block-diagonal characteristic of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and to obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive for the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
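As a hedged illustration of the kind of LMI computation involved (not the paper's estimator synthesis), a Lyapunov feasibility problem posed with the cvxpy library:

# Hedged sketch: find P > 0 with A^T P + P A < 0, the prototypical LMI.
# The matrix A is illustrative; the paper's LMIs are more elaborate and
# exploit block-diagonal structure and connection-matrix sparsity.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # decay condition
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, P.value)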
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin
The state of the art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into computerized systems used by policy analysts to yield estimates of economic impacts of various types of transportation system failures due to natural hazards, human related attacks or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin Hypercube sampling procedure. Statistical analysis is then applied to the “synthetic data” results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focus on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
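A hedged sketch of the reduced-form procedure: Latin Hypercube samples of the key parameters, a stand-in for the full CGE run, and an ordinary least squares fit of the linear reduced form (quantile regression, e.g. statsmodels QuantReg, would be fit to the same synthetic data):

# Hedged sketch; the cge_model function and parameter ranges are invented
# stand-ins for the actual CGE simulations described in the paper.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=100)                       # 100 runs, 3 parameters
# Scale to ranges: shock magnitude (0-1), duration (1-30 days), resilience (0-1)
X = qmc.scale(unit, l_bounds=[0.0, 1.0, 0.0], u_bounds=[1.0, 30.0, 1.0])

def cge_model(shock, duration, resilience):
    """Stand-in for a full CGE run returning an economic loss (illustrative)."""
    return 100.0 * shock * duration * (1.0 - 0.8 * resilience)

y = np.array([cge_model(*row) for row in X])       # the "synthetic data"
design = np.column_stack([np.ones(len(y)), X])     # intercept + parameters
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(coef)    # coefficients of the linear reduced-form equation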